SQL as fast as NoSQL, Bulk Loads, Covering and Partial Indexes

Franck Pachot - Sep 6 - Dev Community

The two query APIs in YugabyteDB are YSQL, which is compatible with PostgreSQL, and YCQL, which is compatible with Cassandra. It's important to note that the choice between these two APIs does not imply any specific SQL vs. NoSQL behavior, as both enforce ACID properties.

  • YCQL can be used with Cassandra applications and drivers, providing better performance, consistency, and features than Cassandra.
  • YSQL can be used for relational applications and also supports storing documents (PostgreSQL JSON and JSONB) and text search (with the pg_trgm extension), using Generalized Inverted Index (GIN) indexes to accelerate searches on these semi-structured formats (a short example follows this list).

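As a minimal sketch of that second point, a JSONB column and a text column can be indexed with GIN indexes, which YSQL exposes through the ybgin access method (the events table, its columns, and the queries below are hypothetical):

create extension if not exists pg_trgm;

create table events (
  id      bigint generated always as identity,
  payload jsonb,
  note    text,
  primary key (id hash)
);

-- GIN index to accelerate containment queries on the JSONB payload
create index on events using ybgin (payload);

-- GIN index with trigram operators for pattern searches on the text column
create index on events using ybgin (note gin_trgm_ops);

-- queries of this shape can be served by the GIN indexes
-- (subject to the current ybgin limitations)
select id from events where payload @> '{"device": "sensor-42"}';
select id from events where note like '%overheat%';
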
The distributed storage system uses a key-value representation internally for reads, writes, and transaction control. This does not make it a NoSQL database, however: a query layer with a planner and a catalog is built on top of it to provide SQL and ACID. It is still possible to get the raw performance of this distributed storage for special operations that do not require all ACID properties.

Tuning the ACID properties in SQL

In some situations, not all ACID (Atomicity, Consistency, Isolation, Durability) properties are required. For instance, during an initial bulk load, a migration from another database, or IoT data ingestion, you might accept that rows become visible to other sessions while they are being written. You may also not need to enforce integrity constraints if the data was already validated in the source system, and you may not need the ability to roll back if it is easier to truncate and reload the data.

Here are some possibilities, combined into a session-level sketch after the list (⚠️ they disable some ACID properties, so don't use them without understanding what they do in your use case):

  • set yb_disable_transactional_writes to on to auto-commit each write operation and bypass provisional records. This affects Atomicity and Isolation.
  • set yb_enable_upsert_mode to on and session_replication_role to 'replica' to skip unique and foreign key checks. The first provides UPSERT behavior for tables with a primary key and no secondary indexes. The second, inherited from PostgreSQL, disables triggers and referential integrity checks when the loaded data is considered already validated in the source system. They affect Consistency.
  • Use Replication Factor 1 during the load to avoid waiting for Raft consensus, and switch to Replication Factor 3 afterwards. You can use tablespace placement blocks for this: load while the table and indexes are in an RF1 tablespace, then set the RF3 tablespace when done. Automatic re-balancing creates the replicas in the background. This affects Durability until everything is rebalanced to RF3.
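
As an illustration, here is a hedged sketch of a load session that combines the first two settings (the emp table and the file path are hypothetical); remember to restore the defaults once the load is finished:

-- relax write guarantees for this session only (⚠️ not for regular OLTP work)
set yb_disable_transactional_writes = on;  -- auto-commit each write, bypass provisional records
set yb_enable_upsert_mode = on;            -- blind writes for a table with a primary key and no secondary indexes
set session_replication_role = 'replica';  -- skip triggers and foreign key checks (PostgreSQL setting)

copy emp from '/path/to/emp.csv' with (format csv, header);  -- hypothetical bulk load

-- restore the defaults for normal transactional work
set yb_disable_transactional_writes = off;
set yb_enable_upsert_mode = off;
set session_replication_role = 'origin';

The third point is handled at the tablespace level rather than in the session, for example by loading into a tablespace whose replica_placement specifies a single replica and moving to an RF3 tablespace afterwards.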

Fast ingest for IoT with YSQL and YCQL

Here is an independent benchmark for IoT ingest showing that YugabyteDB has a similar throughput with the PostgreSQL API and the Cassandra-like API:

Database performance comparison for IoT use cases

MaibornWolff does a lot of IoT projects. One important part of IoT projects is storage and analysis of device data/events. To get some reliable numbers to help us choose well suited databases for specific use cases, we have started to compare some horizontally scalable databases in regards to how well they handle typical IoT workloads and tasks. We are doing this test for us but maybe it's of interest to other people as well. This repository contains the results and the tooling for this comparison.

Currently we have two types of tests: Rapid and concurrent inserts to simulate data ingestion of device events (to keep it simple we simulate temperature events) as well as some queries that are typical in IoT projects. More query tests are planned but not yet implemented. For the inserts we are doing batch inserts as we assume that…


For this workload, YugabyteDB is the fastest distributed SQL database. PostgreSQL, TimescaleDB, and InfluxDB are faster, but they were deployed as a single pod, not protected from failures, whereas YugabyteDB was tested with Replication Factor 3.

Advanced Indexing Strategies

When considering performance, it's crucial to ensure that indexes are designed correctly. YugabyteDB stores data in Log-Structured Merge (LSM) trees rather than traditional heap tables and B-Trees.

  • After sharding on the key, the rows are stored in the primary key LSM Tree.
  • Secondary index entries are stored in their own LSM Tree, sharded on the indexed columns. Each entry carries an internal reference to the primary key, used by index scans that need to read columns from the table, and can also store additional included columns to avoid reading the table at all.

LSM Trees have lower write amplification than B-trees, enabling the creation of more indexes. Since YugabyteDB is PostgreSQL compatible, it allows for fine-tuning the indexing strategy with covering and partial indexes, reducing cross-shard network calls.
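
The index example below runs against an emp table similar to the classic SCOTT demo schema; a hypothetical definition could look like this, with the primary key defining the hash sharding:

create table emp (
  empno    int,
  ename    text,
  job      text,
  deptno   int,
  hiredate date,
  sal      numeric,
  primary key (empno hash)
);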

Below is an example of an index that covers the WHERE clause (with a partial index and a key prefix for the equality predicate), the ORDER BY clause (with the key ordering), and the SELECT clause (with included columns). This ensures the query execution involves a single seek operation into the LSM Tree, fetching the next 100 rows in one sequential read.

yugabyte=# create index on emp ( deptno, hiredate desc)
           include (empno) 
           where job !='MANAGER'
;
CREATE INDEX

yugabyte=# explain (analyze, dist, debug, verbose, costs off, summary off)
select empno, hiredate from emp
 where deptno=30 and job !='MANAGER'
 order by hiredate desc limit 100
;
                                                  QUERY PLAN
--------------------------------------------------------------------------------------------------------------
 Limit (actual time=0.739..0.789 rows=100 loops=1)
   Output: empno, hiredate
   ->  Index Only Scan using emp_deptno_hiredate_empno_idx on emp (actual time=0.737..0.771 rows=100 loops=1)
         Output: empno, hiredate
         Index Cond: (deptno = 30)
         Heap Fetches: 0
         Storage Index Read Requests: 1
         Storage Index Read Execution Time: 0.631 ms
         Storage Index Rows Scanned: 100
         Metric rocksdb_number_db_seek: 1.000
         Metric rocksdb_number_db_next: 101.000
         Metric rocksdb_number_db_seek_found: 1.000
         Metric rocksdb_number_db_next_found: 101.000
         Metric rocksdb_iter_bytes_read: 5702.000
         Metric docdb_keys_found: 101.000
         Metric ql_read_latency: sum: 79.000, count: 1.000
(14 rows)


Additional options to explain analyze expose more metrics: dist shows the distributed read requests, and debug shows the LSM Tree navigation. Looking at such an execution plan is the recommended way to understand the time complexity of an SQL statement.
