1. Memory Parameters
1.1 Memory Size Parameter for INSERT SELECT
_gbase_insert_malloc_size_limit
This parameter controls the memory allocation size in the INSERT SELECT scenario. The default value is 10240, which is optimal for most workloads. For scenarios involving long VARCHAR fields, such as multiple VARCHAR(2000) fields, frequent per-row memory allocation can cause high system CPU usage and degrade performance. It is recommended to set this value to 5 times the maximum string length. For example, if the longest field is VARCHAR(2000), set this parameter to 10000.
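The sizing rule above can be expressed as a single config entry (a sketch; the exact config file and section depend on the deployment):

```ini
# Hypothetical example: longest field is VARCHAR(2000), so 5 * 2000 = 10000
_gbase_insert_malloc_size_limit=10000
```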
1.2 Memory Cache DC Count Configuration
_gbase_dc_window_size
This parameter configures the number of Data Cache (DC) units cached in memory. The default value in version 953 is 256. In JOIN operations, if _gbase_dc_window_size is set too small, exceeding the configured DC count can lead to high system CPU usage. Adjust as needed.
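A hedged example of enlarging the DC window for JOIN-heavy workloads (the value 1024 is illustrative, not a vendor recommendation):

```ini
# Raise the number of DCs cached in memory above the version-953 default of 256
_gbase_dc_window_size=1024
```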
1.3 Data Size for Node Communication in a Cluster
These parameters generally do not require adjustment. However, during expansion, if severe memory usage and performance issues are encountered, consider adjusting _gbase_rep_receive_buffer_size and _gbase_rep_pending_memory_size. Use the SHOW ENGINE EXPRESS STATUS SQL command to check the usage of _gbase_rep_receive_buffer_size.
_gbase_rep_receive_buffer_size
Controls the amount of data a node receives in one transmission, in MB. The default is 20G; the minimum setting is 5G, and there is no upper limit.
_gbase_rep_pending_memory_size
Controls the data size sent by a node in one transmission. This parameter represents the upper limit of the receiver's buffer. Setting this value higher increases the receiver's load, and setting it lower increases the sender's load. The current recommended value is 10% of the physical memory.
_gbase_rep_receive_buffer_threshold
Sets the upper limit for data reception and transmission.
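A sketch for a node with 256GB of physical memory (values illustrative; _gbase_rep_receive_buffer_size is in MB per the text, and the same unit is assumed for _gbase_rep_pending_memory_size — confirm units against the deployed version):

```ini
_gbase_rep_receive_buffer_size=20480   # 20G default; minimum 5G, no upper limit
_gbase_rep_pending_memory_size=25600   # assumed MB; ~10% of 256GB physical memory
```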
1.4 Releasing DC Units in Unlock State
_gbase_cache_drop_unlock_cell_count
This parameter sets the number of DC units in the unlock state that are released in a single pass. The default value is 1000. When the DC heap is full, this parameter controls how many DC units are cleared in each release cycle; increasing it allows more DC units to be freed at once.
_gbase_cache_drop_delay_time
This parameter indicates the time interval for executing DC release operations. The default value is 0.
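A sketch combining the two release parameters (the count of 2000 is illustrative; the delay keeps its default):

```ini
# Free up to 2000 unlocked DC units per release cycle (default 1000)
_gbase_cache_drop_unlock_cell_count=2000
# Interval between DC release operations; 0 is the default
_gbase_cache_drop_delay_time=0
```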
1.5 Dropping Hot Data DC Units
_gbase_cache_drop_hot_data
This switch enables a new DC cache eviction strategy that permits releasing DC data in the unlock state regardless of whether the data is hot (frequently used). The default value is 0; it is recommended to set it to 1 to allow clearing hot data in the unlock state.
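Following the recommendation above, enabling the switch is a one-line config change:

```ini
# Allow eviction of unlocked DC data even if it is hot (default 0)
_gbase_cache_drop_hot_data=1
```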
1.6 Table Cache
table_definition_cache
Caches table definitions, i.e., the content of .frm files. The default value is 512, and it is recommended to increase it to 5120.
table_open_cache
Sets the number of open table handles that are cached. The default value is 512, and it is recommended to increase it to 1280. Increasing these parameters prevents errors like "Prepared statement needs to be re-prepared" during stored procedure execution, especially in environments with many tables or partitioned tables.
_gbase_express_table_limit
Limits the number of tables cached in the express engine. When the number of cached tables exceeds this value, table metadata memory is reclaimed.
_gbase_express_table_metadata_limit
Limits the byte size of table metadata cached in the express engine. When the cached size exceeds the set value, reclamation is triggered. If set to 0, the value defaults to gbase_temp_heap_size/2.
_gbase_express_table_clear_rate
Indicates the proportion of table metadata cleared during each cache cleanup.
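The cache-size recommendations above map to two config entries, in the same parameter=value form used elsewhere in this document:

```ini
table_definition_cache=5120   # default 512
table_open_cache=1280         # default 512
```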
1.7 Memory Heaps
gbase_heap_data
Primarily designed for caching data and should be allocated the most memory.
gbase_heap_large
Used for operator calculations.
gbase_heap_temp
Used for allocating metadata and small temporary memory blocks.
gbase_memory_pct_target
Sets the available memory ratio, defaulting to 0.8. When the combined memory allocation of the three heaps (heap_data, heap_large, heap_temp) reaches 80% of total memory, further memory requests will fail. The default ratio of the three heaps is 6:3:1.
_gbase_memory_use_swap
Determines whether to consider swap size when calculating memory usage. Available in versions 952.43 and above, and 953.26 and above.
_gbase_memory_turn_to_heap
Indicates whether to allocate previously system-allocated memory from the heap. This feature aims to address memory limit issues but increases the risk of heap memory shortages. Available in versions 952.43 and above, and 953.26 and above.
Summary of Parameter Limits
- Lower limits:
  gbase_heap_data >= 512MB
  gbase_heap_large >= 256MB
  gbase_heap_temp >= 256MB
- Upper limit:
  (gbase_heap_data + gbase_heap_large + gbase_heap_temp) <= total memory * gbase_memory_pct_target
  For versions before 952.43 and 953.26, the base is physical memory + swap. For versions 952.43 and above, and 953.26 and above, the base is physical memory (excluding swap), with an option to include swap via _gbase_memory_use_swap.
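As a worked example of the limits above, assume a hypothetical server with 128GB of physical memory: the budget is 0.8 × 128GB ≈ 102GB, split roughly 6:3:1. Values below are in MB on the assumption that the heaps are specified in MB; confirm the unit syntax for the deployed version:

```ini
gbase_memory_pct_target=0.8
# 6:3:1 split within the ~102GB budget (total 100G <= 102.4G)
gbase_heap_data=61440    # ~60G, caches data; gets the most memory
gbase_heap_large=30720   # ~30G, operator calculations
gbase_heap_temp=10240    # ~10G, metadata and small temporary blocks
```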
1.8 Operator Buffers
Operator buffers are session-level and are allocated from the gbase_heap_large heap. Because they are per-session, if gbase_buffer_result=1G and 30 tasks are running, the total buffer usage for these tasks is 30G. If the buffers are set too large in high-concurrency environments, memory shortages may occur.
Operator Buffer Parameters:
- gbase_buffer_distgrby: stores intermediate results of DISTINCT operations.
- gbase_buffer_hgrby: stores intermediate results of HASH GROUP BY operations.
- gbase_buffer_hj: stores intermediate results of HASH JOIN operations.
- gbase_buffer_insert: stores intermediate results of INSERT VALUES.
- gbase_buffer_result: stores materialized intermediate results.
- gbase_buffer_rowset: stores join row numbers.
- gbase_buffer_sj: stores intermediate results of SORT MERGE JOIN operations.
- gbase_buffer_sort: stores intermediate results of sort operations.
Operator Buffer Settings:
- In non-high-concurrency scenarios, set the buffers based on system memory size:
  - gbase_buffer_hgrby and gbase_buffer_hj: maximum 4G.
  - gbase_buffer_result: maximum 2G.
  - gbase_buffer_rowset: maximum 1G.
- In high-concurrency scenarios, avoid setting buffers too large; generally rely on the system's automatic evaluation.
- For POCs, if a particular operator is slow, increase the corresponding buffer size, provided this does not affect other SQL operations and is permitted during testing.
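For a non-high-concurrency analytic workload, the caps above could be written as follows (illustrative; remember these are per-session values, so multiply by the expected concurrency when budgeting memory against gbase_heap_large):

```ini
gbase_buffer_hgrby=4G
gbase_buffer_hj=4G
gbase_buffer_result=2G
gbase_buffer_rowset=1G
```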
2. Disk Writing
2.1 Disk Writing Control Parameters
_gbase_file_sync_level
Controls the file sync level. The default value is 1 (ON), ensuring data is written to disk with each operation to prevent data loss or corruption. Setting it to 0 defers disk writes according to _gbase_dc_sync_size, improving performance; this is not recommended for production.
_gbase_dc_sync_size
Controls the amount of data written between fsync calls. The default is large; 10M (10485760 bytes) is recommended. This parameter works with _gbase_file_sync_level and is generally used for POCs, not production.
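For a POC only (not production), the deferred-sync combination described above would be:

```ini
_gbase_file_sync_level=0        # defer disk writes (default 1)
_gbase_dc_sync_size=10485760    # fsync every 10M of written data
```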
3. Kafka Consumer Configuration Parameters
3.1 Functional Parameters
Kafka consumer functionality is disabled by default. To enable real-time OLTP data synchronization via Kafka consumer, set the following parameters:
gcluster_kafka_consumer_enable=1
gcluster_lock_level=10
_gbase_transaction_disable=1
gbase_tx_log_mode=NO_USE,ONLY_SPECIFY_USE,NO_STANDARD_TRANS
Ensure these parameters are also configured on gnode. Without these settings, executing start kafka consumer consumer_name will result in errors.
3.2 Performance Parameters
gcluster_kafka_consume_batch
This parameter defines the number of messages the consumer reads in one batch. If individual messages are large, set this parameter to a smaller value. In some versions, the default value is 10000, which can cause high memory usage in gclusterd. It is recommended to set this parameter between 100 and 1000.
gcluster_kafka_batch_commit_dml_count
This parameter controls the batch size of DML operations for Kafka consumer commits. The default is 100000, but it is recommended to set it between 10000 and 20000. While increasing this value can improve real-time synchronization performance, it also increases memory usage in gclusterd, especially when gclusterd and gbased share the same server, potentially affecting cluster stability. This parameter acts as a target; the program may not strictly follow it. For instance, if a single transaction contains a large number of DML operations, the consumer prioritizes transaction integrity over this parameter. Additionally, if messages are read and parsed from Kafka more slowly than data is committed, the consumer will commit based on another parameter, gcluster_kafka_user_allowed_max_latency, instead of waiting for the batch size to reach gcluster_kafka_batch_commit_dml_count.
For topics involving many tables (hundreds or more), this parameter should be set lower to ensure that each batch of data targets fewer tables. This needs to be adjusted according to the actual user scenario, synchronization speed, and resource usage.
gcluster_kafka_user_allowed_max_latency
This parameter configures the data submission interval or the maximum allowed latency for buffering messages. The default value is 10000 milliseconds (10 seconds). For scenarios requiring low data synchronization latency, this value can be reduced, but it is generally not recommended to set it below 1000 milliseconds (1 second).
There are two main adjustment scenarios:
(1) Low Latency Requirement
For users with strict data visibility requirements, where data changes at the source must be visible in GBase 8a within 10 seconds: start with the recommended parameter settings, enable consumer latency statistics, and observe the latency in the Kafka consumer stage. If user requirements are not met, reduce gcluster_kafka_user_allowed_max_latency to 1000 milliseconds or lower. If setting it below 100 milliseconds still does not meet requirements, the current GBase 8a performance cannot satisfy the need.
(2) High Throughput Requirement
For scenarios where a large batch of data is produced to Kafka and the time needed for the Kafka consumer to synchronize it is being evaluated, increase gcluster_kafka_user_allowed_max_latency to allow larger batches and leverage batching. Generally, if setting this to 20000 milliseconds or more still does not meet user requirements, the current GBase 8a performance cannot satisfy the need.
gcluster_kafka_local_queue_size
This parameter sets the length of the DML operation queue cached by the consumer. It should be set to twice the value of gcluster_kafka_batch_commit_dml_count. Increasing it does not significantly impact memory usage, so a value between 40000 and 80000 is recommended.
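Illustrating the 2x relationship, with values drawn from the recommended ranges above:

```ini
gcluster_kafka_batch_commit_dml_count=20000
gcluster_kafka_local_queue_size=40000   # 2 x gcluster_kafka_batch_commit_dml_count
```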
gcluster_kafka_latency_time_statistics
This parameter enables consumer latency statistics for observing performance. Setting gcluster_kafka_latency_time_statistics=1 enables the feature, which records latency data in the consumer's checkpoint table. Two timestamps are recorded for a row of data: when it is read by the consumer and when it is committed by the consumer; the difference between them is the total latency of the Kafka consumer stage. This feature has a slight performance impact. Note that the checkpoint table always lacks the latency data for the most recent commit, as it is only updated during the next commit.
Recommended Kafka Consumer Parameters Configuration
For most scenarios, the following configuration should suffice:
gcluster_kafka_consume_batch=100
gcluster_kafka_batch_commit_dml_count=1000000
gcluster_kafka_local_queue_size=2010000
gcluster_kafka_user_allowed_max_latency=5000