
Commit dccebb2

added max_block_size tuning and explanation
1 parent 5239ef4 commit dccebb2

File tree

1 file changed (+2, −1 lines)


content/en/altinity-kb-setup-and-maintenance/configure_clickhouse_for_low_mem_envs.md

Lines changed: 2 additions & 1 deletion
```diff
@@ -66,6 +66,7 @@ TLDR;
     <profiles>
       <default>
         <max_threads>2</max_threads>
+        <max_block_size>8192</max_block_size>
         <queue_max_wait_ms>1000</queue_max_wait_ms>
         <max_execution_time>600</max_execution_time>
         <input_format_parallel_parsing>0</input_format_parallel_parsing>
```
```diff
@@ -87,4 +88,4 @@ Some interesting settings to explain:
 - `merge_max_block_size` reduces the number of rows per block when merging. The default is 8192, and lowering it reduces the memory usage of merges.
 - The `number_of_free_entries_in_pool` settings are useful for tuning how many concurrent merges are allowed in the queue. When fewer than the specified number of free entries remain in the pool, ClickHouse starts to lower the maximum size of merges to process (or to queue) and stops executing part mutations, leaving free threads for regular merges. This lets small merges proceed instead of filling the pool with long-running merges or multiple mutations. Check the ClickHouse documentation for more insights.
 - Reduce the background pools and be conservative. On a Raspberry Pi 4 with 4 cores and 4 GB of RAM, the background pool should be no larger than the number of cores, and even smaller if possible.
-- Tune some profile settings to enable disk spilling (`max_bytes_before_external_group_by` and `max_bytes_before_external_sort`) and reduce the number of threads per query, plus enable queuing of queries (`queue_max_wait_ms`) if the `max_concurrent_queries` limit is exceeded.
+- Tune some profile settings to enable disk spilling (`max_bytes_before_external_group_by` and `max_bytes_before_external_sort`) and reduce the number of threads per query, plus enable queuing of queries (`queue_max_wait_ms`) if the `max_concurrent_queries` limit is exceeded. Also, `max_block_size` is not usually touched, but in this case we can lower it to reduce RAM usage.
```
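Putting the settings discussed above together, a low-memory server configuration fragment might look like the following sketch. The values are illustrative assumptions, not tuned recommendations from this commit; adjust them for your hardware, and note that `number_of_free_entries_in_pool_to_lower_max_size_of_merge` is one concrete member of the `number_of_free_entries_in_pool` family mentioned above.

```xml
<!-- Sketch of a low-memory config.xml fragment; all values are illustrative. -->
<clickhouse>
    <!-- Cap concurrency at the server level; queued queries wait via queue_max_wait_ms. -->
    <max_concurrent_queries>4</max_concurrent_queries>

    <merge_tree>
        <!-- Fewer rows per block during merges lowers merge memory usage. -->
        <merge_max_block_size>1024</merge_max_block_size>
        <!-- When the pool runs low on free entries, cap merge sizes
             so small merges still get threads. -->
        <number_of_free_entries_in_pool_to_lower_max_size_of_merge>2</number_of_free_entries_in_pool_to_lower_max_size_of_merge>
    </merge_tree>

    <profiles>
        <default>
            <max_threads>2</max_threads>
            <max_block_size>8192</max_block_size>
            <!-- Queue queries instead of rejecting them when the
                 max_concurrent_queries limit is hit. -->
            <queue_max_wait_ms>1000</queue_max_wait_ms>
            <!-- Spill GROUP BY / ORDER BY state to disk past ~256 MiB. -->
            <max_bytes_before_external_group_by>268435456</max_bytes_before_external_group_by>
            <max_bytes_before_external_sort>268435456</max_bytes_before_external_sort>
        </default>
    </profiles>
</clickhouse>
```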
