-Several factors will determine the performance and resources used in the above scenario. We recommend readers understand insert mechanics documented in detail [here](/integrations/s3/performance#using-threads-for-reads) prior to attempting to tune. In summary:
+Several factors will determine the performance and resources used in the above scenario. Before attempting to tune, we recommend readers understand the insert mechanics documented in detail in the [Using Threads for Reads](/integrations/s3/performance#using-threads-for-reads) section of the [Optimizing for S3 Insert and Read Performance guide](/integrations/s3/performance). In summary:
- **Read Parallelism** - The number of threads used to read. Controlled through [`max_threads`](/operations/settings/settings#max_threads). In ClickHouse Cloud this is determined by the instance size, defaulting to the number of vCPUs. Increasing this value may improve read performance at the expense of greater memory usage.
- **Insert Parallelism** - The number of threads used to insert. Controlled through [`max_insert_threads`](/operations/settings/settings#max_insert_threads). In ClickHouse Cloud this is determined by the instance size (between 2 and 4); in OSS it is set to 1. Increasing this value may improve performance at the expense of greater memory usage.
- **Insert Block Size** - Data is processed in a loop: it is pulled, parsed, and formed into in-memory insert blocks based on the [partitioning key](/engines/table-engines/mergetree-family/custom-partitioning-key). These blocks are sorted, optimized, compressed, and written to storage as new [data parts](/parts). The size of the insert block, controlled by the settings [`min_insert_block_size_rows`](/operations/settings/settings#min_insert_block_size_rows) and [`min_insert_block_size_bytes`](/operations/settings/settings#min_insert_block_size_bytes) (uncompressed), impacts memory usage and disk I/O. Larger blocks use more memory but create fewer parts, reducing I/O and background merges. These settings represent minimum thresholds (whichever is reached first triggers a flush).
- **Materialized view block size** - In addition to the above mechanics for the main insert, blocks are also squashed prior to insertion into materialized views for more efficient processing. The size of these blocks is determined by the settings [`min_insert_block_size_bytes_for_materialized_views`](/operations/settings/settings#min_insert_block_size_bytes_for_materialized_views) and [`min_insert_block_size_rows_for_materialized_views`](/operations/settings/settings#min_insert_block_size_rows_for_materialized_views). Larger blocks allow more efficient processing at the expense of greater memory usage. By default, these settings revert to the values of the source table settings [`min_insert_block_size_rows`](/operations/settings/settings#min_insert_block_size_rows) and [`min_insert_block_size_bytes`](/operations/settings/settings#min_insert_block_size_bytes), respectively.
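The interplay of these settings can be sketched with a hypothetical `INSERT ... SELECT` from S3. The table name, bucket URL, and values below are illustrative placeholders, not recommendations from this guide:

```sql
-- Illustrative sketch only: raise parallelism and block sizes,
-- trading higher memory usage for throughput and fewer parts.
INSERT INTO destination_table
SELECT *
FROM s3('https://my-bucket.s3.amazonaws.com/data/*.parquet')
SETTINGS
    max_threads = 8,                          -- read parallelism
    max_insert_threads = 4,                   -- insert parallelism
    min_insert_block_size_rows = 10000000,    -- flush after this many rows...
    min_insert_block_size_bytes = 1000000000; -- ...or this many uncompressed bytes
```

Whichever of the two block-size thresholds is reached first triggers the flush, so they should be tuned together.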
-For improving performance, users can follow the guidelines outlined [here](/integrations/s3/performance#tuning-threads-and-block-size-for-inserts). In most cases it should not be necessary to also modify `min_insert_block_size_bytes_for_materialized_views` and `min_insert_block_size_rows_for_materialized_views` to improve performance. If these are modified, use the same best practices as discussed for `min_insert_block_size_rows` and `min_insert_block_size_bytes`.
+For improving performance, users can follow the guidelines outlined in the [Tuning Threads and Block Size for Inserts](/integrations/s3/performance#tuning-threads-and-block-size-for-inserts) section of the [Optimizing for S3 Insert and Read Performance guide](/integrations/s3/performance). In most cases it should not be necessary to also modify `min_insert_block_size_bytes_for_materialized_views` and `min_insert_block_size_rows_for_materialized_views` to improve performance. If these are modified, use the same best practices as discussed for `min_insert_block_size_rows` and `min_insert_block_size_bytes`.
To minimize memory, users may wish to experiment with these settings, though doing so will invariably lower performance. Using the earlier query, we show examples below.
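The general direction of such an experiment can be sketched as follows (the table name, bucket URL, and specific values are hypothetical, not taken from this guide):

```sql
-- Illustrative sketch only: reduce peak memory by dropping parallelism
-- and lowering block sizes, accepting slower inserts and more parts.
INSERT INTO destination_table
SELECT *
FROM s3('https://my-bucket.s3.amazonaws.com/data/*.parquet')
SETTINGS
    max_threads = 1,
    max_insert_threads = 1,
    min_insert_block_size_rows = 524288,     -- lower than default, caps rows per block
    min_insert_block_size_bytes = 134217728; -- lower than default, caps bytes per block
```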
@@ -530,13 +530,13 @@ The output for the ClickHouse client is now showing that instead of doing a full
If <a href="https://clickhouse.com/docs/operations/server-configuration-parameters/settings/#server_configuration_parameters-logger" target="_blank">trace logging</a> is enabled then the ClickHouse server log file shows that ClickHouse was running a <a href="https://github.com/ClickHouse/ClickHouse/blob/22.3/src/Storages/MergeTree/MergeTreeDataSelectExecutor.cpp#L1452" target="_blank">binary search</a> over the 1083 UserID index marks, in order to identify granules that can possibly contain rows with a UserID column value of `749927693`. This requires 19 steps with an average time complexity of `O(log2 n)`:
```response
...Executor): Key condition: (column 0 in [749927693, 749927693])
-// highlight-next-line
+# highlight-next-line
...Executor): Running binary search on index range for part all_1_9_2 (1083 marks)
...Executor): Found (LEFT) boundary mark: 176
...Executor): Found (RIGHT) boundary mark: 177
...Executor): Found continuous range in 19 steps
...Executor): Selected 1/1 parts by partition key, 1 parts by primary key,
-// highlight-next-line
+# highlight-next-line
1/1083 marks by primary key, 1 marks to read from 1 ranges
...Reading ...approx. 8192 rows starting from 1441792
@@ -954,7 +954,7 @@ This is the resulting primary key:
That can now be used to significantly speed up the execution of our example query filtering on the URL column in order to calculate the top 10 users that most frequently clicked on the URL "http://public_search":
```sql
SELECT UserID, count(UserID) AS Count
-// highlight-next-line
+-- highlight-next-line
FROM hits_URL_UserID
WHERE URL = 'http://public_search'
GROUP BY UserID
@@ -980,7 +980,7 @@ The response is:
└────────────┴───────┘
10 rows in set. Elapsed: 0.017 sec.
-// highlight-next-line
+# highlight-next-line
Processed 319.49 thousand rows,
11.38 MB (18.41 million rows/s., 655.75 MB/s.)
```
@@ -994,13 +994,13 @@ The corresponding trace log in the ClickHouse server log file confirms that:
```response
...Executor): Key condition: (column 0 in ['http://public_search',
'http://public_search'])
-// highlight-next-line
+# highlight-next-line
...Executor): Running binary search on index range for part all_1_9_2 (1083 marks)
...Executor): Found (LEFT) boundary mark: 644
...Executor): Found (RIGHT) boundary mark: 683
...Executor): Found continuous range in 19 steps
...Executor): Selected 1/1 parts by partition key, 1 parts by primary key,
-// highlight-next-line
+# highlight-next-line
39/1083 marks by primary key, 39 marks to read from 1 ranges
...Executor): Reading approx. 319488 rows with 2 streams
```
@@ -1045,19 +1045,19 @@ The response is:
└────────────────────────────────┴───────┘
10 rows in set. Elapsed: 0.024 sec.
-// highlight-next-line
+# highlight-next-line
Processed 8.02 million rows,
73.04 MB (340.26 million rows/s., 3.10 GB/s.)
```
Server Log:
```response
...Executor): Key condition: (column 1 in [749927693, 749927693])
-// highlight-next-line
+# highlight-next-line
...Executor): Used generic exclusion search over index for part all_1_9_2
with 1453 steps
...Executor): Selected 1/1 parts by partition key, 1 parts by primary key,
-// highlight-next-line
+# highlight-next-line
980/1083 marks by primary key, 980 marks to read from 23 ranges
...Executor): Reading approx. 8028160 rows with 10 streams
```
@@ -1108,7 +1108,7 @@ ClickHouse is storing the [column data files](#data-is-stored-on-disk-ordered-by
The implicitly created table (and its primary index) backing the materialized view can now be used to significantly speed up the execution of our example query filtering on the URL column:
```sql
SELECT UserID, count(UserID) AS Count
-// highlight-next-line
+-- highlight-next-line
FROM mv_hits_URL_UserID
WHERE URL = 'http://public_search'
GROUP BY UserID
@@ -1133,7 +1133,7 @@ The response is:
└────────────┴───────┘
10 rows in set. Elapsed: 0.026 sec.
-// highlight-next-line
+# highlight-next-line
Processed 335.87 thousand rows,
13.54 MB (12.91 million rows/s., 520.38 MB/s.)
```
@@ -1145,11 +1145,11 @@ The corresponding trace log in the ClickHouse server log file confirms that Clic
```response
...Executor): Key condition: (column 0 in ['http://public_search',
'http://public_search'])
-// highlight-next-line
+# highlight-next-line
...Executor): Running binary search on index range ...
...
...Executor): Selected 4/4 parts by partition key, 4 parts by primary key,
-// highlight-next-line
+# highlight-next-line
41/1083 marks by primary key, 41 marks to read from 4 ranges
...Executor): Reading approx. 335872 rows with 4 streams
```
@@ -1193,7 +1193,7 @@ ClickHouse is storing the [column data files](#data-is-stored-on-disk-ordered-by
The hidden table (and its primary index) created by the projection can now be (implicitly) used to significantly speed up the execution of our example query filtering on the URL column. Note that the query is syntactically targeting the source table of the projection.
```sql
SELECT UserID, count(UserID) AS Count
-// highlight-next-line
+-- highlight-next-line
FROM hits_UserID_URL
WHERE URL = 'http://public_search'
GROUP BY UserID
@@ -1218,7 +1218,7 @@ The response is:
└────────────┴───────┘
10 rows in set. Elapsed: 0.029 sec.
-// highlight-next-line
+# highlight-next-line
Processed 319.49 thousand rows,
11.38 MB (11.05 million rows/s., 393.58 MB/s.)
```
@@ -1231,14 +1231,14 @@ The corresponding trace log in the ClickHouse server log file confirms that Clic
```response
...Executor): Key condition: (column 0 in ['http://public_search',
'http://public_search'])
-// highlight-next-line
+# highlight-next-line
...Executor): Running binary search on index range for part prj_url_userid (1083 marks)
...Executor): ...
-// highlight-next-line
+# highlight-next-line
...Executor): Choose complete Normal projection prj_url_userid
docs/guides/sre/user-management/ssl-user-auth.md (5 additions, 2 deletions)
@@ -14,10 +14,13 @@ import SelfManaged from '@site/docs/_snippets/_self_managed_only_no_roadmap.md';
This guide provides simple and minimal settings to configure authentication with SSL user certificates. The tutorial builds on the [Configuring SSL-TLS user guide](../configuring-ssl.md).
:::note
-SSL user authentication is supported when using the `https` or native interfaces only.
-It is not currently used in gRPC or PostgreSQL/MySQL emulation ports.
+SSL user authentication is supported when using the `https`, `native`, `mysql`, and `postgresql` interfaces.
ClickHouse nodes need `<verificationMode>strict</verificationMode>` set for secure authentication (although `relaxed` will work for testing purposes).
+
+If you use AWS NLB with the MySQL interface, you must ask AWS support to enable the following undocumented option:
+
+> I would like to be able to configure our NLB proxy protocol v2 as below `proxy_protocol_v2.client_to_server.header_placement,Value=on_first_ack`.
:::
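For context, certificate authentication pairs the server-side `verificationMode` setting above with a user whose identity is the certificate's Common Name. A minimal sketch, where `cert_user` is a placeholder name:

```sql
-- Sketch: a user authenticated by the CN of their client certificate
-- instead of a password; 'cert_user' is an illustrative placeholder.
CREATE USER cert_user IDENTIFIED WITH ssl_certificate CN 'cert_user';
```

The CN supplied here must match the Common Name in the client certificate created in the steps that follow.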
## 1. Create SSL user certificates {#1-create-ssl-user-certificates}