`docs/integrations/data-ingestion/clickpipes/kafka/04_best_practices.md`
Benchmark details:

- We used production ClickHouse Cloud services with enough resources to ensure that throughput was not bottlenecked by the insert processing on the ClickHouse side.
- The ClickHouse Cloud service, the Kafka cluster (Confluent Cloud), and the ClickPipe were all running in the same region (`us-east-2`).
- The ClickPipe was configured with a single replica.
- The sample data included nested data with a mix of uuid, string, and integer datatypes.
- Other datatypes, such as floats, may be less performant.
- There was no appreciable difference in performance between compressed and uncompressed data.

| Replica Size | Message Size | Data Format | Throughput |
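The sample payloads described above can be approximated with a short script that generates nested JSON messages mixing uuid, string, and integer fields, then estimates serialization throughput from total bytes over elapsed time. This is a minimal illustrative sketch, not the benchmark harness used for the numbers in the table; the field names and record shape are assumptions.

```python
import json
import time
import uuid


def make_message() -> bytes:
    # Illustrative nested payload mixing uuid, string, and integer fields,
    # loosely modeled on the sample data described in the benchmark notes.
    # The schema here is hypothetical.
    record = {
        "id": str(uuid.uuid4()),
        "user": {"name": "example-user", "score": 42},
        "tags": ["alpha", "beta"],
        "ts": int(time.time() * 1000),
    }
    return json.dumps(record).encode("utf-8")


def estimate_throughput_mb_s(message_count: int) -> float:
    # Throughput (MB/s) = total bytes produced / elapsed seconds / 2**20.
    # Measures only message generation and serialization, not broker I/O.
    start = time.perf_counter()
    total_bytes = sum(len(make_message()) for _ in range(message_count))
    elapsed = time.perf_counter() - start
    return total_bytes / elapsed / 2**20


if __name__ == "__main__":
    rate = estimate_throughput_mb_s(100_000)
    print(f"~{rate:.1f} MB/s (serialization only)")
```

In a real run, each `make_message()` result would be handed to a Kafka producer; measuring serialization separately helps confirm that the client, rather than the pipeline, is not the bottleneck.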