Commit e2bf7fb

committed: tweaks
1 parent 3562390 commit e2bf7fb

File tree

1 file changed: 3 additions, 3 deletions


docs/integrations/data-ingestion/clickpipes/kafka/04_best_practices.md

Lines changed: 3 additions & 3 deletions
```diff
@@ -137,11 +137,11 @@ Below are some informal benchmarks for ClickPipes for Kafka that can be used get
 
 Benchmark details:
 - We used production ClickHouse Cloud services with enough resources to ensure that throughput was not bottlenecked by the insert processing on the ClickHouse side.
-- The ClickHouse Cloud service, the Kafka cluster (Confluent Cloud), and the ClickPipe were all running in the same region (us-east-2).
-- The ClickPipe was configured with a single consumer.
+- The ClickHouse Cloud service, the Kafka cluster (Confluent Cloud), and the ClickPipe were all running in the same region (`us-east-2`).
+- The ClickPipe was configured with a single replica.
 - The sample data included nested data with a mix of uuid, string, and integer datatypes.
 - Other datatypes, such as floats may be less performant.
-- There was no appreciable differene in performance between compressed and uncompressed data.
+- There was no appreciable difference in performance between compressed and uncompressed data.
 
 
 | Replica Size | Message Size | Data Format | Throughput |
```
