
Commit 7f9a5cf

Update Kafka Table Engine documentation
1 parent: 5b93a7b

2 files changed: +47 −29 lines

docs/integrations/data-ingestion/clickpipes/kafka.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -112,7 +112,7 @@ without an embedded schema id, then the specific schema ID or subject must be sp
 | Azure Event Hubs |<Azureeventhubssvg class="image" alt="Azure Event Hubs logo" style={{width: '3rem'}}/>|Streaming| Stable | Configure ClickPipes and start ingesting streaming data from Azure Event Hubs into ClickHouse Cloud. |
 | WarpStream |<Warpstreamsvg class="image" alt="WarpStream logo" style={{width: '3rem'}}/>|Streaming| Stable | Configure ClickPipes and start ingesting streaming data from WarpStream into ClickHouse Cloud. |

-More connectors are will get added to ClickPipes, you can find out more by [contacting us](https://clickhouse.com/company/contact?loc=clickpipes).
+More connectors will be added to ClickPipes in the future. You can find out more by [contacting us](https://clickhouse.com/company/contact?loc=clickpipes).

 ## Supported data formats {#supported-data-formats}
```
docs/integrations/data-ingestion/kafka/index.md

Lines changed: 46 additions & 28 deletions
```diff
@@ -6,49 +6,67 @@ description: 'Introduction to Kafka with ClickHouse'
 title: 'Integrating Kafka with ClickHouse'
 ---

+import Kafkasvg from '@site/static/images/integrations/logos/kafka.svg';
+import Confluentsvg from '@site/static/images/integrations/logos/confluent.svg';
+import Msksvg from '@site/static/images/integrations/logos/msk.svg';
+import Azureeventhubssvg from '@site/static/images/integrations/logos/azure_event_hubs.svg';
+import Warpstreamsvg from '@site/static/images/integrations/logos/warpstream.svg';
+import redpanda_logo from '@site/static/images/integrations/logos/logo_redpanda.png';
+import Image from '@theme/IdealImage';
+
 # Integrating Kafka with ClickHouse

-[Apache Kafka](https://kafka.apache.org/) is an open-source distributed event streaming platform used by thousands of companies for high-performance data pipelines, streaming analytics, data integration, and mission-critical applications. In most cases involving Kafka and ClickHouse, users will wish to insert Kafka based data into ClickHouse. Below we outline several options for both use cases, identifying the pros and cons of each approach.
+[Apache Kafka](https://kafka.apache.org/) is an open-source distributed event streaming platform used by thousands of companies for high-performance data pipelines, streaming analytics, data integration, and mission-critical applications. ClickHouse provides several options to **read** data from and **write** data to Apache Kafka and other Kafka API-compatible brokers (e.g., Redpanda, Amazon MSK).

 ## Choosing an option {#choosing-an-option}

-When integrating Kafka with ClickHouse, you will need to make early architectural decisions about the high-level approach used. We outline the most common strategies below:
+Choosing the right option for your use case depends on multiple factors, including your ClickHouse deployment type, the direction of data flow, and networking requirements.

-### ClickPipes for Kafka (ClickHouse Cloud) {#clickpipes-for-kafka-clickhouse-cloud}
-* [**ClickPipes**](../clickpipes/kafka.md) offers the easiest and most intuitive way to ingest data into ClickHouse Cloud. With support for Apache Kafka, Confluent Cloud and Amazon MSK today, and many more data sources coming soon.
+| Option | Deployment | Kafka to ClickHouse | ClickHouse to Kafka | Private Networking |
+|--------|------------|---------------------|---------------------|--------------------|
+| ClickPipes for Kafka | CH Cloud | ✓ | ✗ | ✓ |
+| Kafka Connect Sink | CH Cloud, CH BYOC, CH OSS | ✓ | ✗ | ✓ |
+| Kafka table engine | CH Cloud, CH BYOC, CH OSS | ✓ | ✓ | ✗ |

-### 3rd-Party Cloud-based Kafka Connectivity {#3rd-party-cloud-based-kafka-connectivity}
-* [**Confluent Cloud**](./confluent/index.md) - Confluent platform provides an option to upload and [run ClickHouse Connector Sink on Confluent Cloud](./confluent/custom-connector.md) or use [HTTP Sink Connector for Confluent Platform](./confluent/kafka-connect-http.md) that integrates Apache Kafka with an API via HTTP or HTTPS.
+For a more detailed comparison between these options, see [Choosing an approach](#choosing-an-approach).

-* [**Amazon MSK**](./msk/index.md) - support Amazon MSK Connect framework to forward data from Apache Kafka clusters to external systems such as ClickHouse. You can install ClickHouse Kafka Connect on Amazon MSK.
+### ClickPipes for Kafka {#clickpipes-for-kafka}

-* [**Redpanda Cloud**](https://cloud.redpanda.com/) - Redpanda is a Kafka API-compatible streaming data platform that can be used as an upstream data source for ClickHouse. The hosted cloud platform, Redpanda Cloud, integrates with ClickHouse over Kafka protocol, enabling real-time data ingestion for streaming analytics workloads
+[ClickPipes](../clickpipes.md) is the native integration engine in ClickHouse Cloud and makes ingesting massive volumes of data from a diverse set of sources as simple as clicking a few buttons. It natively supports **private network** connections (i.e., PrivateLink), **independent scaling** of ingestion and cluster resources, and **comprehensive monitoring** of streaming data flowing into ClickHouse from Apache Kafka and other Kafka API-compatible brokers.

-### Self-managed Kafka Connectivity {#self-managed-kafka-connectivity}
-* [**Kafka Connect**](./kafka-clickhouse-connect-sink.md) - Kafka Connect is a free, open-source component of Apache Kafka that works as a centralized data hub for simple data integration between Kafka and other data systems. Connectors provide a simple means of scalable and reliably streaming data to and from Kafka. Source Connectors inserts data to Kafka topics from other systems, whilst Sink Connectors delivers data from Kafka topics into other data stores such as ClickHouse.
-* [**Vector**](./kafka-vector.md) - Vector is a vendor agnostic data pipeline. With the ability to read from Kafka, and send events to ClickHouse, this represents a robust integration option.
-* [**JDBC Connect Sink**](./kafka-connect-jdbc.md) - The Kafka Connect JDBC Sink connector allows you to export data from Kafka topics to any relational database with a JDBC driver
-* **Custom code** - Custom code using respective client libraries for Kafka and ClickHouse may be appropriate cases where custom processing of events is required. This is beyond the scope of this documentation.
-* [**Kafka table engine**](./kafka-table-engine.md) provides a Native ClickHouse integration (not available on ClickHouse Cloud). This table engine **pulls** data from the source system. This requires ClickHouse to have direct access to Kafka.
-* [**Kafka table engine with named collections**](./kafka-table-engine-named-collections.md) - Using named collections provides native ClickHouse integration with Kafka. This approach allows secure connections to multiple Kafka clusters, centralizing configuration management and improving scalability and security.
+| Name | Logo | Type | Status | Documentation |
+|------|------|------|--------|---------------|
+| Apache Kafka |<Kafkasvg class="image" alt="Apache Kafka logo" style={{width: '3rem', 'height': '3rem'}}/>|Streaming| Stable | [ClickPipes for Kafka integration guide](../clickpipes/kafka.md) |
+| Confluent Cloud |<Confluentsvg class="image" alt="Confluent Cloud logo" style={{width: '3rem'}}/>|Streaming| Stable | [ClickPipes for Kafka integration guide](../clickpipes/kafka.md) |
+| Redpanda |<Image img={redpanda_logo} size="logo" alt="Redpanda logo"/>|Streaming| Stable | [ClickPipes for Kafka integration guide](../clickpipes/kafka.md) |
+| AWS MSK |<Msksvg class="image" alt="AWS MSK logo" style={{width: '3rem', 'height': '3rem'}}/>|Streaming| Stable | [ClickPipes for Kafka integration guide](../clickpipes/kafka.md) |
+| Azure Event Hubs |<Azureeventhubssvg class="image" alt="Azure Event Hubs logo" style={{width: '3rem'}}/>|Streaming| Stable | [ClickPipes for Kafka integration guide](../clickpipes/kafka.md) |
+| WarpStream |<Warpstreamsvg class="image" alt="WarpStream logo" style={{width: '3rem'}}/>|Streaming| Stable | [ClickPipes for Kafka integration guide](../clickpipes/kafka.md) |

-### Choosing an approach {#choosing-an-approach}
-It comes down to a few decision points:
+More connectors will be added to ClickPipes in the future. You can find out more by [contacting us](https://clickhouse.com/company/contact?loc=clickpipes).
+
+### Kafka Connect Sink {#kafka-connect-sink}

-* **Connectivity** - The Kafka table engine needs to be able to pull from Kafka if ClickHouse is the destination. This requires bi-directional connectivity. If there is a network separation, e.g. ClickHouse is in the Cloud and Kafka is self-managed, you may be hesitant to remove this for compliance and security reasons. (This approach is not currently supported in ClickHouse Cloud.) The Kafka table engine utilizes resources within ClickHouse itself, utilizing threads for the consumers. Placing this resource pressure on ClickHouse may not be possible due to resource constraints, or your architects may prefer a separation of concerns. In this case, tools such as Kafka Connect, which run as a separate process and can be deployed on different hardware may be preferable. This allows the process responsible for pulling Kafka data to be scaled independently of ClickHouse.
+Kafka Connect is an open-source framework that works as a centralized data hub for simple data integration between Kafka and other data systems. The [ClickHouse Kafka Connect Sink](./kafka-clickhouse-connect-sink.md) connector provides a scalable and reliable way to **read** data from Apache Kafka and other Kafka API-compatible brokers.

-* **Hosting on Cloud** - Cloud vendors may set limitations on Kafka components available on their platform. Follow the guide to explore recommended options for each Cloud vendor.
+### Kafka Table Engine {#kafka-table-engine}

-* **External enrichment** - Whilst messages can be manipulated before insertion into ClickHouse, through the use of functions in the select statement of the materialized view, users may prefer to move complex enrichment external to ClickHouse.
+The [Kafka table engine](./kafka-table-engine.md) can be used to **read** data from and **write** data to Apache Kafka and other Kafka API-compatible brokers. This engine does **not** support private network connections, which means your broker(s) must be configured for public access.

-* **Data flow direction** - Vector only supports the transfer of data from Kafka to ClickHouse.
+### Choosing an approach {#choosing-an-approach}

-## Assumptions {#assumptions}
+| Product | Deployment | Strengths | Weaknesses |
+|---------|------------|-----------|------------|
+| **ClickPipes for Kafka** | CH Cloud | • Native CH Cloud experience for ingesting from Kafka, with built-in monitoring and schema management • Scalable architecture that ensures high throughput and low latency • Supports private networking connections on AWS (via PrivateLink) • Supports SSL/TLS authentication (incl. mTLS) and IAM authorization • Supports programmatic configuration (Terraform, API endpoints) | • Does not support pushing data to Kafka • Does not support AWS PrivateLink connections to Confluent Cloud • Does not support Private Service Connect or Azure Private Link • Not available on GCP or Azure, though it can connect to services in these cloud providers • At-least-once semantics • Protobuf is not supported yet, only Avro and JSON |
+| **Kafka Connect Sink** | CH Cloud, CH BYOC, CH OSS | • Exactly-once semantics • Allows granular control over data transformation, batching, and error handling • Can be deployed in private networks • Allows real-time replication from databases not yet supported in ClickPipes via Debezium | • Does not support pushing data to Kafka • Operationally complex to set up and maintain • Requires Kafka and Kafka Connect expertise |
+| **Kafka Table Engine** | CH Cloud, CH BYOC, CH OSS | • Supports pushing data to Kafka • Supports most common formats (Avro, JSON, Protobuf) • Allows real-time replication from databases not yet supported in ClickPipes via Debezium | • At-least-once semantics • Requires brokers to be exposed to a public network (IP whitelisting possible) • Limited horizontal scaling for consumers; cannot be scaled independently from the CH server • Limited error handling and debugging information • No SSL/TLS authentication in CH Cloud • Requires Kafka expertise |

-The user guides linked above assume the following:
+### Other options {#other-options}

-* You are familiar with the Kafka fundamentals, such as producers, consumers and topics.
-* You have a topic prepared for these examples. We assume all data is stored in Kafka as JSON, although the principles remain the same if using Avro.
-* We utilise the excellent [kcat](https://github.com/edenhill/kcat) (formerly kafkacat) in our examples to publish and consume Kafka data.
-* Whilst we reference some python scripts for loading sample data, feel free to adapt the examples to your dataset.
-* You are broadly familiar with ClickHouse materialized views.
+* [**Confluent Cloud**](./confluent/index.md) - The Confluent platform provides an option to upload and [run the ClickHouse Connector Sink on Confluent Cloud](./confluent/custom-connector.md), or to use the [HTTP Sink Connector for Confluent Platform](./confluent/kafka-connect-http.md), which integrates Apache Kafka with an API via HTTP or HTTPS.
+
+* [**Vector**](./kafka-vector.md) - Vector is a vendor-agnostic data pipeline. With the ability to read from Kafka and send events to ClickHouse, it represents a robust integration option.
+
+* [**JDBC Connect Sink**](./kafka-connect-jdbc.md) - The Kafka Connect JDBC Sink connector allows you to export data from Kafka topics to any relational database with a JDBC driver.
+
+* **Custom code** - Custom code using the respective client libraries for Kafka and ClickHouse may be appropriate in cases where custom processing of events is required. This is beyond the scope of this documentation.
```
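
For illustration, the **read** path described in the new Kafka Table Engine section boils down to three objects: a Kafka engine table that consumes the topic (it acts as a queue, not storage), a MergeTree table that stores the rows, and a materialized view connecting the two. This is a minimal sketch, assuming a hypothetical `events` topic carrying JSON messages and a broker reachable at `kafka:9092`; none of these names come from the commit:

```sql
-- All names and addresses below are illustrative assumptions.
-- 1. A Kafka engine table that consumes the topic (acts as a queue, not storage).
CREATE TABLE kafka_events_queue
(
    `timestamp` DateTime,
    `message` String
)
ENGINE = Kafka
SETTINGS
    kafka_broker_list = 'kafka:9092',          -- assumed broker address
    kafka_topic_list = 'events',               -- assumed topic name
    kafka_group_name = 'clickhouse-consumers', -- consumer group used for offset tracking
    kafka_format = 'JSONEachRow';              -- messages assumed to be JSON

-- 2. A MergeTree table that actually stores the consumed rows.
CREATE TABLE kafka_events
(
    `timestamp` DateTime,
    `message` String
)
ENGINE = MergeTree
ORDER BY timestamp;

-- 3. A materialized view that continuously moves rows from the queue into storage.
CREATE MATERIALIZED VIEW kafka_events_mv TO kafka_events AS
SELECT timestamp, message
FROM kafka_events_queue;
```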

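The **write** path works in reverse: rows inserted into a Kafka engine table are published to the configured topic. Another minimal sketch, reusing the hypothetical `kafka_events` table from above and an assumed output topic `events_out`:

```sql
-- Hypothetical sink table pointing at an assumed output topic.
CREATE TABLE kafka_events_out
(
    `timestamp` DateTime,
    `message` String
)
ENGINE = Kafka
SETTINGS
    kafka_broker_list = 'kafka:9092',         -- assumed broker address
    kafka_topic_list = 'events_out',          -- assumed output topic
    kafka_group_name = 'clickhouse-producer', -- the engine expects a group even when used as a sink
    kafka_format = 'JSONEachRow';

-- Each row inserted here is published to the topic as a JSON message.
INSERT INTO kafka_events_out
SELECT timestamp, message
FROM kafka_events;
```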