|[Airbyte](/integrations/airbyte)| An open-source data integration platform. It allows the creation of ELT data pipelines and is shipped with more than 140 out-of-the-box connectors. |
|[Apache Spark](/integrations/apache-spark)| A multi-language engine for executing data engineering, data science, and machine learning on single-node machines or clusters. |
|[Apache Flink](/integrations/apache-flink)| Real-time data ingestion and processing into ClickHouse through Flink's DataStream API, with support for batch writes. |
|[Amazon Glue](/integrations/glue)| A fully managed, serverless data integration service provided by Amazon Web Services (AWS) simplifying the process of discovering, preparing, and transforming data for analytics, machine learning, and application development. |
|[Artie](/integrations/artie)| A fully managed real-time data streaming platform that replicates production data into ClickHouse, unlocking customer-facing analytics, operational workflows, and Agentic AI in production. |
|[Azure Synapse](/integrations/azure-synapse)| A fully managed, cloud-based analytics service provided by Microsoft Azure, combining big data and data warehousing to simplify data integration, transformation, and analytics at scale using SQL, Apache Spark, and data pipelines. |
|[Azure Data Factory](/integrations/azure-data-factory)| A cloud-based data integration service that enables you to create, schedule, and orchestrate data workflows at scale. |
|[Apache Beam](/integrations/apache-beam)| An open-source, unified programming model that enables developers to define and execute both batch and stream (continuous) data processing pipelines. |
|[BladePipe](/integrations/bladepipe)| A real-time end-to-end data integration tool with sub-second latency, enabling seamless data flow across platforms. |
|[dbt](/integrations/dbt)| Enables analytics engineers to transform data in their warehouses by simply writing select statements. |
|[dlt](/integrations/data-ingestion/etl-tools/dlt-and-clickhouse)| An open-source library that you can add to your Python scripts to load data from various and often messy data sources into well-structured, live datasets. |
|[Estuary](/integrations/estuary)| A right-time data platform that enables millisecond-latency ETL pipelines with flexible deployment options. |
|[Fivetran](/integrations/fivetran)| An automated data movement platform that moves data out of, into, and across your cloud data platforms. |
|[NiFi](/integrations/nifi)| An open-source workflow management software designed to automate data flow between software systems. |
|[Vector](/integrations/vector)| A high-performance observability data pipeline that puts organizations in control of their observability data. |
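As a concrete illustration of how one of these tools connects to ClickHouse, below is a minimal sketch of a Vector pipeline that tails log files and writes them to a ClickHouse table via Vector's `clickhouse` sink. The file path, endpoint, database, and table names are placeholders to adapt to your deployment:

```toml
# Sketch of a Vector config: read log files, ship them to ClickHouse.
# Paths, endpoint, database, and table below are placeholder values.

[sources.app_logs]
type = "file"
include = ["/var/log/app/*.log"]

[sinks.clickhouse_out]
type = "clickhouse"
inputs = ["app_logs"]
endpoint = "http://localhost:8123"
database = "default"
table = "app_logs"
# Drop event fields that have no matching column instead of failing the insert.
skip_unknown_fields = true
```

The target table must already exist in ClickHouse with columns matching the event fields; `skip_unknown_fields` makes the sink tolerant of extra fields in the log events.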