Commit 3621c92

Merge pull request #3408 from Blargian/move_images_to_static
move images to static
2 parents 487d2cf + ab78796

48 files changed: +80 -55 lines

Large commits have some content hidden by default, so only a subset of the changed files is shown below.
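The change is mechanical and repeated across every file below: each relative markdown image reference is replaced by an explicit static import plus a JSX `<img>` tag. As a minimal before/after sketch (using lines taken from the first file in this commit), the markdown form

```mdx
![Select Postgres](./images/postgres-tile.jpg)
```

becomes an import from the site-wide static directory plus a JSX element:

```mdx
import postgres_tile from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/postgres-tile.jpg'

<img src={postgres_tile} alt="Select Postgres" />
```

Note how kebab-case filenames map to snake_case identifiers (e.g. `select-replication-slot.jpg` → `select_replication_slot`), since MDX import names must be valid JavaScript identifiers.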

docs/integrations/data-ingestion/clickpipes/postgres/index.md

Lines changed: 14 additions & 9 deletions
````diff
@@ -7,7 +7,12 @@ slug: /integrations/clickpipes/postgres
 import BetaBadge from '@theme/badges/BetaBadge';
 import cp_service from '@site/static/images/integrations/data-ingestion/clickpipes/cp_service.png';
 import cp_step0 from '@site/static/images/integrations/data-ingestion/clickpipes/cp_step0.png';
-
+import postgres_tile from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/postgres-tile.jpg'
+import postgres_connection_details from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/postgres-connection-details.jpg'
+import ssh_tunnel from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/ssh-tunnel.jpg'
+import select_replication_slot from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/select-replication-slot.jpg'
+import select_destination_db from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/select-destination-db.jpg'
+import ch_permissions from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/ch-permissions.jpg'

 # Ingesting Data from Postgres to ClickHouse (using CDC)

@@ -63,7 +68,7 @@ Make sure you are logged in to your ClickHouse Cloud account. If you don't have

 3. Select the `Postgres CDC` tile

-![Select Postgres](./images/postgres-tile.jpg)
+<img src={postgres_tile} alt="Select Postgres" />

 ### Adding your source Postgres database connection {#adding-your-source-postgres-database-connection}

@@ -76,16 +81,17 @@ Make sure you are logged in to your ClickHouse Cloud account. If you don't have

 :::

-![Fill in connection details](./images/postgres-connection-details.jpg)
+<img src={postgres_connection_details} alt="Fill in connection details" />

 #### (Optional) Setting up SSH Tunneling {#optional-setting-up-ssh-tunneling}

 You can specify SSH tunneling details if your source Postgres database is not publicly accessible.

+
 1. Enable the "Use SSH Tunnelling" toggle.
 2. Fill in the SSH connection details.

-![SSH tunneling](./images/ssh-tunnel.jpg)
+<img src={ssh_tunnel} alt="SSH tunneling" />

 3. To use Key-based authentication, click on "Revoke and generate key pair" to generate a new key pair and copy the generated public key to your SSH server under `~/.ssh/authorized_keys`.
 4. Click on "Verify Connection" to verify the connection.
@@ -102,7 +108,7 @@ Once the connection details are filled in, click on "Next".

 5. Make sure to select the replication slot from the dropdown list you created in the prerequisites step.

-![Select replication slot](./images/select-replication-slot.jpg)
+<img src={select_replication_slot} alt="Select replication slot" />

 #### Advanced Settings {#advanced-settings}

@@ -119,8 +125,8 @@ You can configure the Advanced settings if needed. A brief description of each s

 6. Here you can select the destination database for your ClickPipe. You can either select an existing database or create a new one.

-![Select destination database](./images/select-destination-db.jpg)
-
+<img src={select_destination_db} alt="Select destination database" />
+
 7. You can select the tables you want to replicate from the source Postgres database. While selecting the tables, you can also choose to rename the tables in the destination ClickHouse database as well as exclude specific columns.

 :::warning
@@ -133,8 +139,7 @@ You can configure the Advanced settings if needed. A brief description of each s

 8. Select the "Full access" role from the permissions dropdown and click "Complete Setup".

-![Review permissions](./images/ch-permissions.jpg)
-
+<img src={ch_permissions} alt="Review permissions" />

 ## What's next? {#whats-next}
````

docs/integrations/data-ingestion/clickpipes/postgres/source/crunchy-postgres.md

Lines changed: 5 additions & 3 deletions
````diff
@@ -4,6 +4,9 @@ description: Set up Crunchy Bridge Postgres as a source for ClickPipes
 slug: /integrations/clickpipes/postgres/source/crunchy-postgres
 ---

+import firewall_rules_crunchy_bridge from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/source/setup/crunchy-postgres/firewall_rules_crunchy_bridge.png'
+import add_firewall_rules_crunchy_bridge from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/source/setup/crunchy-postgres/add_firewall_rules_crunchy_bridge.png'
+
 # Crunchy Bridge Postgres Source Setup Guide


@@ -53,10 +56,9 @@ Connect to your Crunchy Bridge Postgres through the `postgres` user and run the

 Safelist [ClickPipes IPs](../../index.md#list-of-static-ips) by adding the Firewall Rules in Crunchy Bridge.

-![Where to find Firewall Rules in Crunchy Bridge?](images/setup/crunchy-postgres/firewall_rules_crunchy_bridge.png)
-
-![Add the Firewall Rules for ClickPipes](images/setup/crunchy-postgres/add_firewall_rules_crunchy_bridge.png)
+<img src={firewall_rules_crunchy_bridge} alt="Where to find Firewall Rules in Crunchy Bridge?"/>

+<img src={add_firewall_rules_crunchy_bridge} alt="Add the Firewall Rules for ClickPipes"/>

````
docs/integrations/data-ingestion/clickpipes/postgres/source/neon-postgres.md

Lines changed: 12 additions & 14 deletions
````diff
@@ -4,19 +4,25 @@ description: Set up Neon Postgres instance as a source for ClickPipes
 slug: /integrations/clickpipes/postgres/source/neon-postgres
 ---

+import neon_commands from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/source/setup/neon-postgres/neon-commands.png'
+import neon_enable_replication from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/source/setup/neon-postgres/neon-enable-replication.png'
+import neon_enabled_replication from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/source/setup/neon-postgres/neon-enabled-replication.png'
+import neon_ip_allow from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/source/setup/neon-postgres/neon-ip-allow.png'
+import neon_conn_details from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/source/setup/neon-postgres/neon-conn-details.png'
+
 # Neon Postgres Source Setup Guide

 This is a guide on how to setup Neon Postgres, which you can use for replication in ClickPipes.
 Make sure you're signed in to your [Neon console](https://console.neon.tech/app/projects) for this setup.

-
 ## Creating a user with permissions {#creating-a-user-with-permissions}

 Let's create a new user for ClickPipes with the necessary permissions suitable for CDC,
 and also create a publication that we'll use for replication.

 For this, you can head over to the **SQL Console** tab.
 Here, we can run the following SQL commands:
+
 ```sql
 CREATE USER clickpipes_user PASSWORD 'clickpipes_password';
 GRANT USAGE ON SCHEMA "public" TO clickpipes_user;
@@ -30,21 +36,19 @@
 CREATE PUBLICATION clickpipes_publication FOR ALL TABLES;
 ```

-![User and publication commands](images/setup/neon-postgres/neon-commands.png)
-
+<img src={neon_commands} alt="User and publication commands"/>

 Click on **Run** to have a publication and a user ready.

 ## Enable Logical Replication {#enable-logical-replication}
 In Neon, you can enable logical replication through the UI. This is necessary for ClickPipes's CDC to replicate data.
 Head over to the **Settings** tab and then to the **Logical Replication** section.

-![Enable logical replication](images/setup/neon-postgres/neon-enable-replication.png)
+<img src={neon_enable_replication} alt="Enable logical replication"/>

 Click on **Enable** to be all set here. You should see the below success message once you enable it.

-![Logical replication enabled](images/setup/neon-postgres/neon-enabled-replication.png)
-
+<img src={neon_enabled_replication} alt="Logical replication enabled"/>

 Let's verify the below settings in your Neon Postgres instance:
 ```sql
@@ -53,24 +57,18 @@
 SHOW max_wal_senders; -- should be 10
 SHOW max_replication_slots; -- should be 10
 ```
-
 ## IP Whitelisting (For Neon Enterprise plan) {#ip-whitelisting-for-neon-enterprise-plan}
 If you have Neon Enterprise plan, you can whitelist the [ClickPipes IPs](../../index.md#list-of-static-ips) to allow replication from ClickPipes to your Neon Postgres instance.
 To do this you can click on the **Settings** tab and go to the **IP Allow** section.

-![Allow IPs screen](images/setup/neon-postgres/neon-ip-allow.png)
-
+<img src={neon_ip_allow} alt="Allow IPs screen"/>

 ## Copy Connection Details {#copy-connection-details}
 Now that we have the user, publication ready and replication enabled, we can copy the connection details to create a new ClickPipe.
 Head over to the **Dashboard** and at the text box where it shows the connection string,
 change the view to **Parameters Only**. We will need these parameters for our next step.

-![Connection details](images/setup/neon-postgres/neon-conn-details.png)
-
-
-
-
+<img src={neon_conn_details} alt="Connection details"/>

 ## What's next? {#whats-next}
````

docs/integrations/data-ingestion/clickpipes/postgres/source/supabase.md

Lines changed: 5 additions & 4 deletions
````diff
@@ -4,6 +4,9 @@ description: Set up Supabase instance as a source for ClickPipes
 slug: /integrations/clickpipes/postgres/source/supabase
 ---

+import supabase_commands from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/source/setup/supabase/supabase-commands.jpg'
+import supabase_connection_details from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/source/setup/supabase/supabase-connection-details.jpg'
+
 # Supabase Source Setup Guide

 This is a guide on how to setup Supabase Postgres for usage in ClickPipes.
@@ -35,8 +38,7 @@ Here, we can run the following SQL commands:
 CREATE PUBLICATION clickpipes_publication FOR ALL TABLES;
 ```

-![User and publication commands](images/setup/supabase/supabase-commands.jpg)
-
+<img src={supabase_commands} alt="User and publication commands"/>


 Click on **Run** to have a publication and a user ready.
@@ -69,8 +71,7 @@ Head over to your Supabase Project's `Project Settings` -> `Database` (under `Co

 **Important**: Disable `Display connection pooler` on this page and head over to the `Connection parameters` section and note/copy the parameters.

-![Locate Supabase Connection Details](images/setup/supabase/supabase-connection-details.jpg)
-
+<img src={supabase_connection_details} alt="Locate Supabase Connection Details"/>

 :::info
````

docs/integrations/data-ingestion/dbms/dynamodb/index.md

Lines changed: 6 additions & 4 deletions
````diff
@@ -8,6 +8,9 @@ keywords: [clickhouse, DynamoDB, connect, integrate, table]

 import CloudNotSupportedBadge from '@theme/badges/CloudNotSupportedBadge';
 import ExperimentalBadge from '@theme/badges/ExperimentalBadge';
+import dynamodb_kinesis_stream from '@site/static/images/integrations/data-ingestion/dbms/dynamodb/dynamodb-kinesis-stream.png';
+import dynamodb_s3_export from '@site/static/images/integrations/data-ingestion/dbms/dynamodb/dynamodb-s3-export.png';
+import dynamodb_map_columns from '@site/static/images/integrations/data-ingestion/dbms/dynamodb/dynamodb-map-columns.png';

 # CDC from DynamoDB to ClickHouse

@@ -27,14 +30,14 @@ Data will be ingested into a `ReplacingMergeTree`. This table engine is commonly
 First, you will want to enable a Kinesis stream on your DynamoDB table to capture changes in real-time. We want to do this before we create the snapshot to avoid missing any data.
 Find the AWS guide located [here](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/kds.html).

-![DynamoDB Kinesis Stream](../images/dynamodb-kinesis-stream.png)
+<img src={dynamodb_kinesis_stream} alt="DynamoDB Kinesis Stream"/>

 ## 2. Create the snapshot {#2-create-the-snapshot}

 Next, we will create a snapshot of the DynamoDB table. This can be achieved through an AWS export to S3. Find the AWS guide located [here](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/S3DataExport.HowItWorks.html).
 **You will want to do a "Full export" in the DynamoDB JSON format.**

-![DynamoDB S3 Export](../images/dynamodb-s3-export.png)
+<img src={dynamodb_s3_export} alt="DynamoDB S3 Export"/>

 ## 3. Load the snapshot into ClickHouse {#3-load-the-snapshot-into-clickhouse}

@@ -124,8 +127,7 @@ Now we can set up the Kinesis ClickPipe to capture real-time changes from the Ki
 - `ApproximateCreationDateTime`: `version`
 - Map other fields to the appropriate destination columns as shown below

-![DynamoDB Map Columns](../images/dynamodb-map-columns.png)
-
+<img src={dynamodb_map_columns} alt="DynamoDB Map Columns"/>

 ## 5. Cleanup (optional) {#5-cleanup-optional}
````

docs/integrations/data-ingestion/dbms/jdbc-with-clickhouse.md

Lines changed: 6 additions & 3 deletions
````diff
@@ -8,6 +8,9 @@ description: The ClickHouse JDBC Bridge allows ClickHouse to access data from an

 import Tabs from '@theme/Tabs';
 import TabItem from '@theme/TabItem';
+import Jdbc01 from '@site/static/images/integrations/data-ingestion/dbms/jdbc-01.png';
+import Jdbc02 from '@site/static/images/integrations/data-ingestion/dbms/jdbc-02.png';
+import Jdbc03 from '@site/static/images/integrations/data-ingestion/dbms/jdbc-03.png';

 # Connecting ClickHouse to external data sources with JDBC

@@ -16,7 +19,7 @@ Using JDBC requires the ClickHouse JDBC bridge, so you will need to use `clickho
 :::

 **Overview:** The <a href="https://github.com/ClickHouse/clickhouse-jdbc-bridge" target="_blank">ClickHouse JDBC Bridge</a> in combination with the [jdbc table function](/sql-reference/table-functions/jdbc.md) or the [JDBC table engine](/engines/table-engines/integrations/jdbc.md) allows ClickHouse to access data from any external data source for which a <a href="https://en.wikipedia.org/wiki/JDBC_driver" target="_blank">JDBC driver</a> is available:
-<img src={require('./images/jdbc-01.png').default} class="image" alt="ClickHouse JDBC Bridge"/>
+<img src={Jdbc01} class="image" alt="ClickHouse JDBC Bridge"/>
 This is handy when there is no native built-in [integration engine](/engines/table-engines/index.md#integration-engines-integration-engines), table function, or external dictionary for the external data source available, but a JDBC driver for the data source exists.

 You can use the ClickHouse JDBC Bridge for both reads and writes. And in parallel for multiple external data sources, e.g. you can run distributed queries on ClickHouse across multiple external and internal data sources in real time.
@@ -36,7 +39,7 @@ You have access to a machine that has:

 ## Install the ClickHouse JDBC Bridge locally {#install-the-clickhouse-jdbc-bridge-locally}

-The easiest way to use the ClickHouse JDBC Bridge is to install and run it on the same host where also ClickHouse is running:<img src={require('./images/jdbc-02.png').default} class="image" alt="ClickHouse JDBC Bridge locally"/>
+The easiest way to use the ClickHouse JDBC Bridge is to install and run it on the same host where also ClickHouse is running:<img src={Jdbc02} class="image" alt="ClickHouse JDBC Bridge locally"/>

 Let's start by connecting to the Unix shell on the machine where ClickHouse is running and create a local folder where we will later install the ClickHouse JDBC Bridge into (feel free to name the folder anything you like and put it anywhere you like):
 ```bash
@@ -137,7 +140,7 @@ As the first parameter for the jdbc table function we are using the name of the
 ## Install the ClickHouse JDBC Bridge externally {#install-the-clickhouse-jdbc-bridge-externally}

 For a distributed ClickHouse cluster (a cluster with more than one ClickHouse host) it makes sense to install and run the ClickHouse JDBC Bridge externally on its own host:
-<img src={require('./images/jdbc-03.png').default} class="image" alt="ClickHouse JDBC Bridge externally"/>
+<img src={Jdbc03} class="image" alt="ClickHouse JDBC Bridge externally"/>
 This has the advantage that each ClickHouse host can access the JDBC Bridge. Otherwise the JDBC Bridge would need to be installed locally for each ClickHouse instance that is supposed to access external data sources via the Bridge.

 In order to install the ClickHouse JDBC Bridge externally, we do the following steps:
````
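The JDBC page also shows the other half of the cleanup: inline `require('./images/…').default` expressions are swapped for the same kind of top-level static import. Under Docusaurus's webpack setup both forms should resolve to the same bundled image URL, so the following is a sketch of the equivalence rather than a behavioral change (names as in the diff above):

```jsx
// Old style: a CommonJS require evaluated inline inside JSX.
// Webpack's ESM interop puts the asset URL on the module's `.default`.
<img src={require('./images/jdbc-01.png').default} class="image" alt="ClickHouse JDBC Bridge"/>

// New style: a hoisted ES-module import; `Jdbc01` is already the URL string.
import Jdbc01 from '@site/static/images/integrations/data-ingestion/dbms/jdbc-01.png';
<img src={Jdbc01} class="image" alt="ClickHouse JDBC Bridge"/>
```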

docs/integrations/data-ingestion/dbms/postgresql/postgres-vs-clickhouse.md

Lines changed: 3 additions & 1 deletion
````diff
@@ -4,6 +4,8 @@ title: Comparing PostgreSQL and ClickHouse
 keywords: [postgres, postgresql, comparison]
 ---

+import postgresReplicas from '@site/static/images/integrations/data-ingestion/dbms/postgres-replicas.png';
+
 ## Postgres vs ClickHouse: Equivalent and different concepts {#postgres-vs-clickhouse-equivalent-and-different-concepts}

 Users coming from OLTP systems who are used to ACID transactions should be aware that ClickHouse makes deliberate compromises in not fully providing these in exchange for performance. ClickHouse semantics can deliver high durability guarantees and high write throughput if well understood. We highlight some key concepts below that users should be familiar with prior to working with ClickHouse from Postgres.
@@ -32,7 +34,7 @@ The replication process in ClickHouse (1) starts when data is inserted into any

 <br />

-<img src={require('../images/postgres-replicas.png').default}
+<img src={postgresReplicas}
 class="image"
 alt="NEEDS ALT"
 style={{width: '500px'}} />
````

docs/integrations/data-ingestion/etl-tools/dbt/index.md

Lines changed: 2 additions & 1 deletion
````diff
@@ -67,7 +67,8 @@ pip install dbt-clickhouse

 dbt excels when modeling highly relational data. For the purposes of example, we provide a small IMDB dataset with the following relational schema. This dataset originates from the[ relational dataset repository](https://relational.fit.cvut.cz/dataset/IMDb). This is trivial relative to common schemas used with dbt but represents a manageable sample:

-<img src={dbt_01} class="image" alt="IMDB table schema" style={{width: '100%'}} />
+
+<img src={dbt_01} class="image" alt="IMDB table schema" style={{width: '100%'}}/>

 We use a subset of these tables as shown.
````

docs/integrations/data-ingestion/google-dataflow/templates.md

Lines changed: 0 additions & 3 deletions
````diff
@@ -9,9 +9,6 @@ description: Users can ingest data into ClickHouse using Google Dataflow Templat

 Google Dataflow templates provide a convenient way to execute prebuilt, ready-to-use data pipelines without the need to write custom code. These templates are designed to simplify common data processing tasks and are built using [Apache Beam](https://beam.apache.org/), leveraging connectors like `ClickHouseIO` for seamless integration with ClickHouse databases. By running these templates on Google Dataflow, you can achieve highly scalable, distributed data processing with minimal effort.

-
-
-
 ## Why Use Dataflow Templates? {#why-use-dataflow-templates}

 - **Ease of Use**: Templates eliminate the need for coding by offering preconfigured pipelines tailored to specific use cases.
````

docs/integrations/data-ingestion/google-dataflow/templates/bigquery-to-clickhouse.md

Lines changed: 2 additions & 2 deletions
````diff
@@ -6,6 +6,7 @@ description: Users can ingest data from BigQuery into ClickHouse using Google Da
 ---

 import TOCInline from '@theme/TOCInline';
+import dataflow_inqueue_job from '@site/static/images/integrations/data-ingestion/google-dataflow/dataflow-inqueue-job.png'

 # Dataflow BigQuery to ClickHouse template

@@ -136,8 +137,7 @@ job:
 Navigate to the [Dataflow Jobs tab](https://console.cloud.google.com/dataflow/jobs) in your Google Cloud Console to
 monitor the status of the job. You’ll find the job details, including progress and any errors:

-<img src={require('../images/dataflow-inqueue-job.png').default} class="image" alt="DataFlow running job"
-style={{width: '100%', 'background-color': 'transparent'}}/>
+<img src={dataflow_inqueue_job} class="image" alt="DataFlow running job" style={{width: '100%', 'background-color': 'transparent'}}/>

 ## Troubleshooting {#troubleshooting}
````
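Moving images from per-document `images/` folders into `static/` makes stale paths easy to introduce, so a repo-wide existence check is a reasonable follow-up. A hypothetical helper script (not part of this commit; it assumes the docs live under `docs/` and that `@site/static` maps to `./static`):

```js
// check-static-images.js — hypothetical sanity check, not part of this commit.
// Scans .md/.mdx files for imports from '@site/static/...' and reports any
// import whose target file does not exist under ./static.
const fs = require('fs');
const path = require('path');

const DOCS_DIR = 'docs';      // assumption: documentation root
const STATIC_DIR = 'static';  // assumption: '@site/static' maps here

const importRe = /from\s+'@site\/static\/([^']+)'/g;

// Recursively yield every markdown/MDX file below dir.
function* walk(dir) {
  for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
    const p = path.join(dir, entry.name);
    if (entry.isDirectory()) yield* walk(p);
    else if (/\.mdx?$/.test(entry.name)) yield p;
  }
}

let missing = 0;
for (const file of walk(DOCS_DIR)) {
  const text = fs.readFileSync(file, 'utf8');
  for (const m of text.matchAll(importRe)) {
    const target = path.join(STATIC_DIR, m[1]);
    if (!fs.existsSync(target)) {
      console.error(`${file}: missing ${target}`);
      missing += 1;
    }
  }
}
process.exit(missing > 0 ? 1 : 0);
```

Run with `node check-static-images.js` from the repository root; a non-zero exit code means at least one import points at a file that is not in `static/`.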
