
Commit e81955c

Commit message: fix

Parent: 5fba35e


docs/integrations/data-ingestion/clickpipes/object-storage/02_reference.md

Lines changed: 3 additions & 90 deletions
@@ -15,85 +15,8 @@ import S3svg from '@site/static/images/integrations/logos/amazon_s3_logo.svg';
 import Gcssvg from '@site/static/images/integrations/logos/gcs.svg';
 import DOsvg from '@site/static/images/integrations/logos/digitalocean.svg';
 import ABSsvg from '@site/static/images/integrations/logos/azureblobstorage.svg';
-import cp_step0 from '@site/static/images/integrations/data-ingestion/clickpipes/cp_step0.png';
-import cp_step1 from '@site/static/images/integrations/data-ingestion/clickpipes/cp_step1.png';
-import cp_step2_object_storage from '@site/static/images/integrations/data-ingestion/clickpipes/cp_step2_object_storage.png';
-import cp_step3_object_storage from '@site/static/images/integrations/data-ingestion/clickpipes/cp_step3_object_storage.png';
-import cp_step4a from '@site/static/images/integrations/data-ingestion/clickpipes/cp_step4a.png';
-import cp_step4a3 from '@site/static/images/integrations/data-ingestion/clickpipes/cp_step4a3.png';
-import cp_step4b from '@site/static/images/integrations/data-ingestion/clickpipes/cp_step4b.png';
-import cp_step5 from '@site/static/images/integrations/data-ingestion/clickpipes/cp_step5.png';
-import cp_success from '@site/static/images/integrations/data-ingestion/clickpipes/cp_success.png';
-import cp_remove from '@site/static/images/integrations/data-ingestion/clickpipes/cp_remove.png';
-import cp_destination from '@site/static/images/integrations/data-ingestion/clickpipes/cp_destination.png';
-import cp_overview from '@site/static/images/integrations/data-ingestion/clickpipes/cp_overview.png';
 import Image from '@theme/IdealImage';

-# Integrating object storage with ClickHouse Cloud
-Object Storage ClickPipes provide a simple and resilient way to ingest data from Amazon S3, Google Cloud Storage, Azure Blob Storage, and DigitalOcean Spaces into ClickHouse Cloud. Both one-time and continuous ingestion are supported with exactly-once semantics.
-
-## Prerequisite {#prerequisite}
-You have familiarized yourself with the [ClickPipes intro](./index.md).
-
-## Creating your first ClickPipe {#creating-your-first-clickpipe}
-
-1. In the cloud console, select the `Data Sources` button on the left-side menu and click on "Set up a ClickPipe".
-
-<Image img={cp_step0} alt="Select imports" size="lg" border/>
-
-2. Select your data source.
-
-<Image img={cp_step1} alt="Select data source type" size="lg" border/>
-
-3. Fill out the form by providing your ClickPipe with a name, a description (optional), your IAM role or credentials, and the bucket URL. You can specify multiple files using bash-like wildcards. For more information, [see the documentation on using wildcards in the path](#limitations).
-
-<Image img={cp_step2_object_storage} alt="Fill out connection details" size="lg" border/>
-
-4. The UI will display a list of files in the specified bucket. Select your data format (we currently support a subset of ClickHouse formats) and choose whether you want to enable continuous ingestion ([more details below](#continuous-ingest)).
-
-<Image img={cp_step3_object_storage} alt="Set data format and topic" size="lg" border/>
-
-5. In the next step, you can select whether to ingest data into a new ClickHouse table or reuse an existing one. Follow the instructions on the screen to modify your table name, schema, and settings. You can see a real-time preview of your changes in the sample table at the top.
-
-<Image img={cp_step4a} alt="Set table, schema, and settings" size="lg" border/>
-
-You can also customize the advanced settings using the controls provided.
-
-<Image img={cp_step4a3} alt="Set advanced controls" size="lg" border/>
-
-6. Alternatively, you can ingest your data into an existing ClickHouse table. In that case, the UI lets you map fields from the source to the ClickHouse fields in the selected destination table.
-
-<Image img={cp_step4b} alt="Use an existing table" size="lg" border/>
-
-:::info
-You can also map [virtual columns](../../sql-reference/table-functions/s3#virtual-columns), such as `_path` or `_size`, to fields.
-:::
-
-7. Finally, you can configure permissions for the internal ClickPipes user.
-
-**Permissions:** ClickPipes will create a dedicated user for writing data into the destination table. You can assign this internal user a custom role or one of the predefined roles:
-  - `Full access`: full access to the cluster. Required if you use a materialized view or dictionary with the destination table.
-  - `Only destination table`: `INSERT` permissions on the destination table only.
-
-<Image img={cp_step5} alt="Permissions" size="lg" border/>
-
-8. By clicking on "Complete Setup", the system will register your ClickPipe, and you'll see it listed in the summary table.
-
-<Image img={cp_success} alt="Success notice" size="sm" border/>
-
-<Image img={cp_remove} alt="Remove notice" size="lg" border/>
-
-The summary table provides controls to display sample data from the source or the destination table in ClickHouse,
-
-<Image img={cp_destination} alt="View destination" size="lg" border/>
-
-as well as controls to remove the ClickPipe and display a summary of the ingest job.
-
-<Image img={cp_overview} alt="View overview" size="lg" border/>
-
-9. **Congratulations!** You have successfully set up your first ClickPipe. If this is a streaming ClickPipe, it will run continuously, ingesting data in real time from your remote data source. Otherwise, it will ingest the batch and complete.
-
 ## Supported data sources {#supported-data-sources}

 | Name |Logo|Type| Status | Description |
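
Two details from the removed walkthrough benefit from a concrete snippet. The ClickHouse SQL sketch below illustrates the virtual columns mentioned in step 6 and the `INSERT`-only grant behind the `Only destination table` role in step 7; the bucket URL, database, table, and user names are placeholders, and in practice ClickPipes provisions and manages its internal user itself.

```sql
-- Virtual columns such as _path and _size are exposed when ClickHouse reads
-- from object storage; ClickPipes can map them to destination-table fields.
SELECT
    _path AS source_file,
    _size AS source_file_bytes
FROM s3('https://my-bucket.s3.amazonaws.com/data/*.csv', 'CSVWithNames')
LIMIT 5;

-- The `Only destination table` role corresponds roughly to an INSERT-only
-- grant on the destination table for the internal ClickPipes user.
GRANT INSERT ON my_database.my_destination_table TO clickpipes_user;
```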
@@ -135,9 +58,9 @@ To increase the throughput on large ingest jobs, we recommend scaling the ClickH
 - ClickPipes will only attempt to ingest objects of 10GB or smaller. If a file is greater than 10GB, an error will be appended to the ClickPipes dedicated error table.
 - Azure Blob Storage pipes with continuous ingest on containers with over 100k files will have a latency of around 10–15 seconds in detecting new files. Latency increases with file count.
 - Object Storage ClickPipes **does not** share a listing syntax with the [S3 Table Function](/sql-reference/table-functions/s3), nor Azure with the [AzureBlobStorage Table function](/sql-reference/table-functions/azureBlobStorage).
-  - `?` Substitutes any single character
-  - `*` Substitutes any number of any characters except `/`, including the empty string
-  - `**` Substitutes any number of any characters including `/`, including the empty string
+  - `?` - Substitutes any single character
+  - `*` - Substitutes any number of any characters except `/`, including the empty string
+  - `**` - Substitutes any number of any characters including `/`, including the empty string

 :::note
 This is a valid path (for S3):
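
The wildcard semantics in the bullets above can be sanity-checked by translating each pattern into a regular expression. The regex equivalents below are an assumption inferred from the bullet descriptions rather than from ClickPipes internals, and the paths are made up for illustration.

```sql
-- Each wildcard rewritten as a regular expression:
--   ?  -> .      (any single character)
--   *  -> [^/]*  (any run of characters that does not cross /)
--   ** -> .*     (any run of characters, / included)
SELECT
    path,
    match(path, '^data/2024/file.\\.csv$') AS matches_question_mark, -- data/2024/file?.csv
    match(path, '^data/2024/[^/]*\\.csv$') AS matches_star,          -- data/2024/*.csv
    match(path, '^data/.*\\.csv$')         AS matches_double_star    -- data/**.csv
FROM
(
    SELECT arrayJoin(['data/2024/file1.csv', 'data/2024/nested/file2.csv']) AS path
);
```

The first path matches all three patterns, while the nested one matches only `**`, since `*` stops at `/`.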
@@ -180,13 +103,3 @@ Currently only protected buckets are supported for DigitalOcean spaces. You requ

 ### Azure Blob Storage {#azureblobstorage}
 Currently only protected buckets are supported for Azure Blob Storage. Authentication is done via a connection string, which supports access keys and shared keys. For more information, read [this guide](https://learn.microsoft.com/en-us/azure/storage/common/storage-configure-connection-string).
-
-## FAQ {#faq}
-
-- **Does ClickPipes support GCS buckets prefixed with `gs://`?**
-
-No. For interoperability reasons, we ask you to replace your `gs://` bucket prefix with `https://storage.googleapis.com/`.
-
-- **What permissions does a GCS public bucket require?**
-
-`allUsers` requires appropriate role assignment. The `roles/storage.objectViewer` role must be granted at the bucket level. This role provides the `storage.objects.list` permission, which allows ClickPipes to list all objects in the bucket, as required for onboarding and ingestion. It also includes the `storage.objects.get` permission, which is required to read or download individual objects in the bucket. See [Google Cloud Access Control](https://cloud.google.com/storage/docs/access-control/iam-roles) for further information.
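
To make the removed FAQ concrete: the `gs://` prefix is replaced mechanically, and a bucket opened to `allUsers` with `roles/storage.objectViewer` can be smoke-tested from ClickHouse before setting up a pipe. The bucket and path below are placeholders, and `NOSIGN` assumes the bucket really is public.

```sql
-- gs://my-bucket/data/*.csv is supplied to ClickPipes in its HTTPS form.
-- storage.objects.list (listing) and storage.objects.get (reading) are both
-- included in roles/storage.objectViewer granted at the bucket level.
SELECT count()
FROM s3('https://storage.googleapis.com/my-bucket/data/*.csv', NOSIGN, 'CSVWithNames');
```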
