
Commit 18af98d

Remove reference to archive tables
1 parent 37f786b commit 18af98d

3 files changed: +5 additions, -13 deletions

docs/integrations/data-ingestion/clickpipes/object-storage/amazon-s3/01_overview.md

Lines changed: 1 addition & 9 deletions
@@ -115,17 +115,9 @@ Object Storage ClickPipes follow the POSIX standard for file pattern matching. A

Various types of failures can occur when ingesting large datasets, which can result in partial inserts or duplicate data. Object Storage ClickPipes are resilient to insert failures and provide exactly-once semantics. This is accomplished by using temporary "staging" tables. Data is first inserted into the staging tables. If something goes wrong with this insert, the staging table can be truncated and the insert retried from a clean state. Only when an insert completes successfully are the partitions in the staging table moved to the target table. To read more about this strategy, check out [this blog post](https://clickhouse.com/blog/supercharge-your-clickhouse-data-loads-part3).

-### Archive table {#archive-table}
-
-ClickPipes will create a table next to your destination table with the postfix `s3_clickpipe_<clickpipe_id>_archive`. This table will contain a list of all the files that have been ingested by the ClickPipe. This table is used to track files during ingestion and can be used to verify files have been ingested. The archive table has a [TTL](/engines/table-engines/mergetree-family/mergetree#table_engine-mergetree-ttl) of 7 days.
-
-:::note
-These tables will not be visible using ClickHouse Cloud SQL Console, you will need to connect via an external client either using HTTPS or Native connection to read them.
-:::
-
### Virtual columns {#virtual-columns}

-To track which files have been ingested, incaddlude the `_file` virtual column to the column mapping list. The `_file` virtual column contains the filename of the source object, which can be used to query which files have been processed.
+To track which files have been ingested, include the `_file` virtual column to the column mapping list. The `_file` virtual column contains the filename of the source object, which can be used to query which files have been processed.

## Access control {#access-control}

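The staging-table strategy described in the context above can be sketched in plain SQL along the lines below. This is only an illustration of the pattern from the linked blog post, not how ClickPipes names or manages its internal staging tables; the table names, bucket URL, and partition value are assumptions for the example.

```sql
-- Illustrative sketch of the staging-table pattern; all names are assumptions.

-- Staging table with the same schema and partition key as the target.
CREATE TABLE events_staging AS events;

-- Load the batch into the staging table first.
INSERT INTO events_staging
SELECT * FROM s3('https://my-bucket.s3.amazonaws.com/data/*.parquet', 'Parquet');

-- If the insert fails part-way, discard the partial data and retry:
-- TRUNCATE TABLE events_staging;

-- Only after a successful insert, move the staged partitions into the target.
ALTER TABLE events_staging MOVE PARTITION '2024-01-01' TO TABLE events;
```

Because data only reaches the destination via the final partition move, a failed or retried load never leaves partial rows in the target table.
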
docs/integrations/data-ingestion/clickpipes/object-storage/azure-blob-storage/01_overview.md

Lines changed: 3 additions & 1 deletion
@@ -74,7 +74,9 @@ Object Storage ClickPipes follow the POSIX standard for file pattern matching. A

Various types of failures can occur when ingesting large datasets, which can result in partial inserts or duplicate data. Object Storage ClickPipes are resilient to insert failures and provide exactly-once semantics. This is accomplished by using temporary "staging" tables. Data is first inserted into the staging tables. If something goes wrong with this insert, the staging table can be truncated and the insert retried from a clean state. Only when an insert completes successfully are the partitions in the staging table moved to the target table. To read more about this strategy, check out [this blog post](https://clickhouse.com/blog/supercharge-your-clickhouse-data-loads-part3).

-[//]: # "TODO Verify archive table prefix for ABS"
+### Virtual columns {#virtual-columns}
+
+To track which files have been ingested, include the `_file` virtual column to the column mapping list. The `_file` virtual column contains the filename of the source object, which can be used to query which files have been processed.

## Access control {#access-control}

docs/integrations/data-ingestion/clickpipes/object-storage/google-cloud-storage/01_overview.md

Lines changed: 1 addition & 3 deletions
@@ -76,11 +76,9 @@ Object Storage ClickPipes follow the POSIX standard for file pattern matching. A

Various types of failures can occur when ingesting large datasets, which can result in partial inserts or duplicate data. Object Storage ClickPipes are resilient to insert failures and provide exactly-once semantics. This is accomplished by using temporary "staging" tables. Data is first inserted into the staging tables. If something goes wrong with this insert, the staging table can be truncated and the insert retried from a clean state. Only when an insert completes successfully are the partitions in the staging table moved to the target table. To read more about this strategy, check out [this blog post](https://clickhouse.com/blog/supercharge-your-clickhouse-data-loads-part3).

-[//]: # "TODO Verify archive table prefix for GCS"
-
### Virtual columns {#virtual-columns}

-To track which files have been ingested, incaddlude the `_file` virtual column to the column mapping list. The `_file` virtual column contains the filename of the source object, which can be used to query which files have been processed.
+To track which files have been ingested, include the `_file` virtual column to the column mapping list. The `_file` virtual column contains the filename of the source object, which can be used to query which files have been processed.

## Access control {#access-control}

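As a usage note for the `_file` virtual column referenced in these diffs: once it is mapped to a destination column, the ingested file names can be checked with a simple query. The table name `events` and column name `source_file` below are assumptions for the example, not names defined by ClickPipes.

```sql
-- Assumes the ClickPipe mapped the `_file` virtual column to a destination
-- column named `source_file` in the table `events`.
SELECT DISTINCT source_file
FROM events
ORDER BY source_file;
```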