# error "The only supported platforms are x86_64 and AArch64, PowerPC (work in progress), s390x (work in progress), loongarch64 (experimental) and RISC-V 64 (experimental)"
docs/en/engines/table-engines/integrations/s3.md (7 additions, 2 deletions)
@@ -174,6 +174,10 @@ Received exception from server (version 23.4.1):
 Code: 48. DB::Exception: Received from localhost:9000. DB::Exception: Reading from a partitioned S3 storage is not implemented yet. (NOT_IMPLEMENTED)
 ```
 
+## Inserting Data {#inserting-data}
+
+Note that rows can only be inserted into new files. There are no merge cycles or file split operations. Once a file is written, subsequent inserts will fail. To avoid this you can use `s3_truncate_on_insert` and `s3_create_new_file_on_insert` settings. See more details [here](/integrations/s3#inserting-data).
+
 ## Virtual columns {#virtual-columns}
 
 - `_path` — Path to the file. Type: `LowCardinality(String)`.
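To make the new "Inserting Data" behavior concrete, here is a sketch of how the two settings interact with an S3-backed table. The bucket URL, table name, and columns are placeholders for illustration, not part of the PR:

```sql
-- Hypothetical table over S3; URL and credentials are placeholders.
CREATE TABLE s3_events (id UInt64, value String)
    ENGINE = S3('https://my-bucket.s3.amazonaws.com/events.csv', 'CSV');

-- The first insert creates the file; a second insert would fail by default.
INSERT INTO s3_events VALUES (1, 'a');

-- Either overwrite the file on each insert...
SET s3_truncate_on_insert = 1;
-- ...or write each insert to a new numbered file (events.csv.1, events.csv.2, ...).
SET s3_create_new_file_on_insert = 1;
INSERT INTO s3_events VALUES (2, 'b');
```

With `s3_create_new_file_on_insert` enabled, each subsequent insert lands in a fresh sibling file instead of failing, which matches the "no merge cycles or file splits" constraint described above.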
@@ -340,12 +344,12 @@ FROM s3(
 );
 ```
 
-:::note
+:::note
 ClickHouse supports three archive formats:
 ZIP
 TAR
 7Z
-While ZIP and TAR archives can be accessed from any supported storage location, 7Z archives can only be read from the local filesystem where ClickHouse is installed.
+While ZIP and TAR archives can be accessed from any supported storage location, 7Z archives can only be read from the local filesystem where ClickHouse is installed.
 :::
 
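The archive note pairs with the `::` path syntax for addressing files inside an archive. A hypothetical read of a ZIP stored in S3 (the bucket URL is a placeholder):

```sql
-- Read every CSV inside the archive; `::` separates the archive path
-- from the glob applied to file names inside it.
SELECT *
FROM s3('https://my-bucket.s3.amazonaws.com/backups/archive.zip :: *.csv', 'CSVWithNames')
LIMIT 5;
```

A 7Z archive, per the note above, would only work with the `file` engine on the server's local filesystem, not through `s3`.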
@@ -367,3 +371,4 @@ For details on optimizing the performance of the s3 function see [our detailed g
 
-- `codec` - [String](../../../sql-reference/data-types/string.md) containing a [compression codec](/sql-reference/statements/create/table#column-compression-codecs).
+- `codec` - [String](../../../sql-reference/data-types/string.md) containing a [compression codec](/sql-reference/statements/create/table#column_compression_codec).
 - `block_size_bytes` - Block size of compressed data. This is similar to setting both [`max_compress_block_size`](../../../operations/settings/merge-tree-settings.md#max_compress_block_size) and [`min_compress_block_size`](../../../operations/settings/merge-tree-settings.md#min_compress_block_size). The default value is 1 MiB (1048576 bytes).
 
 Both parameters are optional.
@@ -64,7 +64,7 @@ Result:
 ```
 
 :::note
-The result above will differ based on the default compression codec of the server. See [Column Compression Codecs](/sql-reference/statements/create/table#column-compression-codecs).
+The result above will differ based on the default compression codec of the server. See [Column Compression Codecs](/sql-reference/statements/create/table#column_compression_codec).
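The dependence on the server's default codec can be pinned down by declaring codecs explicitly. An illustrative table definition (not part of the PR; names are placeholders):

```sql
-- Explicit per-column codecs override the server's default codec,
-- making compressed sizes reproducible across servers.
CREATE TABLE codec_demo
(
    id      UInt64 CODEC(Delta, ZSTD),
    payload String CODEC(ZSTD(3))
)
ENGINE = MergeTree
ORDER BY id;
```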
docs/en/sql-reference/table-functions/s3.md (9 additions, 4 deletions)
@@ -254,7 +254,7 @@ LIMIT 5;
 
 ## Using S3 credentials (ClickHouse Cloud) {#using-s3-credentials-clickhouse-cloud}
 
-For non-public buckets, users can pass an `aws_access_key_id` and `aws_secret_access_key` to the function. For example:
+For non-public buckets, users can pass an `aws_access_key_id` and `aws_secret_access_key` to the function. For example:
 
 ```sql
 SELECT count() FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/mta/*.tsv', '<KEY>', '<SECRET>', 'TSVWithNames')
@@ -289,20 +289,23 @@ FROM s3(
 );
 ```
 
-:::note
+:::note
 ClickHouse supports three archive formats:
 ZIP
 TAR
 7Z
-While ZIP and TAR archives can be accessed from any supported storage location, 7Z archives can only be read from the local filesystem where ClickHouse is installed.
+While ZIP and TAR archives can be accessed from any supported storage location, 7Z archives can only be read from the local filesystem where ClickHouse is installed.
 :::
 
+## Inserting Data {#inserting-data}
+
+Note that rows can only be inserted into new files. There are no merge cycles or file split operations. Once a file is written, subsequent inserts will fail. See more details [here](/integrations/s3#inserting-data).
+
 ## Virtual Columns {#virtual-columns}
 
 - `_path` — Path to the file. Type: `LowCardinality(String)`. In case of archive, shows path in a format: `"{path_to_archive}::{path_to_file_inside_archive}"`
 - `_file` — Name of the file. Type: `LowCardinality(String)`. In case of archive shows name of the file inside the archive.
-- `_size` — Size of the file in bytes. Type: `Nullable(UInt64)`. If the file size is unknown, the value is `NULL`. In case of archive shows uncompressed file size of the file inside the archive.
+- `_size` — Size of the file in bytes. Type: `Nullable(UInt64)`. If the file size is unknown, the value is `NULL`. In case of archive shows uncompressed file size of the file inside the archive.
 - `_time` — Last modified time of the file. Type: `Nullable(DateTime)`. If the time is unknown, the value is `NULL`.
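The virtual columns listed above can be selected like ordinary columns. A hypothetical query (the bucket URL is a placeholder) that surfaces them:

```sql
-- _path, _file, and _size are supplied by the s3 table function itself;
-- they are not stored in the data files.
SELECT _path, _file, _size
FROM s3('https://my-bucket.s3.amazonaws.com/data/*.csv', 'CSVWithNames')
LIMIT 5;
```

When reading from an archive, `_path` would take the `{path_to_archive}::{path_to_file_inside_archive}` form described above.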