
Commit 2e9e688

Authored by claude[bot], rubenfiszel, and hcourdent
docs: add DuckDB workspace storage s3:// notation documentation (#1029)
Add documentation for s3:// notation as an alternative to passing s3 objects as parameters in DuckDB scripts:

- Primary workspace: s3:///path/to/file
- Secondary storage: s3://<secondary_storage>/path/to/file
- Glob pattern support: s3:///myfiles/*.parquet

Resolves #1028

Co-authored-by: claude[bot] <209825114+claude[bot]@users.noreply.github.com>
Co-authored-by: Ruben Fiszel <[email protected]>
Co-authored-by: Henri Courdent <[email protected]>
1 parent 4ac79ad commit 2e9e688
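
For illustration, a minimal sketch of the equivalence described in the commit message, mirroring the examples added in the diff below; the `$file` parameter comes from the existing docs context, and the path is a hypothetical placeholder:

```sql
-- Existing approach: the s3 object is passed to the script as a parameter.
SELECT * FROM read_parquet($file);

-- Equivalent direct reference to primary workspace storage via s3:// notation
-- (hypothetical example path).
SELECT * FROM read_parquet('s3:///path/to/file.parquet');
```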

File tree

1 file changed: +17 -0 lines changed
  • docs/getting_started/0_scripts_quickstart/5_sql_quickstart

docs/getting_started/0_scripts_quickstart/5_sql_quickstart/index.mdx

Lines changed: 17 additions & 0 deletions
@@ -538,6 +538,23 @@ You can then query this file using the standard read_csv/read_parquet/read_json
 SELECT * FROM read_parquet($file)
 ```
 
+Alternatively, you can reference files on the workspace directly using s3:// notation. This is equivalent to passing an s3 object as a parameter:
+
+For primary workspace storage:
+```sql
+SELECT * FROM read_parquet('s3:///path/to/file.parquet')
+```
+
+For secondary storage:
+```sql
+SELECT * FROM read_parquet('s3://<secondary_storage>/path/to/file.parquet')
+```
+
+This notation also works with glob patterns:
+```sql
+SELECT * FROM read_parquet('s3:///myfiles/*.parquet')
+```
+
 You can also attach to other database resources (BigQuery, PostgreSQL and MySQL). We use the official and community DuckDB extensions under the hood :
 ```sql
 ATTACH '$res:u/demo/amazed_postgresql' AS db (TYPE postgres);
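
As a usage note, the s3:// notation composes with ordinary DuckDB SQL; a minimal sketch building on the glob example from the diff (the `category` column is a hypothetical placeholder):

```sql
-- Aggregate across every parquet file under a workspace storage prefix.
-- 'myfiles' matches the glob example above; 'category' is a hypothetical column.
SELECT category, COUNT(*) AS n
FROM read_parquet('s3:///myfiles/*.parquet')
GROUP BY category
ORDER BY n DESC;
```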
