---
layout: default
title: Frequently Asked Questions
body_class: faq
toc: false
---
DuckLake provides a lightweight one-stop solution if you need a lakehouse, i.e., a data lake with a catalog.
You can use DuckLake for a “multiplayer DuckDB” setup with multiple DuckDB instances reading and writing the same dataset – a concurrency model not supported by vanilla DuckDB.
If you only use DuckDB for both your DuckLake entry point and your catalog database, you can still benefit from DuckLake: you can run [time travel queries]({% link docs/stable/duckdb/usage/time_travel.md %}), exploit [data partitioning]({% link docs/stable/duckdb/advanced_features/partitioning.md %}), and can store your data in multiple files instead of using a single (potentially very large) database file.
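A minimal sketch of such a DuckDB-only setup, assuming the attach syntax from the DuckLake documentation; `my_lake.ducklake` and `my_lake_files/` are placeholder names:

```sql
-- Attach a DuckLake that uses a local DuckDB file as its catalog database;
-- table data is stored as Parquet files under the given data path.
ATTACH 'ducklake:my_lake.ducklake' AS my_lake (DATA_PATH 'my_lake_files/');
USE my_lake;

CREATE TABLE events (ts TIMESTAMP, payload VARCHAR);
INSERT INTO events VALUES (now(), 'first batch');

-- Partition the table so that new data files are split by year.
ALTER TABLE events SET PARTITIONED BY (year(ts));

INSERT INTO events VALUES (now(), 'second batch');

-- Time travel: query the table as of an earlier snapshot version.
FROM events AT (VERSION => 1);
```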
DuckLake is both a lakehouse format and an open table format. When comparing to other technologies, DuckLake is similar to Delta Lake with Unity Catalog and Iceberg with Lakekeeper or Polaris.
“DuckLake” can refer to a number of things:
- The DuckLake lakehouse format that uses a catalog database and a Parquet storage to store data.
- A DuckLake instance storing a dataset with the DuckLake lakehouse format.
- The [`ducklake` DuckDB extension]({% link docs/stable/duckdb/introduction.md %}), which supports reading/writing datasets using the DuckLake format.
You can download the [logo package]({% link images/logo/DuckLake_Logo-package.zip %}). You can also download individual logos:
- Dark mode, inline layout: [png]({% link images/logo/DuckLake-dark-inline.png %}), [svg]({% link images/logo/DuckLake-dark-inline.svg %})
- Dark mode, stacked layout: [png]({% link images/logo/DuckLake-dark-stacked.png %}), [svg]({% link images/logo/DuckLake-dark-stacked.svg %})
- Dark mode, logo only: [png]({% link images/logo/DuckLake-dark-icon.png %}), [svg]({% link images/logo/DuckLake-dark-icon.svg %})
- Light mode, inline layout: [png]({% link images/logo/DuckLake-light-inline.png %}), [svg]({% link images/logo/DuckLake-light-inline.svg %})
- Light mode, stacked layout: [png]({% link images/logo/DuckLake-light-stacked.png %}), [svg]({% link images/logo/DuckLake-light-stacked.svg %})
- Light mode, logo only: [png]({% link images/logo/DuckLake-light-icon.png %}), [svg]({% link images/logo/DuckLake-light-icon.svg %})
We have several [talks and podcast episodes on DuckLake]({% link media/index.html %}).
Additionally, consider visiting the awesome-ducklake repository maintained by community member Emil Sadek.
DuckLake needs a storage layer and a catalog database. Both components can be picked from a wide range of options. The storage system can be a blob storage (object storage), a block storage, or a file storage. For the catalog database, any SQL-compatible database works that supports ACID operations and primary keys.
DuckLake can store the data files (Parquet files) on the AWS S3 blob storage or compatible solutions such as Azure Blob Storage, Google Cloud Storage or Cloudflare R2. You can run the catalog database anywhere, e.g., in an AWS Aurora database.
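A sketch of such a cloud setup, assuming the `ducklake:postgres:` attach string and DuckDB's `CREATE SECRET` syntax; the hostnames, credentials, and bucket names are placeholders:

```sql
-- One-time setup.
INSTALL ducklake;
INSTALL postgres;

-- Placeholder credential for the S3 bucket holding the Parquet files.
CREATE SECRET s3_secret (
    TYPE s3,
    KEY_ID 'my_key_id',
    SECRET 'my_secret',
    REGION 'us-east-1'
);

-- Use a PostgreSQL database (e.g., AWS Aurora) as the catalog database,
-- with the data files stored on S3.
ATTACH 'ducklake:postgres:dbname=ducklake_catalog host=my-db-host user=ducklake'
    AS my_lake (DATA_PATH 's3://my-bucket/my-lake/');
```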
While we tested DuckLake extensively, it is not yet production-ready, as indicated by its version number {{ site.stable_ducklake_version }}. We expect DuckLake to mature by Q2 2026.
DuckLake piggybacks on the authentication of the metadata catalog database. For example, if your catalog database is PostgreSQL, you can use PostgreSQL's authentication and authorization methods to protect your DuckLake. This is particularly effective when enabling encryption of DuckLake files.
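A sketch of encryption at attach time, assuming the `ENCRYPTED` option described in the DuckLake documentation; the catalog and bucket names are placeholders:

```sql
-- With ENCRYPTED set, DuckLake writes encrypted Parquet files and keeps the
-- encryption keys in the catalog database, so access to the data files alone
-- is not sufficient to read the data.
ATTACH 'ducklake:postgres:dbname=ducklake_catalog' AS my_lake
    (DATA_PATH 's3://my-bucket/my-lake/', ENCRYPTED);
```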
The “small files problem” is a well-known problem in data lake formats and occurs, for example, when data is inserted in small batches, yielding many small files that each store only a small amount of data. DuckLake significantly mitigates this problem by storing the metadata in a database system (the catalog database) and making the compaction step simple. DuckLake also harnesses the catalog database to stage data (a technique called “data inlining”) before serializing it into Parquet files. Further improvements are on the roadmap.
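The compaction and inlining steps can be sketched as follows, assuming the `ducklake_merge_adjacent_files` and `ducklake_flush_inlined_data` functions from the DuckLake documentation and an attached DuckLake named `my_lake`:

```sql
-- Compaction: rewrite many small adjacent Parquet files into fewer,
-- larger ones.
CALL ducklake_merge_adjacent_files('my_lake');

-- Data inlining: small inserts staged in the catalog database can be
-- flushed into Parquet files in one go.
CALL ducklake_flush_inlined_data('my_lake');
```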
Yes, we published a DuckLake that contains the Dutch Railway Dataset. This DuckLake uses DuckDB as its catalog database and is served from object storage. To attach to it from a DuckDB instance, run:
```sql
ATTACH 'https://blobs.duckdb.org/datalake/nl-railway.ducklake' AS nl_railway
    (TYPE ducklake);
USE nl_railway;
FROM services LIMIT 1;
```

No. Similarly to other lakehouse technologies, DuckLake does not support constraints, keys, or indexes. For more information, see the [list of unsupported features]({% link docs/stable/duckdb/unsupported_features.md %}).
Yes. Starting with v0.3, you can copy from [DuckLake to Iceberg]({% post_url 2025-09-17-ducklake-03 %}#interoperability-with-iceberg).
The data files of DuckLake must be stored in Parquet. Using DuckDB files as storage is not supported at the moment.
No. The only limitation is the catalog database's performance, but even with a relatively slow catalog database, you can store terabytes of data and millions of snapshots.
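Snapshots can be inspected and pruned with the maintenance functions from the DuckLake documentation; a sketch, assuming an attached DuckLake named `my_lake`:

```sql
-- List the snapshots of an attached DuckLake.
FROM ducklake_snapshots('my_lake');

-- Expire snapshots older than a week, then delete data files that are
-- no longer referenced by any remaining snapshot.
CALL ducklake_expire_snapshots('my_lake', older_than => now() - INTERVAL 7 DAY);
CALL ducklake_cleanup_old_files('my_lake', cleanup_all => true);
```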
DuckLake receives extensive testing, including running the applicable subset of DuckDB's thorough test suite. That said, if you encounter any problems using DuckLake, please submit an issue in the DuckLake issue tracker.
If you encounter any problems using DuckLake, please submit an issue in the DuckLake issue tracker. If you have any suggestions or feature requests, please open a ticket in DuckLake's discussion forum. You are also welcome to implement support in other systems for DuckLake following the [specification]({% link docs/stable/specification/introduction.md %}).
The [DuckLake specification]({% link docs/stable/specification/introduction.md %}) and the [ducklake DuckDB extension]({% link docs/stable/duckdb/introduction.md %}) are released under the MIT license.
Yes, you can download the documentation as a single Markdown file and as a PDF.