dbt relies on a read-after-insert consistency model. This is not compatible with ClickHouse clusters that have more than one replica unless you can guarantee that all operations go to the same replica. You may not encounter problems in day-to-day usage of dbt, but depending on your cluster there are strategies to put this guarantee in place:
- If you are using a ClickHouse Cloud cluster, you only need to set `select_sequential_consistency: 1` in your profile's `custom_settings` property, as shown in the sketch after this list. You can find more information about this setting [here](/operations/settings/settings#select_sequential_consistency).
- If you are using a self-hosted cluster, make sure all dbt requests are sent to the same ClickHouse replica. If there is a load balancer in front of it, try using a `replica aware routing`/`sticky sessions` mechanism so that you always reach the same replica. Setting `select_sequential_consistency = 1` in clusters outside ClickHouse Cloud is [not recommended](/operations/settings/settings#select_sequential_consistency).
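As a minimal sketch of the ClickHouse Cloud case (the profile name and connection fields below are placeholders; `custom_settings` is the profile property named above):

```yaml
# profiles.yml -- only the relevant parts; names and hosts are placeholders
clickhouse_cloud:
  target: prod
  outputs:
    prod:
      type: clickhouse
      host: <your-instance>.clickhouse.cloud
      port: 8443
      secure: true
      user: default
      password: "<password>"
      custom_settings:
        select_sequential_consistency: 1  # enforce read-after-insert consistency
```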
## General information about features {#general-information-about-features}
### Materialization: view {#materialization-view}
A dbt model can be created as a [ClickHouse view](/sql-reference/table-functions/view/) and configured using the following syntax:
Project File (`dbt_project.yml`):
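As a minimal sketch (the `<resource-path>` placeholder and `+materialized` key follow standard dbt conventions, nothing adapter-specific):

```yaml
# dbt_project.yml
models:
  <resource-path>:
    +materialized: view
```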
Or config block (`models/<model_name>.sql`):
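And the equivalent as a minimal config-block sketch:

```sql
-- models/<model_name>.sql
{{ config(materialized='view') }}

select ...
```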
To use a [Refreshable Materialized View](/materialized-view/refreshable-materialized-view), please adjust the following configs as needed in your MV model (all of these configs should be set inside a `refreshable` config object):
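As a minimal sketch, assuming `refresh_interval` and `randomize` are among the supported keys of the `refreshable` object (check the adapter's config reference for the exact names):

```sql
-- models/refreshable_mv.sql (hypothetical model)
{{ config(
    materialized='materialized_view',
    refreshable={
        "refresh_interval": "EVERY 1 HOUR",  -- assumed key; rendered as REFRESH EVERY 1 HOUR
        "randomize": "30 MINUTE"             -- assumed key; rendered as RANDOMIZE FOR 30 MINUTE
    }
) }}

select event_type, count() as events
from {{ ref('events') }}
group by event_type
```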
The following keys are used to populate the parameters of the S3 table function:
| Argument | Description |
|----------|-------------|
| structure | The column structure of the data in the bucket, as a list of name/datatype pairs, such as `['id UInt32', 'date DateTime', 'value String']`. If not provided, ClickHouse will infer the structure. |
| aws_access_key_id | The S3 access key ID. |
| aws_secret_access_key | The S3 secret access key. |
| role_arn | The ARN of a ClickhouseAccess IAM role used to securely access the S3 objects. See this [documentation](/cloud/data-sources/secure-s3) for more information. |
| compression | The compression method used with the S3 objects. If not provided, ClickHouse will attempt to determine compression from the file name. |
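To make the mapping concrete, here is a hedged sketch of how these keys line up with the positional arguments of ClickHouse's `s3` table function (the bucket URL and credentials are placeholders):

```sql
select *
from s3(
    'https://my-bucket.s3.amazonaws.com/data/*.csv',  -- placeholder bucket/path
    'AKIA...',                                        -- aws_access_key_id
    '<secret>',                                       -- aws_secret_access_key
    'CSVWithNames',                                   -- format
    'id UInt32, date DateTime, value String',         -- structure
    'gzip'                                            -- compression
)
limit 10;
```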
See
dbt Core v1.10 introduced catalog integration support, which allows adapters to work with external data catalogs.
ClickHouse recently added native support for Apache Iceberg tables and data catalogs. Most of these features are still `experimental`, but you can already use them on a recent ClickHouse version.
* You can use ClickHouse to **query Iceberg tables stored in object storage** (S3, Azure Blob Storage, Google Cloud Storage) using the [Iceberg table engine](/engines/table-engines/integrations/iceberg) and [iceberg table function](/sql-reference/table-functions/iceberg).
* Additionally, ClickHouse provides the [DataLakeCatalog database engine](/engines/database-engines/datalakecatalog), which enables **connection to external data catalogs** including AWS Glue Catalog, Databricks Unity Catalog, Hive Metastore, and REST Catalogs. This allows you to query open table format data (Iceberg, Delta Lake) directly from external catalogs without data duplication. Both approaches are sketched below.
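As a hedged illustration of both approaches (all endpoints, credentials, and names below are placeholders, and the exact `DataLakeCatalog` settings may vary by catalog type):

```sql
-- Query an Iceberg table in object storage via the iceberg table function:
select count(*)
from iceberg(
    'https://my-bucket.s3.amazonaws.com/warehouse/events',  -- placeholder path
    'AKIA...', '<secret>'                                   -- placeholder credentials
);

-- Attach an external REST catalog with the DataLakeCatalog database engine;
-- the SETTINGS shown follow a REST-catalog setup and may differ for other
-- catalog types (Glue, Unity, Hive Metastore):
create database lake
engine = DataLakeCatalog('http://rest-catalog:8181/v1', '<key>', '<secret>')
settings catalog_type = 'rest',
         warehouse = 'demo',
         storage_endpoint = 'http://minio:9000/warehouse';

show tables from lake;  -- catalog tables appear inside the attached database
```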
### Workarounds for working with Iceberg and catalogs {#workarounds-iceberg-catalogs}