Commit 02ede43

Fix several links to use native references
1 parent 94b5df3 commit 02ede43

File tree

1 file changed (+8, −8 lines)

docs/integrations/data-ingestion/etl-tools/dbt/features-and-configurations.md

Lines changed: 8 additions & 8 deletions
@@ -126,8 +126,8 @@ without `on cluster` clause for this model.
 #### Read-after-write Consistency {#read-after-write-consistency}
 
 dbt relies on a read-after-insert consistency model. This is not compatible with ClickHouse clusters that have more than one replica if you cannot guarantee that all operations will go to the same replica. You may not encounter problems in your day-to-day usage of dbt, but there are some strategies, depending on your cluster, to put this guarantee in place:
-- If you are using a ClickHouse Cloud cluster, you only need to set `select_sequential_consistency: 1` in your profile's `custom_settings` property. You can find more information about this setting [here](https://clickhouse.com/docs/operations/settings/settings#select_sequential_consistency).
-- If you are using a self-hosted cluster, make sure all dbt requests are sent to the same ClickHouse replica. If you have a load balancer on top of it, try using a `replica aware routing`/`sticky sessions` mechanism so that you always reach the same replica. Adding the setting `select_sequential_consistency = 1` in clusters outside ClickHouse Cloud is [not recommended](https://clickhouse.com/docs/operations/settings/settings#select_sequential_consistency).
+- If you are using a ClickHouse Cloud cluster, you only need to set `select_sequential_consistency: 1` in your profile's `custom_settings` property. You can find more information about this setting [here](/operations/settings/settings#select_sequential_consistency).
+- If you are using a self-hosted cluster, make sure all dbt requests are sent to the same ClickHouse replica. If you have a load balancer on top of it, try using a `replica aware routing`/`sticky sessions` mechanism so that you always reach the same replica. Adding the setting `select_sequential_consistency = 1` in clusters outside ClickHouse Cloud is [not recommended](/operations/settings/settings#select_sequential_consistency).
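
For the Cloud bullet above, the `custom_settings` property lives in the connection profile rather than in a model. A minimal `profiles.yml` sketch (profile name, host, and credentials are placeholders, not part of this commit):

```yaml
# profiles.yml (sketch; profile name, host, and credentials are placeholders)
clickhouse_cloud:
  target: prod
  outputs:
    prod:
      type: clickhouse
      host: <your-service>.clickhouse.cloud
      port: 8443
      secure: true
      user: default
      password: "<password>"
      custom_settings:
        select_sequential_consistency: 1  # read-after-insert consistency on Cloud
```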

 ## General information about features {#general-information-about-features}

@@ -269,7 +269,7 @@ group by event_type
 
 ### Materialization: view {#materialization-view}
 
-A dbt model can be created as a [ClickHouse view](https://clickhouse.com/docs/en/sql-reference/table-functions/view/)
+A dbt model can be created as a [ClickHouse view](/sql-reference/table-functions/view/)
 and configured using the following syntax:
 
 Project File (`dbt_project.yml`):
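
The project-file example itself sits outside this hunk; for context, a minimal sketch of such a config (project and model names are placeholders):

```yaml
# dbt_project.yml (sketch; project and model names are placeholders)
models:
  my_project:
    events_view:
      +materialized: view
```
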
@@ -286,7 +286,7 @@ Or config block (`models/<model_name>.sql`):
 
 ### Materialization: table {#materialization-table}
 
-A dbt model can be created as a [ClickHouse table](https://clickhouse.com/docs/en/operations/system-tables/tables/) and
+A dbt model can be created as a [ClickHouse table](/operations/system-tables/tables/) and
 configured using the following syntax:
 
 Project File (`dbt_project.yml`):
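
Again the example body is elided by the hunk boundary; a sketch that also sets the `engine` and `order_by` model configs the adapter accepts (values here are illustrative choices, not from the commit):

```yaml
# dbt_project.yml (sketch; engine and ordering are illustrative choices)
models:
  my_project:
    events_table:
      +materialized: table
      +engine: MergeTree()
      +order_by: [event_type]
```
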
@@ -500,7 +500,7 @@ If you prefer not to preload historical data during MV creation, you can disable
 
 #### Refreshable Materialized Views {#refreshable-materialized-views}
 
-To use [Refreshable Materialized View](https://clickhouse.com/docs/en/materialized-view/refreshable-materialized-view),
+To use [Refreshable Materialized View](/materialized-view/refreshable-materialized-view),
 please adjust the following configs as needed in your MV model (all of these configs should be set inside a
 `refreshable` config object):
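
The config list itself falls outside the hunk. As a hedged sketch of how such settings land in a model, assuming the `refreshable` object takes an `interval` key (the exact key names come from the config list in the docs, so treat this one as an assumption):

```sql
-- models/events_mv.sql (sketch; model name and the refreshable key are assumptions)
{{ config(
    materialized='materialized_view',
    refreshable={'interval': 'EVERY 1 HOUR'}
) }}
select event_type, count() as cnt
from {{ source('raw', 'events') }}
group by event_type
```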

@@ -711,7 +711,7 @@ keys used to populate the parameters of the S3 table function:
 | structure | The column structure of the data in the bucket, as a list of name/datatype pairs, such as `['id UInt32', 'date DateTime', 'value String']`. If not provided, ClickHouse will infer the structure. |
 | aws_access_key_id | The S3 access key id. |
 | aws_secret_access_key | The S3 secret key. |
-| role_arn | The ARN of a ClickhouseAccess IAM role to use to securely access the S3 objects. See this [documentation](https://clickhouse.com/docs/en/cloud/security/secure-s3) for more information. |
+| role_arn | The ARN of a ClickhouseAccess IAM role to use to securely access the S3 objects. See this [documentation](/cloud/data-sources/secure-s3) for more information. |
 | compression | The compression method used with the S3 objects. If not provided, ClickHouse will attempt to determine compression based on the file name. |
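
These keys map onto the arguments of ClickHouse's `s3` table function; a hand-written equivalent of the call built from them might look like this (the URL and credentials are placeholders):

```sql
-- Sketch of the underlying s3 table function call (all values are placeholders)
select *
from s3(
    'https://my-bucket.s3.amazonaws.com/data/*.csv.gz',  -- bucket path
    'AWS_ACCESS_KEY_ID',                                 -- aws_access_key_id
    'AWS_SECRET_ACCESS_KEY',                             -- aws_secret_access_key
    'CSVWithNames',                                      -- format
    'id UInt32, date DateTime, value String',            -- structure (optional)
    'gzip'                                               -- compression (optional)
)
```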

 See
@@ -738,9 +738,9 @@ dbt Core v1.10 introduced catalog integration support, which allows adapters to
 
 ClickHouse recently added native support for Apache Iceberg tables and data catalogs. Most of these features are still `experimental`, but you can already use them on a recent ClickHouse version.
 
-* You can use ClickHouse to **query Iceberg tables stored in object storage** (S3, Azure Blob Storage, Google Cloud Storage) using the [Iceberg table engine](https://clickhouse.com/docs/en/engines/table-engines/integrations/iceberg) and [iceberg table function](https://clickhouse.com/docs/en/sql-reference/table-functions/iceberg).
+* You can use ClickHouse to **query Iceberg tables stored in object storage** (S3, Azure Blob Storage, Google Cloud Storage) using the [Iceberg table engine](/engines/table-engines/integrations/iceberg) and [iceberg table function](/sql-reference/table-functions/iceberg).
 
-* Additionally, ClickHouse provides the [DataLakeCatalog database engine](https://clickhouse.com/docs/engines/database-engines/datalakecatalog), which enables **connection to external data catalogs** including AWS Glue Catalog, Databricks Unity Catalog, Hive Metastore, and REST Catalogs. This allows you to query open table format data (Iceberg, Delta Lake) directly from external catalogs without data duplication.
+* Additionally, ClickHouse provides the [DataLakeCatalog database engine](/engines/database-engines/datalakecatalog), which enables **connection to external data catalogs** including AWS Glue Catalog, Databricks Unity Catalog, Hive Metastore, and REST Catalogs. This allows you to query open table format data (Iceberg, Delta Lake) directly from external catalogs without data duplication.
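
To make the first bullet concrete, a minimal read through the `iceberg` table function might look like this (the path and credentials are placeholders, and your ClickHouse version may require enabling experimental settings first):

```sql
-- Sketch: querying an Iceberg table in S3 (placeholders throughout)
select count(*)
from iceberg(
    'https://my-bucket.s3.amazonaws.com/warehouse/events/',
    'AWS_ACCESS_KEY_ID',
    'AWS_SECRET_ACCESS_KEY'
)
```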

 ### Workarounds for Working with Iceberg and Catalogs {#workarounds-iceberg-catalogs}
