
Commit 8a1bb27

Merge pull request #3394 from Blargian/fix_broken_anchors_1
Fix broken anchors
2 parents 59a6369 + ad0c003


48 files changed (+119, -122 lines)

docs/_snippets/_system_table_cloud.md (1 addition, 1 deletion)

@@ -1,3 +1,3 @@
 :::note Querying in ClickHouse Cloud
-The data in this system table is held locally on each node in ClickHouse Cloud. Obtaining a complete view of all data, therefore, requires the `clusterAllReplicas` function. See [here](/operations/system-tables#system-tables-in-clickhouse-cloud) for further details.
+The data in this system table is held locally on each node in ClickHouse Cloud. Obtaining a complete view of all data, therefore, requires the `clusterAllReplicas` function. See [here](/operations/system-tables/overview#system-tables-in-clickhouse-cloud) for further details.
 :::

docs/chdb/guides/querying-parquet.md (1 addition, 1 deletion)

@@ -48,7 +48,7 @@ But first, let's install `chDB`:
 import chdb
 ```
 
-When querying Parquet files, we can use the [`ParquetMetadata`](/interfaces/formats#parquetmetadata-data-format-parquet-metadata) input format to have it return Parquet metadata rather than the content of the file.
+When querying Parquet files, we can use the [`ParquetMetadata`](/interfaces/formats/ParquetMetadata) input format to have it return Parquet metadata rather than the content of the file.
 Let's use the `DESCRIBE` clause to see the fields returned when we use this format:
 
 ```python

docs/cloud/manage/cloud-tiers.md (1 addition, 1 deletion)

@@ -186,7 +186,7 @@ Caters to large-scale, mission critical deployments that have stringent security
 - Single Sign On (SSO)
 - Enhanced Encryption: For AWS and GCP services. Services are encrypted by our key by default and can be rotated to their key to enable Customer Managed Encryption Keys (CMEK).
 - Allows Scheduled upgrades: Users can select the day of the week/time window for upgrades, both database and cloud releases.
-- Offers [HIPAA](../security/compliance-overview.md/#hipaa) Compliance.
+- Offers [HIPAA](../security/compliance-overview.md/#hipaa-since-2024) Compliance.
 - Exports Backups to the user's account.
 
 :::note

docs/cloud/manage/openapi.md (1 addition, 1 deletion)

@@ -29,7 +29,7 @@ This document covers the ClickHouse Cloud API. For database API endpoints, pleas
 3. To create an API key, specify the key name, permissions for the key, and expiration time, then click `Generate API Key`.
 <br/>
 :::note
-Permissions align with ClickHouse Cloud [predefined roles](/cloud/security/cloud-access-management#predefined-roles). The developer role has read-only permissions and the admin role has full read and write permissions.
+Permissions align with ClickHouse Cloud [predefined roles](/cloud/security/cloud-access-management/overview#predefined-roles). The developer role has read-only permissions and the admin role has full read and write permissions.
 :::
 
 <img src={image_03} width="100%"/>

docs/cloud/reference/changelog.md (2 additions, 2 deletions)

@@ -110,7 +110,7 @@ We are introducing a new vertical scaling mechanism for compute replicas, which
 
 ### Horizontal scaling (GA) {#horizontal-scaling-ga}
 
-Horizontal scaling is now Generally Available. Users can add additional replicas to scale out their service through the APIs and the cloud console. Please refer to the [documentation](/manage/scaling#self-serve-horizontal-scaling) for information.
+Horizontal scaling is now Generally Available. Users can add additional replicas to scale out their service through the APIs and the cloud console. Please refer to the [documentation](/manage/scaling#manual-horizontal-scaling) for information.
 
 ### Configurable backups {#configurable-backups}
 
@@ -446,7 +446,7 @@ The Fast release channel allows your services to receive updates ahead of the re
 
 ### Terraform support for horizontal scaling {#terraform-support-for-horizontal-scaling}
 
-ClickHouse Cloud supports [horizontal scaling](/manage/scaling#vertical-and-horizontal-scaling), or the ability to add additional replicas of the same size to your services. Horizontal scaling improves performance and parallelization to support concurrent queries. Previously, adding more replicas required either using the ClickHouse Cloud console or the API. You can now use Terraform to add or remove replicas from your service, allowing you to programmatically scale your ClickHouse services as needed.
+ClickHouse Cloud supports [horizontal scaling](/manage/scaling#how-scaling-works-in-clickhouse-cloud), or the ability to add additional replicas of the same size to your services. Horizontal scaling improves performance and parallelization to support concurrent queries. Previously, adding more replicas required either using the ClickHouse Cloud console or the API. You can now use Terraform to add or remove replicas from your service, allowing you to programmatically scale your ClickHouse services as needed.
 
 Please see the [ClickHouse Terraform provider](https://registry.terraform.io/providers/ClickHouse/clickhouse/latest/docs) for more information.

docs/cloud/security/shared-responsibility-model.md (4 additions, 5 deletions)

@@ -8,7 +8,6 @@ title: Security Shared Responsibility Model
 
 ClickHouse Cloud offers three service types: Basic, Scale and Enterprise. For more information, review our [Service Types](/cloud/manage/cloud-tiers) page.
 
-
 ## Cloud architecture {#cloud-architecture}
 
 The Cloud architecture consists of the control plane and the data plane. The control plane is responsible for organization creation, user management within the control plane, service management, API key management, and billing. The data plane runs tooling for orchestration and management, and houses customer services. For more information, review our [ClickHouse Cloud Architecture](/cloud/reference/architecture) diagram.
@@ -58,9 +57,9 @@ The model below generally addresses ClickHouse responsibilities and shows respon
 | Setting | Status | Cloud | Service level |
 |------------------------------------------------------------------------------------------------------|-----------|-------------------|-------------------------|
 | [Standard role-based access](/cloud/security/cloud-access-management) in control plane | Available | AWS, GCP, Azure | All |
-| [Multi-factor authentication (MFA)](/cloud/security/cloud-authentication#multi-factor-authhentication) available | Available | AWS, GCP, Azure | All |
+| [Multi-factor authentication (MFA)](/cloud/security/cloud-authentication#multi-factor-authentication) available | Available | AWS, GCP, Azure | All |
 | [SAML Single Sign-On](/cloud/security/saml-setup) to control plane available | Preview | AWS, GCP, Azure | Enterprise |
-| Granular [role-based access control](/cloud/security/cloud-access-management#database-roles) in databases | Available | AWS, GCP, Azure | All |
+| Granular [role-based access control](/cloud/security/cloud-access-management/overview#database-roles) in databases | Available | AWS, GCP, Azure | All |
 
 </details>
 <details>
@@ -69,8 +68,8 @@ The model below generally addresses ClickHouse responsibilities and shows respon
 | Setting | Status | Cloud | Service level |
 |------------------------------------------------------------------------------------------------------|-----------|-------------------|-------------------------|
 | [Cloud provider and region](/cloud/reference/supported-regions) selections | Available | AWS, GCP, Azure | All |
-| Limited [free daily backups](/cloud/manage/backups#default-backup-policy) | Available | AWS, GCP, Azure | All |
-| [Custom backup configurations](/cloud/manage/backups#configurable-backups) available | Available | GCP, AWS, Azure | Scale or Enterprise |
+| Limited [free daily backups](/cloud/manage/backups/overview#default-backup-policy) | Available | AWS, GCP, Azure | All |
+| [Custom backup configurations](/cloud/manage/backups/overview#configurable-backups) available | Available | GCP, AWS, Azure | Scale or Enterprise |
 | [Customer managed encryption keys (CMEK)](/cloud/security/cmek) for transparent<br/> data encryption available | Available | AWS | Scale or Enterprise |
 | [Field level encryption](/sql-reference/functions/encryption-functions) with manual key management for granular encryption | Available | GCP, AWS, Azure | All |

docs/data-compression/compression-modes.md (1 addition, 1 deletion)

@@ -14,7 +14,7 @@ ClickHouse protocol supports **data blocks** compression with checksums.
 Use `LZ4` if not sure what mode to pick.
 
 :::tip
-Learn more about the [column compression codecs](/sql-reference/statements/create/table.md/#column-compression-codecs) available and specify them when creating your tables, or afterward.
+Learn more about the [column compression codecs](/sql-reference/statements/create/table#column_compression_codec) available and specify them when creating your tables, or afterward.
 :::
 
 ## Modes {#modes}

docs/faq/operations/delete-old-data.md (2 additions, 2 deletions)

@@ -39,13 +39,13 @@ ALTER DELETE removes rows using asynchronous batch operations. Unlike DELETE FRO
 
 This is the most common approach to make your system based on ClickHouse [GDPR](https://gdpr-info.eu)-compliant.
 
-More details on [mutations](../../sql-reference/statements/alter/index.md#alter-mutations).
+More details on [mutations](/sql-reference/statements/alter#mutations).
 
 ## DROP PARTITION {#drop-partition}
 
 `ALTER TABLE ... DROP PARTITION` provides a cost-efficient way to drop a whole partition. It’s not that flexible and needs proper partitioning scheme configured on table creation, but still covers most common cases. Like mutations need to be executed from an external system for regular use.
 
-More details on [manipulating partitions](../../sql-reference/statements/alter/partition.md#alter_drop-partition).
+More details on [manipulating partitions](/sql-reference/statements/alter/partition).
 
 ## TRUNCATE {#truncate}
 
docs/faq/use-cases/time-series.md (1 addition, 1 deletion)

@@ -13,7 +13,7 @@ ClickHouse is a generic data storage solution for [OLAP](../../faq/general/olap.
 
 First of all, there are **[specialized codecs](../../sql-reference/statements/create/table.md#specialized-codecs)** which make typical time-series. Either common algorithms like `DoubleDelta` and `Gorilla` or specific to ClickHouse like `T64`.
 
-Second, time-series queries often hit only recent data, like one day or one week old. It makes sense to use servers that have both fast NVMe/SSD drives and high-capacity HDD drives. ClickHouse [TTL](/engines/table-engines/mergetree-family/mergetree.md/##table_engine-mergetree-multiple-volumes) feature allows to configure keeping fresh hot data on fast drives and gradually move it to slower drives as it ages. Rollup or removal of even older data is also possible if your requirements demand it.
+Second, time-series queries often hit only recent data, like one day or one week old. It makes sense to use servers that have both fast NVMe/SSD drives and high-capacity HDD drives. ClickHouse [TTL](/engines/table-engines/mergetree-family/mergetree#table_engine-mergetree-ttl) feature allows to configure keeping fresh hot data on fast drives and gradually move it to slower drives as it ages. Rollup or removal of even older data is also possible if your requirements demand it.
 
 Even though it’s against ClickHouse philosophy of storing and processing raw data, you can use [materialized views](../../sql-reference/statements/create/view.md) to fit into even tighter latency or costs requirements.
