
Commit 4be4831: fix more headings
1 parent: 040e94f

File tree: 11 files changed, +55 -15 lines

docs/guides/best-practices/skipping-indexes.md

Lines changed: 4 additions & 0 deletions
````diff
@@ -115,15 +115,19 @@ example, the debug log shows that the skip index dropped all but two granules:
 ```
 ## Skip index types {#skip-index-types}
 
+<!-- vale off -->
 ### minmax {#minmax}
+<!-- vale on -->
 
 This lightweight index type requires no parameters. It stores the minimum and maximum values of the index expression
 for each block (if the expression is a tuple, it separately stores the values for each member of the element
 of the tuple). This type is ideal for columns that tend to be loosely sorted by value. This index type is usually the least expensive to apply during query processing.
 
 This type of index only works correctly with a scalar or tuple expression -- the index will never be applied to expressions that return an array or map data type.
 
+<!-- vale off -->
 ### set {#set}
+<!-- vale on -->
 
 This lightweight index type accepts a single parameter of the max_size of the value set per block (0 permits
 an unlimited number of discrete values). This set contains all values in the block (or is empty if the number of values exceeds the max_size). This index type works well with columns with low cardinality within each set of granules (essentially, "clumped together") but higher cardinality overall.
````
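The minmax and set index types described in the hunk above can be sketched with a hypothetical table definition (the table, column, and index names here are invented for illustration, and the granularity values are arbitrary):

```sql
CREATE TABLE visits
(
    ts DateTime,
    user_agent String,
    -- minmax: stores min/max of ts per block; cheap, best on loosely sorted columns
    INDEX ts_minmax ts TYPE minmax GRANULARITY 4,
    -- set(100): stores up to 100 distinct values per block; suits locally "clumped" values
    INDEX ua_set user_agent TYPE set(100) GRANULARITY 4
)
ENGINE = MergeTree
ORDER BY ts;
```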

docs/guides/sre/configuring-ssl.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -10,7 +10,7 @@ import SelfManaged from '@site/docs/_snippets/_self_managed_only_automated.md';
 import configuringSsl01 from '@site/static/images/guides/sre/configuring-ssl_01.png';
 import Image from '@theme/IdealImage';
 
-# Configuring SSL-TLS
+# Configuring ssl-tls
 
 <SelfManaged />
 
````

docs/guides/sre/keeper/index.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -456,7 +456,7 @@ The following features are available:
 - `create_if_not_exists` - support for `CreateIfNotExists` request, which will try to create a node if it doesn't exist. If it exists, no changes are applied and `ZOK` is returned. Default: `0`
 - `remove_recursive` - support for `RemoveRecursive` request, which removes the node along with its subtree. Default: `0`
 
-### Migration from ZooKeeper {#migration-from-zookeeper}
+### Migration from zookeeper {#migration-from-zookeeper}
 
 Seamless migration from ZooKeeper to ClickHouse Keeper is not possible. You have to stop your ZooKeeper cluster, convert data, and start ClickHouse Keeper. `clickhouse-keeper-converter` tool allows converting ZooKeeper logs and snapshots to ClickHouse Keeper snapshot. It works only with ZooKeeper > 3.4. Steps for migration:
 
````

docs/guides/sre/user-management/index.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -201,7 +201,7 @@ This article shows the basics of defining SQL users and roles and applying those
 GRANT ALL ON *.* TO clickhouse_admin WITH GRANT OPTION;
 ```
 
-## ALTER permissions {#alter-permissions}
+## Alter permissions {#alter-permissions}
 
 This article is intended to provide you with a better understanding of how to define permissions, and how permissions work when using `ALTER` statements for privileged users.
 
````
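As a hedged illustration of the scoped `ALTER` grants this section discusses (the user and database names are hypothetical, not from the guide):

```sql
-- Grant only the ALTER sub-privileges a user needs, rather than ALL:
GRANT ALTER UPDATE, ALTER DELETE ON default.* TO some_user;
```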

docs/managing-data/core-concepts/partitions.md

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -18,7 +18,7 @@ import Image from '@theme/IdealImage';
1818

1919
Partitions group the [data parts](/parts) of a table in the [MergeTree engine family](/engines/table-engines/mergetree-family) into organized, logical units, which is a way of organizing data that is conceptually meaningful and aligned with specific criteria, such as time ranges, categories, or other key attributes. These logical units make data easier to manage, query, and optimize.
2020

21-
### Partition By {#partition-by}
21+
### PARTITION BY {#partition-by}
2222

2323
Partitioning can be enabled when a table is initially defined via the [PARTITION BY clause](/engines/table-engines/mergetree-family/custom-partitioning-key). This clause can contain a SQL expression on any columns, the results of which will define which partition a row belongs to.
2424
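A minimal sketch of the `PARTITION BY` clause this section refers to (the schema is invented for illustration):

```sql
CREATE TABLE events
(
    event_time DateTime,
    payload String
)
ENGINE = MergeTree
-- each calendar month becomes its own partition
PARTITION BY toYYYYMM(event_time)
ORDER BY event_time;
```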

docs/managing-data/deleting-data/overview.md

Lines changed: 4 additions & 4 deletions
````diff
@@ -16,7 +16,7 @@ There are several ways to delete data in ClickHouse, each with its own advantage
 
 Here is a summary of the different ways to delete data in ClickHouse:
 
-## Lightweight Deletes {#lightweight-deletes}
+## Lightweight deletes {#lightweight-deletes}
 
 Lightweight deletes cause rows to be immediately marked as deleted such that they can be automatically filtered out of all subsequent `SELECT` queries. Subsequent removal of these deleted rows occurs during natural merge cycles and thus incurs less I/O. As a result, it is possible that for an unspecified period, data is not actually deleted from storage and is only marked as deleted. If you need to guarantee that data is deleted, consider the above mutation command.
 
@@ -33,7 +33,7 @@ In general, lightweight deletes should be preferred over mutations if the existe
 
 Read more about [lightweight deletes](/guides/developer/lightweight-delete).
 
-## Delete Mutations {#delete-mutations}
+## Delete mutations {#delete-mutations}
 
 Delete mutations can be issued through a `ALTER TABLE ... DELETE` command e.g.
 
@@ -46,7 +46,7 @@ These can be executed either synchronously (by default if non-replicated) or asy
 
 Read more about [delete mutations](/sql-reference/statements/alter/delete).
 
-## Truncate Table {#truncate-table}
+## Truncate table {#truncate-table}
 
 If all data in a table needs to be deleted, use the `TRUNCATE TABLE` command shown below. This is a lightweight operation.
 
@@ -56,7 +56,7 @@ TRUNCATE TABLE posts
 
 Read more about [TRUNCATE TABLE](/sql-reference/statements/truncate).
 
-## Drop Partition {#drop-partition}
+## Drop partition {#drop-partition}
 
 If you have specified a custom partitioning key for your data, partitions can be efficiently dropped. Avoid high cardinality partitioning.
 
````
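The four approaches contrasted in this file can be sketched side by side (the table name, predicates, and partition value below are hypothetical, not taken from the guide):

```sql
-- Lightweight delete: rows are marked deleted and filtered from SELECTs immediately
DELETE FROM posts WHERE Id = 404;

-- Delete mutation: rewrites the affected parts, so it incurs heavier I/O
ALTER TABLE posts DELETE WHERE CreationDate < '2010-01-01';

-- Truncate: removes all rows; a lightweight operation
TRUNCATE TABLE posts;

-- Drop partition: efficient when a custom partitioning key was defined
ALTER TABLE posts DROP PARTITION '2008';
```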

docs/migrations/bigquery/migrating-to-clickhouse-cloud.md

Lines changed: 2 additions & 2 deletions
````diff
@@ -66,7 +66,7 @@ Before trying the following examples, we recommend users review the [permissions
 
 Change Data Capture (CDC) is the process by which tables are kept in sync between two databases. This is significantly more complex if updates and deletes are to be handled in near real-time. One approach is to simply schedule a periodic export using BigQuery's [scheduled query functionality](https://cloud.google.com/bigquery/docs/scheduling-queries). Provided you can accept some delay in the data being inserted into ClickHouse, this approach is easy to implement and maintain. An example is given in [this blog post](https://clickhouse.com/blog/clickhouse-bigquery-migrating-data-for-realtime-queries#using-scheduled-queries).
 
-## Designing Schemas {#designing-schemas}
+## Designing schemas {#designing-schemas}
 
 The Stack Overflow dataset contains a number of related tables. We recommend focusing on migrating the primary table first. This may not necessarily be the largest table but rather the one on which you expect to receive the most analytical queries. This will allow you to familiarize yourself with the main ClickHouse concepts. This table may require remodeling as additional tables are added to fully exploit ClickHouse features and obtain optimal performance. We explore this modeling process in our [Data Modeling docs](/data-modeling/schema-design#next-data-modeling-techniques).
 
@@ -499,7 +499,7 @@ MaxViewCount: 66975
 Peak memory usage: 377.26 MiB.
 ```
 
-## Conditionals and Arrays {#conditionals-and-arrays}
+## Conditionals and arrays {#conditionals-and-arrays}
 
 Conditional and array functions make queries significantly simpler. The following query computes the tags (with more than 10000 occurrences) with the largest percentage increase from 2022 to 2023. Note how the following ClickHouse query is succinct thanks to conditionals, array functions, and the ability to reuse aliases in the `HAVING` and `SELECT` clauses.
 
````
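A small, hypothetical example of the conditional-aggregation style that last paragraph mentions (this is not the actual query from the migration guide):

```sql
SELECT
    countIf(Score > 0)  AS positive,
    countIf(Score <= 0) AS non_positive
FROM posts;
```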

docs/migrations/postgres/overview.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -23,7 +23,7 @@ When migrating from PostgreSQL to ClickHouse, the right strategy depends on your
 
 Below section describes the two main strategies for migration: **Real-Time CDC** and **Manual Bulk Load + Periodic Updates**.
 
-### Real-Time replication (CDC) {#real-time-replication-cdc}
+### Real-time replication (CDC) {#real-time-replication-cdc}
 
 Change Data Capture (CDC) is the process by which tables are kept in sync between two databases. It is the most efficient approach for most migration from PostgreSQL, but yet more complex as it handles insert, updates and deletes from PostgreSQL to ClickHouse in near real-time. It is ideal for use cases where real-time analytics are important.
 
````

docs/native-protocol/columns.md

Lines changed: 2 additions & 2 deletions
````diff
@@ -5,7 +5,7 @@ title: 'Column types'
 description: 'Column types for the native protocol'
 ---
 
-# Column Types
+# Column types
 
 See [Data Types](/sql-reference/data-types/) for general reference.
 
@@ -77,7 +77,7 @@ Alias of `FixedString(16)`, UUID value represented as binary.
 
 Alias of `Int8` or `Int16`, but each integer is mapped to some `String` value.
 
-## Low Cardinality {#low-cardinality}
+## LowCardinality {#low-cardinality}
 
 `LowCardinality(T)` consists of `Index T, Keys K`,
 where `K` is one of (UInt8, UInt16, UInt32, UInt64) depending on size of `Index`.
````
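The `LowCardinality(T)` encoding described above is applied at table definition time; a brief sketch with invented column names:

```sql
CREATE TABLE requests
(
    url String,
    -- dictionary-encoded: distinct values stored once in an Index,
    -- each row stores only a small integer Key into it
    status LowCardinality(String)
)
ENGINE = MergeTree
ORDER BY url;
```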

docs/native-protocol/hash.md

Lines changed: 3 additions & 2 deletions
````diff
@@ -7,12 +7,13 @@ description: 'Native protocol hash'
 
 # CityHash
 
-ClickHouse uses **one of previous** versions of [CityHash from Google](https://github.com/google/cityhash).
+ClickHouse uses **one of the previous** versions of [CityHash from Google](https://github.com/google/cityhash).
 
 :::info
 CityHash has changed the algorithm after we have added it into ClickHouse.
 
-CityHash documentation specifically notes that the user should not rely to specific hash values and should not save it anywhere or use it as sharding key.
+CityHash documentation specifically notes that the user should not rely on
+specific hash values and should not save it anywhere or use it as a sharding key.
 
 But as we exposed this function to the user, we had to fix the version of CityHash (to 1.0.2). And now we guarantee that the behaviour of CityHash functions available in SQL will not change.
 
````

0 commit comments