
Commit 62db487

fix more headers
1 parent db819c6 · commit 62db487

20 files changed: +195 −37 lines

contribute/style-guide.md

Lines changed: 19 additions & 0 deletions
````diff
@@ -393,3 +393,22 @@ show_related_blogs: true
 
 This will show it on the page, assuming there is a matching blog. If there is no
 match then it remains hidden.
+
+## Vale
+
+Vale is a command-line tool that brings code-like linting to prose.
+We have a number of rules set up to ensure that our documentation is
+consistent in style.
+
+The style rules are located at `/styles/ClickHouse`, and are largely based
+on the Google styleset, with some ClickHouse-specific adaptations.
+If you want to check only a specific rule locally, you
+can run:
+
+```bash
+vale --filter='.Name == "ClickHouse.Headings"' docs/integrations
+```
+
+This will run only the rule named `Headings` on
+the `docs/integrations` directory. Specifying a specific markdown
+file is also possible.
````
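A few other invocations are handy when working with these rules locally. This is a minimal sketch assuming Vale is installed and run from the repository root; the single-file path and the `.Level` filter value are illustrative, not taken from this commit:

```bash
# Run the full styleset over the whole docs tree
vale docs/

# Lint a single markdown file
vale docs/integrations/data-ingestion/gcs/index.md

# Only report findings at the "error" severity level
vale --filter='.Level == "error"' docs/
```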

docs/integrations/data-ingestion/gcs/index.md

Lines changed: 3 additions & 3 deletions
````diff
@@ -25,7 +25,7 @@ ClickHouse recognizes that GCS represents an attractive storage solution for use
 
 To utilize a GCS bucket as a disk, we must first declare it within the ClickHouse configuration in a file under `conf.d`. An example of a GCS disk declaration is shown below. This configuration includes multiple sections to configure the GCS "disk", the cache, and the policy that is specified in DDL queries when tables are to be created on the GCS disk. Each of these is described below.
 
-#### storage_configuration > disks > gcs {#storage_configuration--disks--gcs}
+#### Storage configuration > disks > gcs {#storage_configuration--disks--gcs}
 
 This part of the configuration is shown in the highlighted section and specifies that:
 - Batch deletes are not to be performed. GCS does not currently support batch deletes, so the autodetect is disabled to suppress error messages.
@@ -61,7 +61,7 @@ This part of the configuration is shown in the highlighted section and specifies
 </storage_configuration>
 </clickhouse>
 ```
-#### storage_configuration > disks > cache {#storage_configuration--disks--cache}
+#### Storage configuration > disks > cache {#storage_configuration--disks--cache}
 
 The example configuration highlighted below enables a 10Gi memory cache for the disk `gcs`.
 
@@ -98,7 +98,7 @@ The example configuration highlighted below enables a 10Gi memory cache for the
 </storage_configuration>
 </clickhouse>
 ```
-#### storage_configuration > policies > gcs_main {#storage_configuration--policies--gcs_main}
+#### Storage configuration > policies > gcs_main {#storage_configuration--policies--gcs_main}
 
 Storage configuration policies allow choosing where data is stored. The policy highlighted below allows data to be stored on the disk `gcs` by specifying the policy `gcs_main`. For example, `CREATE TABLE ... SETTINGS storage_policy='gcs_main'`.
 
````
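For orientation, the three sections being retitled above fit together roughly as follows. This is a sketch only: the bucket URL, HMAC credentials, paths, table definition, and the file name `gcs.xml` are placeholder assumptions, not values from this commit.

```bash
# Hypothetical conf.d file declaring the GCS "disk", its cache, and the policy
cat > /etc/clickhouse-server/config.d/gcs.xml <<'EOF'
<clickhouse>
    <storage_configuration>
        <disks>
            <gcs>
                <!-- GCS is addressed through the S3-compatible API;
                     batch deletes are disabled, as the doc text notes -->
                <support_batch_delete>false</support_batch_delete>
                <type>s3</type>
                <endpoint>https://storage.googleapis.com/my-bucket/data/</endpoint>
                <access_key_id>HMAC_KEY</access_key_id>
                <secret_access_key>HMAC_SECRET</secret_access_key>
            </gcs>
            <gcs_cache>
                <!-- 10Gi cache over the gcs disk -->
                <type>cache</type>
                <disk>gcs</disk>
                <path>/var/lib/clickhouse/disks/gcs_cache/</path>
                <max_size>10Gi</max_size>
            </gcs_cache>
        </disks>
        <policies>
            <gcs_main>
                <volumes>
                    <main>
                        <disk>gcs</disk>
                    </main>
                </volumes>
            </gcs_main>
        </policies>
    </storage_configuration>
</clickhouse>
EOF

# The policy is then referenced from DDL, per the quoted context
clickhouse-client --query "
    CREATE TABLE trips (trip_id UInt32, pickup_date Date)
    ENGINE = MergeTree
    ORDER BY trip_id
    SETTINGS storage_policy = 'gcs_main'"
```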

docs/integrations/data-ingestion/google-dataflow/templates.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -8,7 +8,7 @@ title: 'Google Dataflow Templates'
 
 import ClickHouseSupportedBadge from '@theme/badges/ClickHouseSupported';
 
-# Google Dataflow Templates
+# Google Dataflow templates
 
 <ClickHouseSupportedBadge/>
 
````

docs/integrations/data-ingestion/kafka/kafka-table-engine.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -484,7 +484,7 @@ Consider the following when looking to increase Kafka Engine table throughput pe
 
 Any settings changes should be tested. We recommend monitoring Kafka consumer lags to ensure you are properly scaled.
 
-#### Additional Settings {#additional-settings}
+#### Additional settings {#additional-settings}
 
 Aside from the settings discussed above, the following may be of interest:
 
````
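Since the context above recommends monitoring consumer lag, one way to check it from the shell is Kafka's own tooling. A sketch assuming a stock Kafka installation; the broker address and the group name (whatever `kafka_group_name` is set to on the engine table) are placeholders:

```bash
# Describe the consumer group the Kafka engine table consumes with;
# the LAG column shows how far ClickHouse is behind each partition.
kafka-consumer-groups.sh \
  --bootstrap-server localhost:9092 \
  --describe \
  --group clickhouse
```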

docs/integrations/data-ingestion/s3/performance.md

Lines changed: 3 additions & 3 deletions
````diff
@@ -187,7 +187,7 @@ In our example we only return a few rows. If measuring the performance of `SELEC
 When reading from queries, the initial query can often appear slower than if the same query is repeated. This can be attributed both to S3's own caching and to the [ClickHouse Schema Inference Cache](/operations/system-tables/schema_inference_cache). This stores the inferred schema for files and means the inference step can be skipped on subsequent accesses, thus reducing query time.
 :::
 
-## Using Threads for Reads {#using-threads-for-reads}
+## Using threads for reads {#using-threads-for-reads}
 
 Read performance on S3 will scale linearly with the number of cores, provided you are not limited by network bandwidth or local I/O. Increasing the number of threads also has memory overhead implications that users should be aware of. The following can be modified to potentially improve read throughput performance:
 
````
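The `SETTINGS max_threads = 64` visible in the next hunk's context is the main lever here. As a sketch, with a placeholder bucket and path:

```bash
# Read from S3 with an explicit thread count; raising max_threads toward
# the number of cores typically raises read throughput (at a memory cost).
clickhouse-client --query "
    SELECT count()
    FROM s3('https://my-bucket.s3.amazonaws.com/data/*.parquet')
    SETTINGS max_threads = 64"
```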

````diff
@@ -233,7 +233,7 @@ SETTINGS max_threads = 64
 Peak memory usage: 639.99 MiB.
 ```
 
-## Tuning Threads and Block Size for Inserts {#tuning-threads-and-block-size-for-inserts}
+## Tuning threads and block size for inserts {#tuning-threads-and-block-size-for-inserts}
 
 To achieve maximum ingestion performance, you must choose (1) an insert block size and (2) an appropriate level of insert parallelism based on (3) the number of available CPU cores and the amount of RAM. In summary:
 
````
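The two knobs named in that paragraph map to ClickHouse settings such as `max_insert_threads` and `min_insert_block_size_rows`. A sketch with illustrative values and a placeholder source bucket (these are not the numbers the doc arrives at):

```bash
# Ingest from S3 with explicit parallelism and block-size settings.
clickhouse-client --query "
    INSERT INTO posts
    SELECT *
    FROM s3('https://my-bucket.s3.amazonaws.com/data/*.parquet')
    SETTINGS
        max_insert_threads = 4,
        min_insert_block_size_rows = 10000000"
```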

````diff
@@ -270,7 +270,7 @@ FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow
 
 As shown, tuning these settings has improved insert performance by over `33%`. We leave it to the reader to see if they can improve single-node performance further.
 
-## Scaling with Resources and Nodes {#scaling-with-resources-and-nodes}
+## Scaling with resources and nodes {#scaling-with-resources-and-nodes}
 
 Scaling with resources and nodes applies to both read and insert queries.
 
````
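On the node-scaling side, reads can be fanned out over a cluster with the `s3Cluster` table function. A sketch assuming a cluster named `default` and a placeholder bucket:

```bash
# Glob expansion is coordinated by the initiator; file reads are
# distributed across every node in the named cluster.
clickhouse-client --query "
    SELECT count()
    FROM s3Cluster('default', 'https://my-bucket.s3.amazonaws.com/data/*.parquet')"
```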

docs/integrations/data-visualization/hashboard-and-clickhouse.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -49,6 +49,6 @@ This guide will walk you through the steps to connect Hashboard with your ClickH
 
 Your ClickHouse database is now connected to Hashboard and you can proceed by building [Data Models](https://docs.hashboard.com/docs/data-modeling/add-data-model), [Explorations](https://docs.hashboard.com/docs/visualizing-data/explorations), [Metrics](https://docs.hashboard.com/docs/metrics), and [Dashboards](https://docs.hashboard.com/docs/dashboards). See the corresponding Hashboard documentation for more detail on these features.
 
-## Learn More {#learn-more}
+## Learn more {#learn-more}
 
 For more advanced features and troubleshooting, visit [Hashboard's documentation](https://docs.hashboard.com/).
````

docs/integrations/data-visualization/rocketbi-and-clickhouse.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -137,7 +137,7 @@ Save & Add the Chart to Dashboard
 
 <Image size="md" img={rocketbi_14} alt="Dashboard view showing the newly added pie chart with other controls" border />
 
-#### Use Date Control in a Time-series Chart {#use-date-control-in-a-time-series-chart}
+#### Use date control in a time-series chart {#use-date-control-in-a-time-series-chart}
 Let's use a Stacked Column Chart
 
 <Image size="md" img={rocketbi_15} alt="Stacked column chart creation interface with time-series data" border />
````

docs/integrations/data-visualization/tableau/tableau-online-and-clickhouse.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -56,7 +56,7 @@ After that, all that remains is to click "Publish As" in the top right corner, a
 
 NB: if you want to use Tableau Online in combination with Tableau Desktop and share ClickHouse datasets between them, make sure you use Tableau Desktop with the default MySQL connector as well, following the setup guide that is displayed [here](https://www.tableau.com/support/drivers) if you select MySQL from the Data Source drop-down. If you have an M1 Mac, check [this troubleshooting thread](https://community.tableau.com/s/question/0D58b0000Ar6OhvCQE/unable-to-install-mysql-driver-for-m1-mac) for a driver installation workaround.
 
-## Connecting Tableau Online to ClickHouse (Cloud or on-premise setup with SSL) {#connecting-tableau-online-to-clickhouse-cloud-or-on-premise-setup-with-ssl}
+## Connecting Tableau Online to ClickHouse (cloud or on-premise setup with SSL) {#connecting-tableau-online-to-clickhouse-cloud-or-on-premise-setup-with-ssl}
 
 As it is not possible to provide the SSL certificates via the Tableau Online MySQL connection setup wizard,
 the only way is to use Tableau Desktop to set the connection up, and then export it to Tableau Online. This process is, however, pretty straightforward.
````

docs/integrations/language-clients/python/index.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -415,7 +415,7 @@ efficiently. This method takes the following parameters.
 | external_data | ExternalData | *None* | An ExternalData object containing file or binary data to use with the query. See [Advanced Queries (External Data)](#external-data) |
 | context | QueryContext | *None* | A reusable QueryContext object can be used to encapsulate the above method arguments. See [Advanced Queries (QueryContexts)](#querycontexts) |
 
-#### The QueryResult Object {#the-queryresult-object}
+#### The QueryResult object {#the-queryresult-object}
 
 The base `query` method returns a QueryResult object with the following public properties:
 
````

docs/integrations/migration/clickhouse-to-cloud.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -198,7 +198,7 @@ Modify the allow list and allow access from **Anywhere** temporarily. See the [I
 
 - Verify the data in the destination service
 
-#### Re-establish the IP Access List on the source {#re-establish-the-ip-access-list-on-the-source}
+#### Re-establish the IP access list on the source {#re-establish-the-ip-access-list-on-the-source}
 
 If you exported the access list earlier, then you can re-import it using **Share**, otherwise re-add your entries to the access list.
 
````
