
Commit 45f4a2d

Fix header casing
1 parent 4be4831 commit 45f4a2d

84 files changed (+358 −358 lines)


docs/integrations/data-ingestion/apache-spark/spark-jdbc.md

Lines changed: 1 addition & 1 deletion
@@ -347,7 +347,7 @@ reading in parallel from multiple workers.
 Please visit Apache Spark's official documentation for more information
 on [JDBC configurations](https://spark.apache.org/docs/latest/sql-data-sources-jdbc.html#data-source-option).
 
-## JDBC Limitations {#jdbc-limitations}
+## JDBC limitations {#jdbc-limitations}
 
 * As of today, you can insert data using JDBC only into existing tables (currently there is no way to auto create the
   table on DF insertion, as Spark does with other connectors).
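The limitation recorded in this hunk, that JDBC writes can only target tables that already exist, is easy to see in code. A minimal Scala sketch, assuming the ClickHouse JDBC driver is on the Spark classpath and a table named `jdbc_example` with a compatible schema has already been created in ClickHouse (the URL and credentials are placeholders):

```scala
import org.apache.spark.sql.{SaveMode, SparkSession}

val spark = SparkSession.builder().appName("jdbc-write-sketch").getOrCreate()

// Toy DataFrame; the pre-existing ClickHouse table must have a compatible schema.
val df = spark.range(10).withColumnRenamed("id", "value")

df.write
  .format("jdbc")
  .option("url", "jdbc:ch://localhost:8123/default")        // placeholder endpoint
  .option("driver", "com.clickhouse.jdbc.ClickHouseDriver")  // clickhouse-jdbc driver class
  .option("dbtable", "jdbc_example")                         // table must already exist
  .option("user", "default")
  .option("password", "")
  .mode(SaveMode.Append)                                     // append only; no auto-create on insert
  .save()
```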

docs/integrations/data-ingestion/apache-spark/spark-native-connector.md

Lines changed: 11 additions & 11 deletions
@@ -11,7 +11,7 @@ import Tabs from '@theme/Tabs';
 import TabItem from '@theme/TabItem';
 import TOCInline from '@theme/TOCInline';
 
-# Spark Connector
+# Spark connector
 
 This connector leverages ClickHouse-specific optimizations, such as advanced partitioning and predicate pushdown, to
 improve query performance and data handling.
@@ -35,7 +35,7 @@ catalog feature, it is now possible to add and work with multiple catalogs in a
 - Scala 2.12 or 2.13
 - Apache Spark 3.3 or 3.4 or 3.5
 
-## Compatibility Matrix {#compatibility-matrix}
+## Compatibility matrix {#compatibility-matrix}
 
 | Version | Compatible Spark Versions | ClickHouse JDBC version |
 |---------|---------------------------|-------------------------|
@@ -50,7 +50,7 @@ catalog feature, it is now possible to add and work with multiple catalogs in a
 | 0.2.1 | Spark 3.2 | Not depend on |
 | 0.1.2 | Spark 3.2 | Not depend on |
 
-## Installation & Setup {#installation--setup}
+## Installation & setup {#installation--setup}
 
 For integrating ClickHouse with Spark, there are multiple installation options to suit different project setups.
 You can add the ClickHouse Spark connector as a dependency directly in your project's build file (such as in `pom.xml`
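For the build-file route mentioned in this hunk's context, an sbt equivalent of the `pom.xml` dependency could look like the sketch below. The coordinates and version numbers are illustrative assumptions rather than values taken from this commit, so check the compatibility matrix above for the ones matching your Spark and Scala versions.

```scala
// build.sbt (illustrative coordinates only; verify against the compatibility matrix)
libraryDependencies ++= Seq(
  // ClickHouse Spark connector runtime for Spark 3.5 (hypothetical version)
  "com.clickhouse.spark" %% "clickhouse-spark-runtime-3.5" % "0.8.0",
  // ClickHouse JDBC driver used underneath (hypothetical version; "all" = shaded jar)
  "com.clickhouse" % "clickhouse-jdbc" % "0.6.3" classifier "all"
)
```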
@@ -146,7 +146,7 @@ for production.
 </TabItem>
 </Tabs>
 
-### Download The Library {#download-the-library}
+### Download the library {#download-the-library}
 
 The name pattern of the binary JAR is:
 
@@ -172,7 +172,7 @@ In any case, ensure that the package versions are compatible according to
 the [Compatibility Matrix](#compatibility-matrix).
 :::
 
-## Register The Catalog (required) {#register-the-catalog-required}
+## Register the catalog (required) {#register-the-catalog-required}
 
 In order to access your ClickHouse tables, you must configure a new Spark catalog with the following configs:
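As a rough picture of the catalog registration this hunk refers to, the required configs boil down to a handful of `spark.sql.catalog.*` settings on the Spark session. A sketch, assuming a local ClickHouse reachable on the default HTTP port; the catalog class name and all connection values are assumptions and differ between connector releases:

```scala
import org.apache.spark.sql.SparkSession

// Sketch: register a ClickHouse catalog named "clickhouse".
// Host, port, credentials, and the catalog implementation class are placeholders.
val spark = SparkSession.builder()
  .appName("clickhouse-catalog-sketch")
  .config("spark.sql.catalog.clickhouse", "com.clickhouse.spark.ClickHouseCatalog") // class name varies by version
  .config("spark.sql.catalog.clickhouse.host", "127.0.0.1")
  .config("spark.sql.catalog.clickhouse.protocol", "http")
  .config("spark.sql.catalog.clickhouse.http_port", "8123")
  .config("spark.sql.catalog.clickhouse.user", "default")
  .config("spark.sql.catalog.clickhouse.password", "")
  .config("spark.sql.catalog.clickhouse.database", "default")
  .getOrCreate()

// Tables are then addressed as clickhouse.<db>.<table>:
spark.sql("SHOW TABLES IN clickhouse.default").show()
```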

@@ -222,7 +222,7 @@ That way, you would be able to access clickhouse1 table `<ck_db>.<ck_table>` fro
 
 :::
 
-## ClickHouse Cloud Settings {#clickhouse-cloud-settings}
+## ClickHouse Cloud settings {#clickhouse-cloud-settings}
 
 When connecting to [ClickHouse Cloud](https://clickhouse.com), make sure to enable SSL and set the appropriate SSL mode. For example:
 
@@ -231,7 +231,7 @@ spark.sql.catalog.clickhouse.option.ssl true
 spark.sql.catalog.clickhouse.option.ssl_mode NONE
 ```
 
-## Read Data {#read-data}
+## Read data {#read-data}
 
 <Tabs groupId="spark_apis">
 <TabItem value="Java" label="Java" default>
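Relative to the catalog sketch above, the two SSL lines visible in this hunk become two extra `.config()` calls when the session is built. The hostname and HTTPS port below are placeholders for a ClickHouse Cloud service:

```scala
import org.apache.spark.sql.SparkSession

// Sketch for ClickHouse Cloud: same catalog registration, plus the SSL options from the snippet above.
val spark = SparkSession.builder()
  .appName("clickhouse-cloud-sketch")
  .config("spark.sql.catalog.clickhouse", "com.clickhouse.spark.ClickHouseCatalog") // class name varies by version
  .config("spark.sql.catalog.clickhouse.host", "your-service.clickhouse.cloud")     // placeholder hostname
  .config("spark.sql.catalog.clickhouse.http_port", "8443")                         // assumed Cloud HTTPS port
  .config("spark.sql.catalog.clickhouse.user", "default")
  .config("spark.sql.catalog.clickhouse.password", sys.env.getOrElse("CLICKHOUSE_PASSWORD", ""))
  .config("spark.sql.catalog.clickhouse.option.ssl", "true")
  .config("spark.sql.catalog.clickhouse.option.ssl_mode", "NONE")
  .getOrCreate()
```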
@@ -338,7 +338,7 @@ df.show()
 </TabItem>
 </Tabs>
 
-## Write Data {#write-data}
+## Write data {#write-data}
 
 <Tabs groupId="spark_apis">
 <TabItem value="Java" label="Java" default>
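For orientation on the two sections these hunks rename, reading and writing through the registered catalog is plain DataFrame API. A short sketch that reuses the `spark` session from the catalog sketch and assumes `clickhouse.default.example_table` already exists (the same table name that appears in the next hunk's context):

```scala
// Read: the catalog exposes ClickHouse tables as ordinary Spark tables.
val df = spark.table("clickhouse.default.example_table")
df.show()

// Write: append back into the ClickHouse table via the DataFrameWriterV2 API,
// matching the writeTo(...).append() call visible in the next hunk's context.
df.writeTo("clickhouse.default.example_table").append()
```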
@@ -472,7 +472,7 @@ df.writeTo("clickhouse.default.example_table").append()
 </TabItem>
 </Tabs>
 
-## DDL Operations {#ddl-operations}
+## DDL operations {#ddl-operations}
 
 You can perform DDL operations on your ClickHouse instance using Spark SQL, with all changes immediately persisted in
 ClickHouse.
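A sketch of what such a DDL statement can look like through Spark SQL, again reusing the session from the catalog sketch; the table, columns, and `TBLPROPERTIES` keys are illustrative assumptions and may differ between connector versions:

```scala
// Create a MergeTree-backed table in ClickHouse through the Spark catalog (illustrative schema).
spark.sql(
  """
    |CREATE TABLE clickhouse.default.ddl_example (
    |  id   BIGINT    NOT NULL COMMENT 'sort key',
    |  name STRING,
    |  ts   TIMESTAMP NOT NULL
    |) USING ClickHouse
    |TBLPROPERTIES (
    |  engine = 'MergeTree()',
    |  order_by = 'id'
    |)
    |""".stripMargin)

// Dropping the table is likewise plain Spark SQL.
spark.sql("DROP TABLE IF EXISTS clickhouse.default.ddl_example")
```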
@@ -530,7 +530,7 @@ The following are the adjustable configurations available in the connector:
 | spark.clickhouse.write.retryInterval | 10s | The interval in seconds between write retry. | 0.1.0 |
 | spark.clickhouse.write.retryableErrorCodes | 241 | The retryable error codes returned by ClickHouse server when write failing. | 0.1.0 |
 
-## Supported Data Types {#supported-data-types}
+## Supported data types {#supported-data-types}
 
 This section outlines the mapping of data types between Spark and ClickHouse. The tables below provide quick references
 for converting data types when reading from ClickHouse into Spark and when inserting data from Spark into ClickHouse.
@@ -596,7 +596,7 @@ for converting data types when reading from ClickHouse into Spark and when inser
 | `Object` | || | |
 | `Nested` | || | |
 
-## Contributing and Support {#contributing-and-support}
+## Contributing and support {#contributing-and-support}
 
 If you'd like to contribute to the project or report any issues, we welcome your input!
 Visit our [GitHub repository](https://github.com/ClickHouse/spark-clickhouse-connector) to open an issue, suggest

docs/integrations/data-ingestion/azure-data-factory/using_azureblobstorage.md

Lines changed: 2 additions & 2 deletions
@@ -78,7 +78,7 @@ SELECT * FROM azureBlobStorage(
 This allows you to efficiently pull external data into ClickHouse without
 needing intermediate ETL steps.
 
-## A simple example using the Environmental Sensors Dataset {#simple-example-using-the-environmental-sensors-dataset}
+## A simple example using the Environmental sensors dataset {#simple-example-using-the-environmental-sensors-dataset}
 
 As an example we will download a single file from the Environmental Sensors
 Dataset.
@@ -152,7 +152,7 @@ inference from input data](https://clickhouse.com/docs/interfaces/schema-inferen
 Your sensors table is now populated with data from the `2019-06_bmp180.csv.zst`
 file stored in Azure Blob Storage.
 
-## Additional Resources {#additional-resources}
+## Additional resources {#additional-resources}
 
 This is just a basic introduction to using the azureBlobStorage function. For
 more advanced options and configuration details, please refer to the official

docs/integrations/data-ingestion/azure-data-factory/using_http_interface.md

Lines changed: 3 additions & 3 deletions
@@ -34,7 +34,7 @@ import adfCopyDataSource from '@site/static/images/integr
 import adfCopyDataSinkSelectPost from '@site/static/images/integrations/data-ingestion/azure-data-factory/adf-copy-data-sink-select-post.png';
 import adfCopyDataDebugSuccess from '@site/static/images/integrations/data-ingestion/azure-data-factory/adf-copy-data-debug-success.png';
 
-# Using ClickHouse HTTP Interface in Azure Data Factory {#using-clickhouse-http-interface-in-azure-data-factory}
+# Using ClickHouse HTTP interface in Azure data factory {#using-clickhouse-http-interface-in-azure-data-factory}
 
 The [`azureBlobStorage` Table Function](https://clickhouse.com/docs/sql-reference/table-functions/azureBlobStorage)
 is a fast and convenient way to ingest data from Azure Blob Storage into
@@ -118,7 +118,7 @@ Service to your ClickHouse instance, define a Dataset for the
 [REST sink](https://learn.microsoft.com/en-us/azure/data-factory/connector-rest),
 and create a Copy Data activity to send data from Azure to ClickHouse.
 
-## Creating an Azure Data Factory instance {#create-an-azure-data-factory-instance}
+## Creating an Azure data factory instance {#create-an-azure-data-factory-instance}
 
 This guide assumes that you have access to Microsoft Azure account, and you
 already have configured a subscription and a resource group. If you have
@@ -321,7 +321,7 @@ Now that we've configured both the input and output datasets, we can set up a
 
 6. Once complete, click **Publish all** to save your pipeline and dataset changes.
 
-## Additional Resources {#additional-resources-1}
+## Additional resources {#additional-resources-1}
 - [HTTP Interface](https://clickhouse.com/docs/interfaces/http)
 - [Copy and transform data from and to a REST endpoint by using Azure Data Factory](https://learn.microsoft.com/en-us/azure/data-factory/connector-rest?tabs=data-factory)
 - [Selecting an Insert Strategy](https://clickhouse.com/docs/best-practices/selecting-an-insert-strategy)

docs/integrations/data-ingestion/azure-synapse/index.md

Lines changed: 2 additions & 2 deletions
@@ -72,15 +72,15 @@ Please visit the [ClickHouse Spark configurations page](/integrations/apache-spa
 When working with ClickHouse Cloud Please make sure to set the [required Spark settings](/integrations/apache-spark/spark-native-connector#clickhouse-cloud-settings).
 :::
 
-## Setup Verification {#setup-verification}
+## Setup verification {#setup-verification}
 
 To verify that the dependencies and configurations were set successfully, please visit your session's Spark UI, and go to your `Environment` tab.
 There, look for your ClickHouse related settings:
 
 <Image img={sparkUICHSettings} size="xl" alt="Verifying ClickHouse settings using Spark UI" border/>
 
 
-## Additional Resources {#additional-resources}
+## Additional resources {#additional-resources}
 
 - [ClickHouse Spark Connector Docs](/integrations/apache-spark)
 - [Azure Synapse Spark Pools Overview](https://learn.microsoft.com/en-us/azure/synapse-analytics/spark/apache-spark-overview)

docs/integrations/data-ingestion/clickpipes/index.md

Lines changed: 1 addition & 1 deletion
@@ -80,7 +80,7 @@ Steps:
 ClickPipes will create a table next to your destination table with the postfix `<destination_table_name>_clickpipes_error`. This table will contain any errors from the operations of your ClickPipe (network, connectivity, etc.) and also any data that don't conform to the schema. The error table has a [TTL](/engines/table-engines/mergetree-family/mergetree#table_engine-mergetree-ttl) of 7 days.
 If ClickPipes cannot connect to a data source or destination after 15min., ClickPipes instance stops and stores an appropriate message in the error table (providing the ClickHouse instance is available).
 
-## F.A.Q {#faq}
+## FAQ {#faq}
 - **What is ClickPipes?**
 
 ClickPipes is a ClickHouse Cloud feature that makes it easy for users to connect their ClickHouse services to external data sources, specifically Kafka. With ClickPipes for Kafka, users can easily continuously load data into ClickHouse, making it available for real-time analytics.

docs/integrations/data-ingestion/clickpipes/kafka.md

Lines changed: 4 additions & 4 deletions
@@ -165,7 +165,7 @@ ClickPipes support the JSON type in the following circumstances:
 Note that you will have to manually change the destination column to the desired JSON type, including any fixed or skipped paths.
 
 ### Avro {#avro}
-#### Supported Avro Data Types {#supported-avro-data-types}
+#### Supported Avro data types {#supported-avro-data-types}
 
 ClickPipes supports all Avro Primitive and Complex types, and all Avro Logical types except `time-millis`, `time-micros`, `local-timestamp-millis`, `local_timestamp-micros`, and `duration`. Avro `record` types are converted to Tuple, `array` types to Array, and `map` to Map (string keys only). In general the conversions listed [here](/interfaces/formats/Avro#data-types-matching) are available. We recommend using exact type matching for Avro numeric types, as ClickPipes does not check for overflow or precision loss on type conversion.
 
@@ -210,7 +210,7 @@ view), it may improve ClickPipes performance to delete all the "non-virtual" col
 
 ## Best practices {#best-practices}
 
-### Message Compression {#compression}
+### Message compression {#compression}
 We strongly recommend using compression for your Kafka topics. Compression can result in a significant saving in data transfer costs with virtually no performance hit.
 To learn more about message compression in Kafka, we recommend starting with this [guide](https://www.confluent.io/blog/apache-kafka-message-compression/).
 
@@ -300,7 +300,7 @@ Role-based access only works for ClickHouse Cloud instances deployed to AWS.
 ```
 
 
-### Custom Certificates {#custom-certificates}
+### Custom certificates {#custom-certificates}
 ClickPipes for Kafka supports the upload of custom certificates for Kafka brokers with SASL & public SSL/TLS certificate. You can upload your certificate in the SSL Certificate section of the ClickPipe setup.
 :::note
 Please note that while we support uploading a single SSL certificate along with SASL for Kafka, SSL with Mutual TLS (mTLS) is not supported at this time.
@@ -333,7 +333,7 @@ Regardless number of running consumers, fault tolerance is available by design.
 If a consumer or its underlying infrastructure fails,
 the ClickPipe will automatically restart the consumer and continue processing messages.
 
-## F.A.Q {#faq}
+## FAQ {#faq}
 
 ### General {#general}

docs/integrations/data-ingestion/clickpipes/kinesis.md

Lines changed: 3 additions & 3 deletions
@@ -84,12 +84,12 @@ You have familiarized yourself with the [ClickPipes intro](./index.md) and setup
 10. **Congratulations!** you have successfully set up your first ClickPipe. If this is a streaming ClickPipe it will be continuously running, ingesting data in real-time from your remote data source. Otherwise it will ingest the batch and complete.
 
 
-## Supported Data Formats {#supported-data-formats}
+## Supported data formats {#supported-data-formats}
 
 The supported formats are:
 - [JSON](../../../interfaces/formats.md/#json)
 
-## Supported Data Types {#supported-data-types}
+## Supported data types {#supported-data-types}
 
 ### Standard types support {#standard-types-support}
 The following ClickHouse data types are currently supported in ClickPipes:
@@ -125,7 +125,7 @@ have to submit a support ticket to enable it on your service.
 JSON fields that are always a JSON object can be assigned to a JSON destination column. You will have to manually change the destination
 column to the desired JSON type, including any fixed or skipped paths.
 
-## Kinesis Virtual Columns {#kinesis-virtual-columns}
+## Kinesis virtual columns {#kinesis-virtual-columns}
 
 The following virtual columns are supported for Kinesis stream. When creating a new destination table virtual columns can be added by using the `Add Column` button.

docs/integrations/data-ingestion/clickpipes/mysql/index.md

Lines changed: 1 addition & 1 deletion
@@ -74,7 +74,7 @@ Make sure you are logged in to your ClickHouse Cloud account. If you don't have
 
 <Image img={mysql_connection_details} alt="Fill in connection details" size="lg" border/>
 
-#### (Optional) Set up SSH Tunneling {#optional-setting-up-ssh-tunneling}
+#### (Optional) Set up SSH tunneling {#optional-setting-up-ssh-tunneling}
 
 You can specify SSH tunneling details if your source MySQL database is not publicly accessible.

docs/integrations/data-ingestion/clickpipes/mysql/source/aurora.md

Lines changed: 1 addition & 1 deletion
@@ -73,7 +73,7 @@ Then click on `Save Changes` in the top-right. You may need to reboot your insta
 If you have a MySQL cluster, the above parameters would be found in a [DB Cluster](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_WorkingWithParamGroups.CreatingCluster.html) parameter group and not the DB instance group.
 :::
 
-## Enabling GTID Mode {#gtid-mode-aurora}
+## Enabling GTID mode {#gtid-mode-aurora}
 Global Transaction Identifiers (GTIDs) are unique IDs assigned to each committed transaction in MySQL. They simplify binlog replication and make troubleshooting more straightforward.
 
 If your MySQL instance is MySQL 5.7, 8.0 or 8.4, we recommend enabling GTID mode so that the MySQL ClickPipe can use GTID replication.
