
Commit 524d702 ("more spellings")
1 parent 9e97393

34 files changed: +121 −91 lines


docs/en/chdb/install/nodejs.md
Lines changed: 1 addition & 1 deletion

@@ -3,7 +3,7 @@ title: Installing chDB for NodeJS
 sidebar_label: NodeJS
 slug: /en/chdb/install/nodejs
 description: How to install chDB for NodeJS
-keywords: [chdb, embedded, clickhouse-lite, nodejs, install]
+keywords: [chdb, embedded, clickhouse-lite, NodeJS, install]
 ---

 # Installing chDB for NodeJS

docs/en/cloud/security/azure-privatelink.md
Lines changed: 8 additions & 8 deletions

@@ -16,7 +16,7 @@ This guide shows how to use Azure Private Link to provide private connectivity v

 ![Overview of PrivateLink](@site/docs/en/cloud/security/images/azure-pe.png)

-Unlike AWS and GCP, Azure supports cross-region connectivity via Private Link. This enables you to establish connections between VNETs located in different regions where you have ClickHouse services deployed.
+Unlike AWS and GCP, Azure supports cross-region connectivity via Private Link. This enables you to establish connections between VNets located in different regions where you have ClickHouse services deployed.

 :::note
 Additional charges may be applied to inter-region traffic. Please check latest Azure documentation.

@@ -103,7 +103,7 @@ In the following screen, specify the following options:

 - **Subscription** / **Resource Group**: Please choose the Azure subscription and resource group for the Private Endpoint.
 - **Name**: Set a name for the **Private Endpoint**.
-- **Region**: Choose region where the deployed VNET that will be connected to ClickHouse Cloud via Private Link.
+- **Region**: Choose region where the deployed VNet that will be connected to ClickHouse Cloud via Private Link.

 After you have completed the above steps, click the **Next: Resource** button.

@@ -113,15 +113,15 @@ After you have completed the above steps, click the **Next: Resource** button.

 Select the option **Connect to an Azure resource by resource ID or alias**.

-For the **Resource ID or alias**, use the **endpointServiceId** you have obtained from the [Obtain Azure connection alias for Private Link](#obtain-azure-connection-alias-for-private-link) step.
+For the **Resource ID or alias**, use the `endpointServiceId` you have obtained from the [Obtain Azure connection alias for Private Link](#obtain-azure-connection-alias-for-private-link) step.

 Click **Next: Virtual Network** button.

 ![PE resource](@site/docs/en/cloud/security/images/azure-pe-resource.png)

 ---

-- **Virtual network**: Choose the VNET you want to connect to ClickHouse Cloud using Private Link
+- **Virtual network**: Choose the VNet you want to connect to ClickHouse Cloud using Private Link
 - **Subnet**: Choose the subnet where Private Endpoint will be created

 Optional:

@@ -189,7 +189,7 @@ Under properties, find `resourceGuid` field and copy this value:

 ## Setting up DNS for Private Link

-You need will need to create a Private DNS zone (`${location_code}.privatelink.azure.clickhouse.cloud`) and attach it to your VNET to access resources via Private Link.
+You need will need to create a Private DNS zone (`${location_code}.privatelink.azure.clickhouse.cloud`) and attach it to your VNet to access resources via Private Link.

 ### Create Private DNS zone

@@ -214,7 +214,7 @@ Create a wildcard record and point to your Private Endpoint:

 **Option 1: Using Azure Portal**

-1. Open the MyAzureResourceGroup resource group and select the `${region_code}.privatelink.azure.clickhouse.cloud` private zone.
+1. Open the `MyAzureResourceGroup` resource group and select the `${region_code}.privatelink.azure.clickhouse.cloud` private zone.
 2. Select + Record set.
 3. For Name, type `*`.
 4. For IP Address, type the IP address you see for Private Endpoint.

@@ -259,7 +259,7 @@ resource "azurerm_private_dns_zone_virtual_network_link" "example" {

 ### Verify DNS setup

-Any record within the westus3.privatelink.azure.clickhouse.cloud domain should be pointed to the Private Endpoint IP. (10.0.0.4 in this example).
+Any record within the `westus3.privatelink.azure.clickhouse.cloud` domain should be pointed to the Private Endpoint IP. (10.0.0.4 in this example).

 ```bash
 nslookup instance-id.westus3.privatelink.azure.clickhouse.cloud.

@@ -408,7 +408,7 @@ curl --silent --user ${KEY_ID:?}:${KEY_SECRET:?} -X PATCH -H "Content-Type: appl
 Each service with Private Link enabled has a public and private endpoint. In order to connect using Private Link, you need to use a private endpoint which will be `privateDnsHostname`.

 :::note
-Private DNS hostname is only available from your Azure VNET. Do not try to resolve the DNS host from a machine that resides outside of Azure VNET.
+Private DNS hostname is only available from your Azure VNet. Do not try to resolve the DNS host from a machine that resides outside of Azure VNet.
 :::

 ### Obtaining the Private DNS Hostname

docs/en/faq/operations/production.md
Lines changed: 2 additions & 2 deletions

@@ -34,7 +34,7 @@ Here are some key points to get reasonable fidelity in a pre-production environm

 The second area to invest in is **automated testing infrastructure**. Don’t assume that if some kind of query has executed successfully once, it’ll continue to do so forever. It’s OK to have some unit tests where ClickHouse is mocked, but make sure your product has a reasonable set of automated tests that are run against real ClickHouse and check that all important use cases are still working as expected.

-An extra step forward could be contributing those automated tests to [ClickHouse’s open-source test infrastructure](https://github.com/ClickHouse/ClickHouse/tree/master/tests) that are continuously used in its day-to-day development. It definitely will take some additional time and effort to learn [how to run it](../../development/tests.md) and then how to adapt your tests to this framework, but it’ll pay off by ensuring that ClickHouse releases are already tested against them when they are announced stable, instead of repeatedly losing time on reporting the issue after the fact and then waiting for a bugfix to be implemented, backported and released. Some companies even have such test contributions to infrastructure by its use as an internal policy, (called [Beyonces Rule](https://www.oreilly.com/library/view/software-engineering-at/9781492082781/ch01.html#policies_that_scale_well) at Google).
+An extra step forward could be contributing those automated tests to [ClickHouse’s open-source test infrastructure](https://github.com/ClickHouse/ClickHouse/tree/master/tests) that are continuously used in its day-to-day development. It definitely will take some additional time and effort to learn [how to run it](../../development/tests.md) and then how to adapt your tests to this framework, but it’ll pay off by ensuring that ClickHouse releases are already tested against them when they are announced stable, instead of repeatedly losing time on reporting the issue after the fact and then waiting for a bugfix to be implemented, backported and released. Some companies even have such test contributions to infrastructure by its use as an internal policy, (called [Beyonce's Rule](https://www.oreilly.com/library/view/software-engineering-at/9781492082781/ch01.html#policies_that_scale_well) at Google).

 When you have your pre-production environment and testing infrastructure in place, choosing the best version is straightforward:

@@ -54,7 +54,7 @@ If you look into the contents of the ClickHouse package repository, you’ll see

 Here is some guidance on how to choose between them:

-- `stable` is the kind of package we recommend by default. They are released roughly monthly (and thus provide new features with reasonable delay) and three latest stable releases are supported in terms of diagnostics and backporting of bugfixes.
+- `stable` is the kind of package we recommend by default. They are released roughly monthly (and thus provide new features with reasonable delay) and three latest stable releases are supported in terms of diagnostics and backporting of bug fixes.
 - `lts` are released twice a year and are supported for a year after their initial release. You might prefer them over `stable` in the following cases:
   - Your company has some internal policies that do not allow for frequent upgrades or using non-LTS software.
   - You are using ClickHouse in some secondary products that either do not require any complex ClickHouse features or do not have enough resources to keep it updated.

docs/en/guides/sre/configuring-ssl.md
Lines changed: 3 additions & 3 deletions

@@ -23,9 +23,9 @@ This guide was written using Ubuntu 20.04 and ClickHouse installed on the follow

 |Host |IP Address|
 |--------|-------------|
-|chnode1 |192.168.1.221|
-|chnode2 |192.168.1.222|
-|chnode3 |192.168.1.223|
+|`chnode1` |192.168.1.221|
+|`chnode2` |192.168.1.222|
+|`chnode3` |192.168.1.223|


 :::note

docs/en/guides/sre/keeper/index.md
Lines changed: 8 additions & 8 deletions

@@ -680,7 +680,7 @@ This guide provides simple and minimal settings to configure ClickHouse Keeper w

 ### 1. Configure Nodes with Keeper settings

-1. Install 3 ClickHouse instances on 3 hosts (chnode1, chnode2, chnode3). (View the [Quick Start](/docs/en/getting-started/install.md) for details on installing ClickHouse.)
+1. Install 3 ClickHouse instances on 3 hosts (`chnode1`, `chnode2`, `chnode3`). (View the [Quick Start](/docs/en/getting-started/install.md) for details on installing ClickHouse.)

 2. On each node, add the following entry to allow external communication through the network interface.
 ```xml

@@ -731,7 +731,7 @@ This guide provides simple and minimal settings to configure ClickHouse Keeper w
 |server |definition of server participating|list of each server definition|
 |raft_configuration| settings for each server in the keeper cluster| server and settings for each|
 |id |numeric id of the server for keeper services|1|
-|hostname |hostname, IP or FQDN of each server in the keeper cluster|chnode1.domain.com|
+|hostname |hostname, IP or FQDN of each server in the keeper cluster|`chnode1.domain.com`|
 |port|port to listen on for interserver keeper connections|9234|


@@ -758,7 +758,7 @@ This guide provides simple and minimal settings to configure ClickHouse Keeper w
 |Parameter |Description |Example |
 |----------|------------------------------|---------------------|
 |node |list of nodes for ClickHouse Keeper connections|settings entry for each server|
-|host|hostname, IP or FQDN of each ClickHouse keeper node| chnode1.domain.com|
+|host|hostname, IP or FQDN of each ClickHouse keeper node| `chnode1.domain.com`|
 |port|ClickHouse Keeper client port| 9181|

 5. Restart ClickHouse and verify that each Keeper instance is running. Execute the following command on each server. The `ruok` command returns `imok` if Keeper is running and healthy:

@@ -814,10 +814,10 @@ This guide provides simple and minimal settings to configure ClickHouse Keeper w
 |----------|------------------------------|---------------------|
 |shard |list of replicas on the cluster definition|list of replicas for each shard|
 |replica|list of settings for each replica|settings entries for each replica|
-|host|hostname, IP or FQDN of server that will host a replica shard|chnode1.domain.com|
+|host|hostname, IP or FQDN of server that will host a replica shard|`chnode1.domain.com`|
 |port|port used to communicate using the native tcp protocol|9000|
 |user|username that will be used to authenticate to the cluster instances|default|
-|password|password for the user define to allow connections to cluster instances|ClickHouse123!|
+|password|password for the user define to allow connections to cluster instances|`ClickHouse123!`|


 2. Restart ClickHouse and verify the cluster was created:

@@ -956,9 +956,9 @@ a single ClickHouse shard made up of two replicas.

 |node|description|
 |-----|-----|
-|chnode1.marsnet.local|data node - cluster cluster_1S_2R|
-|chnode2.marsnet.local|data node - cluster cluster_1S_2R|
-|chnode3.marsnet.local| ClickHouse Keeper tie breaker node|
+|`chnode1.marsnet.local`|data node - cluster `cluster_1S_2R`|
+|`chnode2.marsnet.local`|data node - cluster `cluster_1S_2R`|
+|`chnode3.marsnet.local`| ClickHouse Keeper tie breaker node|

 Example config for cluster:
 ```xml

docs/en/integrations/data-ingestion/apache-spark/index.md
Lines changed: 1 addition & 1 deletion

@@ -3,7 +3,7 @@ sidebar_label: Integrating Apache Spark with ClickHouse
 sidebar_position: 1
 slug: /en/integrations/apache-spark
 description: Introduction to Apache Spark with ClickHouse
-keywords: [ clickhouse, apache, spark, migrating, data ]
+keywords: [ clickhouse, Apache Spark, migrating, data ]
 ---

 import Tabs from '@theme/Tabs';

docs/en/integrations/data-ingestion/apache-spark/spark-native-connector.md
Lines changed: 1 addition & 1 deletion

@@ -3,7 +3,7 @@ sidebar_label: Spark Native Connector
 sidebar_position: 2
 slug: /en/integrations/apache-spark/spark-native-connector
 description: Introduction to Apache Spark with ClickHouse
-keywords: [ clickhouse, apache, spark, migrating, data ]
+keywords: [ clickhouse, Apache Spark, migrating, data ]
 ---

 import Tabs from '@theme/Tabs';

docs/en/integrations/data-ingestion/data-formats/parquet.md
Lines changed: 1 addition & 1 deletion

@@ -125,7 +125,7 @@ DESCRIBE TABLE imported_from_parquet;
 └──────┴──────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
 ```

-By default, ClickHouse is strict with column names, types, and values. But sometimes, we can skip unexistent columns or unsupported values during import. This can be managed with [Parquet settings](/docs/en/interfaces/formats.md/#parquet-format-settings).
+By default, ClickHouse is strict with column names, types, and values. But sometimes, we can skip nonexistent columns or unsupported values during import. This can be managed with [Parquet settings](/docs/en/interfaces/formats.md/#parquet-format-settings).


 ## Exporting to Parquet format

docs/en/integrations/data-ingestion/data-sources-index.md
Lines changed: 2 additions & 2 deletions

@@ -1,6 +1,6 @@
 ---
 slug: /en/integrations/index
-keywords: [AWS S3, PostgreSQL, Kafka, Apache Spark, MySQL, Cassandra, Redis, RabbitMQ, MongoDB, Google Cloud Storage, Hive, Hudi, Iceberg, MiniIO, Delta Lake, RocksDB, Splunk, SQlite, NATS, EMQX, local files, JDBC, ODBC]
+keywords: [AWS S3, PostgreSQL, Kafka, Apache Spark, MySQL, Cassandra, Redis, RabbitMQ, MongoDB, Google Cloud Storage, Hive, Hudi, Iceberg, MinIO, Delta Lake, RocksDB, Splunk, SQLite, NATS, EMQX, local files, JDBC, ODBC]
 description: Datasources overview page
 ---

@@ -28,7 +28,7 @@ For further information see the pages listed below:
 | [Delta Lake](/docs/en/integrations/deltalake) |
 | [RocksDB](/docs/en/integrations/rocksdb) |
 | [Splunk](/docs/en/integrations/splunk) |
-| [SQlite](/docs/en/integrations/sqlite) |
+| [SQLite](/docs/en/integrations/sqlite) |
 | [NATS](/docs/en/integrations/nats) |
 | [EMQX](/docs/en/integrations/emqx) |
 | [Insert Local Files](/docs/en/integrations/data-ingestion/insert-local-files) |

docs/en/integrations/data-ingestion/dbms/dynamodb/index.md
Lines changed: 1 addition & 1 deletion

@@ -27,7 +27,7 @@ Data will be ingested into a `ReplacingMergeTree`. This table engine is commonly
 First, you will want to enable a Kinesis stream on your DynamoDB table to capture changes in real-time. We want to do this before we create the snapshot to avoid missing any data.
 Find the AWS guide located [here](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/kds.html).

-![DynamoDB Kenesis Stream](../images/dynamodb-kinesis-stream.png)
+![DynamoDB Kinesis Stream](../images/dynamodb-kinesis-stream.png)

 ## 2. Create the snapshot