Commit 9e97393 ("more spelling fixes!")
1 parent: fea4417

68 files changed: +450 / -361 lines


docs/en/chdb/guides/jupysql.md (3 additions, 3 deletions)

@@ -3,7 +3,7 @@ title: JupySQL and chDB
 sidebar_label: JupySQL
 slug: /en/chdb/guides/jupysql
 description: How to install chDB for Bun
-keywords: [chdb, jupysql]
+keywords: [chdb, JupySQL]
 ---

 [JupySQL](https://jupysql.ploomber.io/en/latest/quick-start.html) is a Python library that lets you run SQL in Jupyter notebooks and the IPython shell.
@@ -65,7 +65,7 @@ for file in files:

 ## Configuring chDB and JupySQL

-Next, let's import chDB's `dbapi` module:
+Next, let's import the `dbapi` module for chDB:

 ```python
 from chdb import dbapi
@@ -168,7 +168,7 @@ The default database doesn't persist data on disk, so we need to create another
 %sql CREATE DATABASE atp
 ```

-And now we're going to create a table called `rankings` whos schema will be derived from the structure of the data in the CSV files:
+And now we're going to create a table called `rankings` whose schema will be derived from the structure of the data in the CSV files:

 ```python
 %%sql
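The hunk above imports chDB's `dbapi` module, which exposes a Python DB-API 2.0 style interface. As a hedged illustration of that interface shape, here is a minimal sketch using the stdlib `sqlite3` module (also DB-API 2.0) in place of chDB itself, so it runs without chDB installed; the exact chDB connection arguments are not shown here and should be checked against the chDB docs:

```python
import sqlite3  # stdlib DB-API 2.0 driver, standing in for `from chdb import dbapi`

# chdb.dbapi.connect() returns a connection with the same cursor/execute/fetch
# surface sketched below (an assumption about the shape, not chDB's exact API).
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("SELECT 1 + 1")
print(cur.fetchone()[0])  # -> 2
conn.close()
```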

docs/en/chdb/guides/query-remote-clickhouse.md (1 addition, 1 deletion)

@@ -150,7 +150,7 @@ df.head(n=5)
 4 2018-03-02 5 23842
 ```

-We can then compute the ratio of Open AI downloads to scikit-learn downloads like this:
+We can then compute the ratio of Open AI downloads to `scikit-learn` downloads like this:

 ```python
 df['ratio'] = df['y_openai'] / df['y_sklearn']
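The pandas line in the hunk above divides the two columns element-wise. A plain-Python sketch of the same computation, using lists in place of `pandas` Series (column names taken from the diff, download counts invented for illustration):

```python
# Element-wise ratio, mirroring df['ratio'] = df['y_openai'] / df['y_sklearn'].
y_openai = [100.0, 250.0, 400.0]     # illustrative download counts
y_sklearn = [1000.0, 1250.0, 800.0]

ratio = [o / s for o, s in zip(y_openai, y_sklearn)]
print(ratio)  # -> [0.1, 0.2, 0.5]
```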

docs/en/chdb/install/python.md (1 addition, 1 deletion)

@@ -165,7 +165,7 @@ Some notes on the chDB Python UDF (User Defined Function) decorator.
 import json
 ...
 ```
-6. The Python interpertor used is the same as the one used to run the script. You can get it from `sys.executable`.
+6. The Python interpreter used is the same as the one used to run the script. You can get it from `sys.executable`.

 see also: [test_udf.py](https://github.com/chdb-io/chdb/blob/main/tests/test_udf.py).
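The corrected note points at `sys.executable`. A quick sketch of what it reports (stdlib only, independent of chDB):

```python
import sys

# sys.executable is the absolute path of the interpreter running this script --
# per the note above, the same interpreter chDB uses for Python UDFs.
# (It can be an empty string in unusual embedded setups, hence the soft check.)
print(sys.executable)
```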

docs/en/chdb/reference/data-formats.md (1 addition, 1 deletion)

@@ -13,7 +13,7 @@ Output formats are used to arrange the results of a `SELECT`, and to perform `IN
 As well as the data formats that ClickHouse supports, chDB also supports:

 - `ArrowTable` as an output format, the type is Python `pyarrow.Table`
-- `DataFrame` as an input and output format, the type is Python `pandas.DataFrame`. For examples, see [test_joindf.py](https://github.com/chdb-io/chdb/blob/main/tests/test_joindf.py)
+- `DataFrame` as an input and output format, the type is Python `pandas.DataFrame`. For examples, see [`test_joindf.py`](https://github.com/chdb-io/chdb/blob/main/tests/test_joindf.py)
 - `Debug` as ab output (as an alias of `CSV`), but with enabled debug verbose output from ClickHouse.

 The supported data formats from ClickHouse are:

docs/en/cloud/bestpractices/asyncinserts.md (1 addition, 1 deletion)

@@ -20,7 +20,7 @@ There are two possible conditions that can cause ClickHouse to flush the buffer
 - buffer size has reached N bytes in size (N is configurable via [async_insert_max_data_size](/docs/en/operations/settings/settings.md/#async_insert_max_data_size))
 - at least N ms has passed since the last buffer flush (N is configurable via [async_insert_busy_timeout_max_ms](/docs/en/operations/settings/settings.md/#async_insert_busy_timeout_max_ms))

-Everytime any of the conditions above are met, ClickHouse will flush its in-memory buffer to disk.
+Any time any of the conditions above are met, ClickHouse will flush its in-memory buffer to disk.

 :::note
 Your data is available for read queries once the data is written to a part on storage. Keep this in mind for when you want to modify the `async_insert_busy_timeout_ms` (set as 1 second by default) or the `async_insert_max_data_size` (set as 10 MiB by default) settings.
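The two flush conditions in the hunk above combine as a simple OR: flush when the buffer reaches `async_insert_max_data_size` bytes, or when `async_insert_busy_timeout_max_ms` has elapsed since the last flush. A toy sketch of that decision rule (not ClickHouse code, just the documented logic, with the defaults quoted in the note above):

```python
def should_flush(buffer_bytes, ms_since_last_flush,
                 max_data_size=10 * 1024 * 1024,  # default 10 MiB per the note
                 busy_timeout_ms=1000):           # default 1 second per the note
    """Return True when either documented flush condition is met."""
    return buffer_bytes >= max_data_size or ms_since_last_flush >= busy_timeout_ms

print(should_flush(11 * 1024 * 1024, 5))  # -> True  (size threshold reached)
print(should_flush(1024, 1500))           # -> True  (timeout elapsed)
print(should_flush(1024, 5))              # -> False (neither condition met)
```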

docs/en/cloud/changelogs/changelog-24-10.md (1 addition, 1 deletion)

@@ -9,7 +9,7 @@ Relevant changes for ClickHouse Cloud services based on the v24.10 release.

 ## Backward Incompatible Change
 - Allow to write `SETTINGS` before `FORMAT` in a chain of queries with `UNION` when subqueries are inside parentheses. This closes [#39712](https://github.com/ClickHouse/ClickHouse/issues/39712). Change the behavior when a query has the SETTINGS clause specified twice in a sequence. The closest SETTINGS clause will have a preference for the corresponding subquery. In the previous versions, the outermost SETTINGS clause could take a preference over the inner one. [#60197](https://github.com/ClickHouse/ClickHouse/pull/60197)[#68614](https://github.com/ClickHouse/ClickHouse/pull/68614) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-- Reimplement Dynamic type. Now when the limit of dynamic data types is reached new types are not casted to String but stored in a special data structure in binary format with binary encoded data type. Now any type ever inserted into Dynamic column can be read from it as subcolumn. [#68132](https://github.com/ClickHouse/ClickHouse/pull/68132) ([Pavel Kruglov](https://github.com/Avogar)).
+- Reimplement Dynamic type. Now when the limit of dynamic data types is reached new types are not cast to String but stored in a special data structure in binary format with binary encoded data type. Now any type ever inserted into Dynamic column can be read from it as subcolumn. [#68132](https://github.com/ClickHouse/ClickHouse/pull/68132) ([Pavel Kruglov](https://github.com/Avogar)).
 - Expressions like `a[b].c` are supported for named tuples, as well as named subscripts from arbitrary expressions, e.g., `expr().name`. This is useful for processing JSON. This closes [#54965](https://github.com/ClickHouse/ClickHouse/issues/54965). In previous versions, an expression of form `expr().name` was parsed as `tupleElement(expr(), name)`, and the query analyzer was searching for a column `name` rather than for the corresponding tuple element; while in the new version, it is changed to `tupleElement(expr(), 'name')`. In most cases, the previous version was not working, but it is possible to imagine a very unusual scenario when this change could lead to incompatibility: if you stored names of tuple elements in a column or an alias, that was named differently than the tuple element's name: `SELECT 'b' AS a, CAST([tuple(123)] AS 'Array(Tuple(b UInt8))') AS t, t[1].a`. It is very unlikely that you used such queries, but we still have to mark this change as potentially backward incompatible. [#68435](https://github.com/ClickHouse/ClickHouse/pull/68435) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
 - When the setting `print_pretty_type_names` is enabled, it will print `Tuple` data type in a pretty form in `SHOW CREATE TABLE` statements, `formatQuery` function, and in the interactive mode in `clickhouse-client` and `clickhouse-local`. In previous versions, this setting was only applied to `DESCRIBE` queries and `toTypeName`. This closes [#65753](https://github.com/ClickHouse/ClickHouse/issues/65753). [#68492](https://github.com/ClickHouse/ClickHouse/pull/68492) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
 - Reordering of filter conditions from `[PRE]WHERE` clause is now allowed by default. It could be disabled by setting `allow_reorder_prewhere_conditions` to `false`. [#70657](https://github.com/ClickHouse/ClickHouse/pull/70657) ([Nikita Taranov](https://github.com/nickitat)).

docs/en/cloud/get-started/query-insights.md (2 additions, 2 deletions)

@@ -27,13 +27,13 @@ Beneath the top-level metrics, a table displays query log entries (grouped by no

 ![Query Insights UI Recent Queries Table](@site/docs/en/cloud/images/sqlconsole/insights_recent.png)

-Recent queries can be filtered and sorted by any available field. The table can also be configured to display orhide additional fields such as tables, p90, and p99 latencies.
+Recent queries can be filtered and sorted by any available field. The table can also be configured to display or hide additional fields such as tables, p90, and p99 latencies.

 ## Query drill-down

 Selecting a query from the recent queries table will open a flyout containing metrics and information specific to the selected query:

-![Query Insights UI Query Drilldown](@site/docs/en/cloud/images/sqlconsole/insights_drilldown.png)
+![Query Insights UI Query Drill down](@site/docs/en/cloud/images/sqlconsole/insights_drilldown.png)

 As we can see from the flyout, this particular query has been run more than 3000 times in the last 24 hours. All metrics in the **Query info** tab are aggregated metrics, but we can also view metrics from individual runs by selecting the **Query history** tab:
docs/en/cloud/manage/backups.md (1 addition, 1 deletion)

@@ -17,7 +17,7 @@ Database backups provide a safety net by ensuring that if data is lost for any u

 ## How backups work in ClickHouse Cloud

-ClickHouse Cloud backups are a combination of "full" and "incremental" backups that constitute a backup chain. The chain starts with a full backup, and incremental backups are then taken over the next several scheduled time periods to create a sequence of backups. Once a backup chain reaches a certain length, a new chain is started. This entire chain of backups can then be utilized to restore data to a new service if needed. Once all backups included in a specific chain are past the retention timeframe set for the service (more on retention below), the chain is discarded.
+ClickHouse Cloud backups are a combination of "full" and "incremental" backups that constitute a backup chain. The chain starts with a full backup, and incremental backups are then taken over the next several scheduled time periods to create a sequence of backups. Once a backup chain reaches a certain length, a new chain is started. This entire chain of backups can then be utilized to restore data to a new service if needed. Once all backups included in a specific chain are past the retention time frame set for the service (more on retention below), the chain is discarded.

 In the screenshot below, the solid line squares show full backups and the dotted line squares show incremental backups. The solid line rectangle around the squares denotes the retention period and the backups that are visible to the end user, which can be used for a backup restore. In the scenario below, backups are being taken every 24 hours and are retained for 2 days.
docs/en/cloud/manage/billing/marketplace/gcp-marketplace-payg.md (1 addition, 1 deletion)

@@ -41,7 +41,7 @@ Get started with ClickHouse Cloud on the [GCP Marketplace](https://console.cloud
 3. On the next screen, configure the subscription:

 - The plan will default to "ClickHouse Cloud"
-- Subscription timeframe is "Monthly"
+- Subscription time frame is "Monthly"
 - Choose the appropriate billing account
 - Accept the terms and click **Subscribe**
docs/en/cloud/reference/byoc.md (1 addition, 1 deletion)

@@ -97,7 +97,7 @@ This section is focused on different network traffic to and from the customer BY
 *Inbound, Public (can be Private)*

 The Istio ingress gateway terminates TLS. The certificate is provisioned by CertManager with Let's Encrypt and is stored as a secret within the EKS cluster. Traffic between Istio and ClickHouse is [encrypted by AWS](https://docs.aws.amazon.com/whitepapers/latest/logical-separation/encrypting-data-at-rest-and--in-transit.html#:~:text=All%20network%20traffic%20between%20AWS,supported%20Amazon%20EC2%20instance%20types) as they are in the same VPC.
-By default, ingress is available to the public internet with IP allowlist filtering. The customer has the option to set up VPC peering to make it private and disable public connections. We highly recommend you configure an [IP filter](/en/cloud/security/setting-ip-filters) to restrict access.
+By default, ingress is available to the public internet with IP allow list filtering. The customer has the option to set up VPC peering to make it private and disable public connections. We highly recommend you configure an [IP filter](/en/cloud/security/setting-ip-filters) to restrict access.

 **Troubleshooting access**