
Commit fea4417: spelling fixes
1 parent 1c677d9

108 files changed: +420, -213 lines

docs/en/about-us/adopters.md

Lines changed: 2 additions & 2 deletions
@@ -132,7 +132,7 @@ The following list of companies using ClickHouse and their success stories is as
  | [DNSMonster](https://dnsmonster.dev/) | Software & Technology | DNS Monitoring ||| [GitHub Repository](https://github.com/mosajjal/dnsmonster) |
  | [Darwinium](https://www.darwinium.com/) | Software & Technology | Security and Fraud Analytics ||| [Blog Post, July 2022](https://clickhouse.com/blog/fast-feature-rich-and-mutable-clickhouse-powers-darwiniums-security-and-fraud-analytics-use-cases) |
  | [Dash0](https://www.dash0.com/) | APM Platform | Main product ||| [Careers page](https://careers.dash0.com/senior-product-engineer-backend/en) |
- | [Dashdive](https://www.dashdive.com/) | Infrastructure management | Analytics ||| [Hackernews, 2024](https://news.ycombinator.com/item?id=39178753) |
+ | [Dashdive](https://www.dashdive.com/) | Infrastructure management | Analytics ||| [Hacker News, 2024](https://news.ycombinator.com/item?id=39178753) |
  | [Dassana](https://lake.dassana.io/) | Cloud data platform | Main product | - | - | [Blog Post, Jan 2023](https://clickhouse.com/blog/clickhouse-powers-dassanas-security-data-lake) [Direct reference, April 2022](https://news.ycombinator.com/item?id=31111432) |
  | [Datafold](https://www.datafold.com/) | Data Reliability Platform |||| [Job advertisement, April 2022](https://www.datafold.com/careers) |
  | [Dataliance for China Telecom](https://www.chinatelecomglobal.com/) | Telecom | Analytics ||| [Slides in Chinese, January 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup12/telecom.pdf) |
@@ -146,7 +146,7 @@ The following list of companies using ClickHouse and their success stories is as
  | [Didi](https://web.didiglobal.com/) | Transportation & Ride Sharing | Observability | 400+ logging, 40 tracing | PBs/day / 40GB/s write throughput, 15M queries/day, 200 QPS peak | [Blog, Apr 2024](https://clickhouse.com/blog/didi-migrates-from-elasticsearch-to-clickHouse-for-a-new-generation-log-storage-system) |
  | [DigiCert](https://www.digicert.com) | Network Security | DNS Platform || over 35 billion events per day | [Job posting, Aug 2022](https://www.indeed.com/viewjob?t=Senior+Principal+Software+Engineer+Architect&c=DigiCert&l=Lehi,+UT&jk=403c35f96c46cf37&rtk=1g9mnof7qk7dv800) |
  | [Disney+](https://www.disneyplus.com/) | Video Streaming | Analytics || 395 TiB | [Meetup Video, December 2022](https://www.youtube.com/watch?v=CVVp6N8Xeoc&list=PL0Z2YDlm0b3iNDUzpY1S3L_iV4nARda_U&index=8) [Slides, December 2022](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup67/Disney%20plus%20ClickHouse.pdf) |
- | [Dittofeed](https://dittofeed.com/) | Software & Technology | Open Source Customer Engagement ||| [Hackernews, June 2023](https://news.ycombinator.com/item?id=36061344) |
+ | [Dittofeed](https://dittofeed.com/) | Software & Technology | Open Source Customer Engagement ||| [Hacker News, June 2023](https://news.ycombinator.com/item?id=36061344) |
  | [Diva-e](https://www.diva-e.com) | Digital consulting | Main Product ||| [Slides in English, September 2019](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup29/ClickHouse-MeetUp-Unusual-Applications-sd-2019-09-17.pdf) |
  | [Dolphin Emulator](https://dolphin-emu.org/) | Games | Analytics ||| [Twitter, September 2022](https://twitter.com/delroth_/status/1567300096160665601) |
  | [DoorDash](https://www.doordash.com/home) | E-commerce | Monitoring ||| [Meetup, December 2024](https://github.com/ClickHouse/clickhouse-presentations/blob/master/2024-meetup-san-francisco/Clickhouse%20Meetup%20Slides%20(1).pdf) |

docs/en/chdb/guides/jupysql.md

Lines changed: 3 additions & 3 deletions
@@ -6,7 +6,7 @@ description: How to install chDB for Bun
  keywords: [chdb, jupysql]
  ---
  
- [JupySQL](https://jupysql.ploomber.io/en/latest/quick-start.html) is a Python library that lets you run SQL in Jupyter notebooks and the iPython shell.
+ [JupySQL](https://jupysql.ploomber.io/en/latest/quick-start.html) is a Python library that lets you run SQL in Jupyter notebooks and the IPython shell.
  In this guide, we're going to learn how to query data using chDB and JupySQL.
  
  <div class='vimeo-container'>
@@ -22,13 +22,13 @@ python -m venv .venv
  source .venv/bin/activate
  ```
  
- And then, we'll install JupySQL, iPython, and Jupyter Lab:
+ And then, we'll install JupySQL, IPython, and Jupyter Lab:
  
  ```bash
  pip install jupysql ipython jupyterlab
  ```
  
- We can use JupySQL in iPython, which we can launch by running:
+ We can use JupySQL in IPython, which we can launch by running:
  
  ```bash
  ipython

docs/en/chdb/guides/query-remote-clickhouse.md

Lines changed: 1 addition & 1 deletion
@@ -41,7 +41,7 @@ You can also use the code in a Python script or in your favorite notebook.
  ## An intro to ClickPy
  
  The remote ClickHouse server that we're going to query is [ClickPy](https://clickpy.clickhouse.com).
- ClickPy keeps track of all the downloads of PyPi packages and lets you explore the stats of packages via a UI.
+ ClickPy keeps track of all the downloads of PyPI packages and lets you explore the stats of packages via a UI.
  The underlying database is available to query using the `play` user.
  
  You can learn more about ClickPy in [its GitHub repository](https://github.com/ClickHouse/clickpy).
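As a hedged sketch of what querying it remotely might look like (the endpoint `clickpy-clickhouse.clickhouse.com` and the `pypi.pypi` table are assumptions drawn from the ClickPy project, not from this excerpt; the guide itself shows the exact connection details):

```python
import chdb

# remoteSecure() queries a remote ClickHouse server over TLS;
# ClickPy's database is readable with the `play` user and an empty password.
print(chdb.query(
    "SELECT count() FROM remoteSecure("
    "'clickpy-clickhouse.clickhouse.com', 'pypi.pypi', 'play', '')",
    "CSV",
))
```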

docs/en/chdb/guides/querying-apache-arrow.md

Lines changed: 3 additions & 3 deletions
@@ -3,7 +3,7 @@ title: How to query Apache Arrow with chDB
  sidebar_label: Querying Apache Arrow
  slug: /en/chdb/guides/apache-arrow
  description: In this guide, we'll learn how to query Apache Arrow tables with chDB
- keywords: [chdb, apache-arrow]
+ keywords: [chdb, Apache Arrow]
  ---
  
  [Apache Arrow](https://arrow.apache.org/) is a standardized column-oriented memory format that's gained popularity in the data community.
@@ -25,7 +25,7 @@ Make sure you have version 2.0.2 or higher:
  pip install "chdb>=2.0.2"
  ```
  
- And now we're going to install pyarrow, pandas, and ipython:
+ And now we're going to install PyArrow, pandas, and IPython:
  
  ```bash
  pip install pyarrow pandas ipython
@@ -55,7 +55,7 @@ If you want to download more files, use `aws s3 ls` to get a list of all the fil
  
  
  
- Next, we'll import the Parquet module from the pyarrow package:
+ Next, we'll import the Parquet module from the `pyarrow` package:
  
  ```python
  import pyarrow.parquet as pq

docs/en/chdb/guides/querying-parquet.md

Lines changed: 1 addition & 1 deletion
@@ -25,7 +25,7 @@ Make sure you have version 2.0.2 or higher:
  pip install "chdb>=2.0.2"
  ```
  
- And now we're going to install iPython:
+ And now we're going to install IPython:
  
  ```bash
  pip install ipython

docs/en/chdb/guides/querying-s3-bucket.md

Lines changed: 1 addition & 1 deletion
@@ -25,7 +25,7 @@ Make sure you have version 2.0.2 or higher:
  pip install "chdb>=2.0.2"
  ```
  
- And now we're going to install iPython:
+ And now we're going to install IPython:
  
  ```bash
  pip install ipython

docs/en/chdb/install/python.md

Lines changed: 2 additions & 2 deletions
@@ -207,7 +207,7 @@ chdb.query(
  
  1. You must inherit from the `chdb.PyReader` class and implement the `read` method.
  2. The `read` method should:
-     1. return a list of lists, the first demension is the column, the second dimension is the row, the columns order should be the same as the first arg `col_names` of `read`.
+     1. return a list of lists, where the first dimension is the column and the second dimension is the row; the column order should match the first argument `col_names` of `read`.
      1. return an empty list when there is no more data to read.
      1. be stateful, the cursor should be updated in the `read` method.
  3. An optional `get_schema` method can be implemented to return the schema of the table. The prototype is `def get_schema(self) -> List[Tuple[str, str]]:`, the return value is a list of tuples, each tuple contains the column name and the column type. The column type should be one of [the following](/en/sql-reference/data-types).
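As a rough, hedged sketch of that contract (the class name, column names, and data below are invented for illustration; `Python(reader)` is assumed to work as for other Python objects in this page):

```python
import chdb

class ExampleReader(chdb.PyReader):
    def __init__(self, data):
        # data: dict mapping column name -> list of values
        self.data = data
        self.cursor = 0
        super().__init__(data)

    def get_schema(self):
        # Optional: (column name, ClickHouse type) pairs.
        return [("a", "Int64"), ("b", "String")]

    def read(self, col_names, count):
        # Return at most `count` rows, column-major, in `col_names` order;
        # an empty list signals that there is no more data to read.
        total = len(self.data[col_names[0]])
        if self.cursor >= total:
            return []
        block = [self.data[col][self.cursor:self.cursor + count] for col in col_names]
        self.cursor = min(self.cursor + count, total)
        return block

reader = ExampleReader({"a": [1, 2, 3], "b": ["x", "y", "z"]})
print(chdb.query("SELECT b, a FROM Python(reader) ORDER BY a", "CSV"))
```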
@@ -247,7 +247,7 @@ See also: [test_query_py.py](https://github.com/chdb-io/chdb/blob/main/tests/tes
  
  ## Limitations
  
- 1. Column types supported: pandas.Series, pyarrow.array, chdb.PyReader
+ 1. Column types supported: `pandas.Series`, `pyarrow.array`, `chdb.PyReader`
  1. Data types supported: Int, UInt, Float, String, Date, DateTime, Decimal
  1. Python Object type will be converted to String
  1. Pandas DataFrame performance is best overall; Arrow Table performs better than PyReader
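To make the first and third points concrete, a minimal hedged example of querying a pandas DataFrame in place (the DataFrame and query are invented for illustration):

```python
import chdb
import pandas as pd

df = pd.DataFrame({"user": ["a", "b", "a"], "value": [1, 2, 3]})

# Query the DataFrame directly; object-typed columns (like `user`)
# are converted to String, per the limitations above.
print(chdb.query("SELECT user, sum(value) FROM Python(df) GROUP BY user", "CSV"))
```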

docs/en/cloud/bestpractices/asyncinserts.md

Lines changed: 1 addition & 1 deletion
@@ -4,7 +4,7 @@ sidebar_label: Asynchronous Inserts
  title: Asynchronous Inserts (async_insert)
  ---
  
- Inserting data into ClickHouse in large batches is a best practice. It saves compute cycles and disk I/O, and therefore it saves money. If your usecase allows you to batch your inserts external to ClickHouse, then that is one option. If you would like ClickHouse to create the batches, then you can use the asynchronous INSERT mode described here.
+ Inserting data into ClickHouse in large batches is a best practice. It saves compute cycles and disk I/O, and therefore it saves money. If your use case allows you to batch your inserts external to ClickHouse, then that is one option. If you would like ClickHouse to create the batches, then you can use the asynchronous INSERT mode described here.
  
  Use asynchronous inserts as an alternative to both batching data on the client side and keeping the insert rate at around one insert query per second by enabling the [async_insert](/docs/en/operations/settings/settings.md/#async_insert) setting. This causes ClickHouse to handle the batching on the server side.
  
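As a hedged illustration of enabling this per insert from a client (a sketch, not from the page above: the `events` table is invented, and the `clickhouse-connect` client plus the companion `wait_for_async_insert` setting are assumptions):

```python
import clickhouse_connect  # assumed client library: pip install clickhouse-connect

client = clickhouse_connect.get_client(host="localhost")

# Per-insert settings ask the server to buffer and batch the rows;
# wait_for_async_insert=1 returns only once the buffer is flushed.
client.insert(
    "events",  # hypothetical table with (id UInt64, action String)
    [[1, "login"], [2, "click"]],
    column_names=["id", "action"],
    settings={"async_insert": 1, "wait_for_async_insert": 1},
)
```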
docs/en/cloud/bestpractices/usagelimits.md

Lines changed: 1 addition & 1 deletion
@@ -9,7 +9,7 @@ title: Usage Limits
  ClickHouse is very fast and reliable, but any database has its limits. For example, having too many tables or databases could negatively affect performance. To avoid that, ClickHouse Cloud has guardrails for several types of items.
  
  :::tip
- If you've reached one of those limits, it may mean that you are implementing your use case in an unoptimised way. You can contact our support so we can help you refine your use case to avoid going through the limits or to increase the limits in a guided way.
+ If you've reached one of those limits, it may mean that you are implementing your use case in an unoptimized way. You can contact our support so we can help you refine your use case to avoid hitting the limits, or to increase the limits in a guided way.
  :::
  
  # Tables

docs/en/cloud/changelogs/changelog-24-10.md

Lines changed: 2 additions & 2 deletions
@@ -2,7 +2,7 @@
  slug: /en/changelogs/24.10
  title: v24.10 Changelog for Cloud
  description: Fast release changelog for v24.10
- keywords: [chaneglog, cloud]
+ keywords: [changelog, cloud]
  ---
  
  Relevant changes for ClickHouse Cloud services based on the v24.10 release.
@@ -39,7 +39,7 @@ Relevant changes for ClickHouse Cloud services based on the v24.10 release.
  - Adding `system.projections` table to track available projections. [#68901](https://github.com/ClickHouse/ClickHouse/pull/68901) ([Jordi Villar](https://github.com/jrdi)).
  - Add support for `arrayUnion` function. [#68989](https://github.com/ClickHouse/ClickHouse/pull/68989) ([Peter Nguyen](https://github.com/petern48)).
  - Add new function `arrayZipUnaligned` for Spark compatibility (arrays_zip), which allows unaligned arrays, based on the original `arrayZip`, e.g. `SELECT arrayZipUnaligned([1], [1, 2, 3])`. [#69030](https://github.com/ClickHouse/ClickHouse/pull/69030) ([李扬](https://github.com/taiyang-li)).
- - Support aggreate function `quantileExactWeightedInterpolated`, which is a interpolated version based on quantileExactWeighted. Some people may wonder why we need a new `quantileExactWeightedInterpolated` since we already have `quantileExactInterpolatedWeighted`. The reason is the new one is more accurate than the old one. BTW, it is for spark compatiability in Apache Gluten. [#69619](https://github.com/ClickHouse/ClickHouse/pull/69619) ([李扬](https://github.com/taiyang-li)).
+ - Support aggregate function `quantileExactWeightedInterpolated`, which is an interpolated version based on `quantileExactWeighted`. Some people may wonder why we need a new `quantileExactWeightedInterpolated` since we already have `quantileExactInterpolatedWeighted`. The reason is the new one is more accurate than the old one. BTW, it is for Spark compatibility in Apache Gluten. [#69619](https://github.com/ClickHouse/ClickHouse/pull/69619) ([李扬](https://github.com/taiyang-li)).
  - Support function `arrayElementOrNull`. It returns NULL if the array index is out of range or the map key is not found. [#69646](https://github.com/ClickHouse/ClickHouse/pull/69646) ([李扬](https://github.com/taiyang-li)).
  - Support Dynamic type in most functions by executing them on internal types inside Dynamic. [#69691](https://github.com/ClickHouse/ClickHouse/pull/69691) ([Pavel Kruglov](https://github.com/Avogar)).
  - Adds argument `scale` (default: `true`) to function `arrayAUC`, which allows skipping the normalization step (issue [#69609](https://github.com/ClickHouse/ClickHouse/issues/69609)). [#69717](https://github.com/ClickHouse/ClickHouse/pull/69717) ([gabrielmcg44](https://github.com/gabrielmcg44)).
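For a quick feel of two of the additions above, a hedged sketch (assuming a build based on ClickHouse 24.10 or later, e.g. a sufficiently recent chdb; the values are invented):

```python
import chdb  # assumption: this chdb build bundles ClickHouse >= 24.10

# arrayUnion returns the set union of its array arguments;
# arrayElementOrNull returns NULL, not a default value, for an
# out-of-range index.
print(chdb.query(
    "SELECT arrayUnion([1, 2], [2, 3]) AS u, "
    "arrayElementOrNull([10, 20], 5) AS missing",
    "CSV",
))
```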
