
Commit 7cfcc75

replace dots
1 parent 054823e commit 7cfcc75

24 files changed (+45, −45 lines changed)


docs/cloud/bestpractices/avoidmutations.md (1 addition, 1 deletion)

```diff
@@ -5,7 +5,7 @@ title: 'Avoid Mutations'
 description: 'Page describing why you should avoid mutations, ALTER queries that manipulate table data through deletion or updates'
 ---
 
-Mutations refers to [ALTER](/sql-reference/statements/alter/) queries that manipulate table data through deletion or updates. Most notably they are queries like ALTER TABLE … DELETE, UPDATE, etc. Performing such queries will produce new mutated versions of the data parts. This means that such statements would trigger a rewrite of whole data parts for all data that was inserted before the mutation, translating to a large amount of write requests.
+Mutations refers to [ALTER](/sql-reference/statements/alter/) queries that manipulate table data through deletion or updates. Most notably they are queries like ALTER TABLE ... DELETE, UPDATE, etc. Performing such queries will produce new mutated versions of the data parts. This means that such statements would trigger a rewrite of whole data parts for all data that was inserted before the mutation, translating to a large amount of write requests.
 
 For updates, you can avoid these large amounts of write requests by using specialised table engines like [ReplacingMergeTree](/engines/table-engines/mergetree-family/replacingmergetree.md) or [CollapsingMergeTree](/engines/table-engines/mergetree-family/collapsingmergetree.md) instead of the default MergeTree table engine.
 
```
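To make the warning in this hunk concrete, here is a hedged sketch of both patterns; the table and column names are invented for illustration and are not from the docs page:

```sql
-- A mutation: forces a rewrite of every data part containing matching rows.
ALTER TABLE events DELETE WHERE user_id = 42;

-- The suggested alternative for updates: a ReplacingMergeTree keeps the
-- newest row version per sorting key, so an "update" is just an insert.
CREATE TABLE user_state
(
    user_id UInt64,
    status String,
    updated_at DateTime
)
ENGINE = ReplacingMergeTree(updated_at)
ORDER BY user_id;

-- Re-inserting the key replaces the old version during background merges.
INSERT INTO user_state VALUES (42, 'active', now());
```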

docs/cloud/reference/changelog.md (1 addition, 1 deletion)

```diff
@@ -1043,7 +1043,7 @@ This release brings an API for retrieving cloud endpoints, an advanced scaling c
 - Fixed server-side parameter binding of the NULL value for Nullable types
 
 ### Bug fixes {#bug-fixes-1}
-* Fixed behavior where running `INSERT INTO … SELECT …` from the SQL console incorrectly applied the same row limit as select queries
+* Fixed behavior where running `INSERT INTO ... SELECT ...` from the SQL console incorrectly applied the same row limit as select queries
 
 
 ## March 23, 2023 {#march-23-2023}
```

docs/cloud/reference/cloud-compatibility.md (1 addition, 1 deletion)

```diff
@@ -26,7 +26,7 @@ ClickHouse Cloud provides access to a curated set of capabilities in the open so
 ### DDL syntax {#ddl-syntax}
 For the most part, the DDL syntax of ClickHouse Cloud should match what is available in self-managed installs. A few notable exceptions:
 - Support for `CREATE AS SELECT`, which is currently not available. As a workaround, we suggest using `CREATE ... EMPTY ... AS SELECT` and then inserting into that table (see [this blog](https://clickhouse.com/blog/getting-data-into-clickhouse-part-1) for an example).
-- Some experimental syntax may be disabled, for instance, `ALTER TABLE … MODIFY QUERY` statement.
+- Some experimental syntax may be disabled, for instance, `ALTER TABLE ... MODIFY QUERY` statement.
 - Some introspection functionality may be disabled for security purposes, for example, the `addressToLine` SQL function.
 - Do not use `ON CLUSTER` parameters in ClickHouse Cloud - these are not needed. While these are mostly no-op functions, they can still cause an error if you are trying to use [macros](/operations/server-configuration-parameters/settings#macros). Macros often do not work and are not needed in ClickHouse Cloud.
 
```
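As context for the `CREATE AS SELECT` limitation mentioned in this hunk, the suggested workaround can be sketched as follows; `trips` and `trips_copy` are hypothetical names, and this is an illustration rather than the docs' own example:

```sql
-- Create an empty table with the schema inferred from the SELECT...
CREATE TABLE trips_copy
ENGINE = MergeTree
ORDER BY tuple()
EMPTY AS SELECT * FROM trips;

-- ...then populate it in a separate step.
INSERT INTO trips_copy SELECT * FROM trips;
```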

docs/concepts/olap.md (3 additions, 3 deletions)

```diff
@@ -11,13 +11,13 @@ slug: /concepts/olap
 [OLAP](https://en.wikipedia.org/wiki/Online_analytical_processing) stands for Online Analytical Processing. It is a broad term that can be looked at from two perspectives: technical and business. At the highest level, you can just read these words backward:
 
 Processing
-: Some source data is processed…
+: Some source data is processed...
 
 Analytical
-: …to produce some analytical reports and insights…
+: ...to produce some analytical reports and insights...
 
 Online
-: …in real-time.
+: ...in real-time.
 
 ## OLAP from the Business Perspective {#olap-from-the-business-perspective}
```

docs/concepts/why-clickhouse-is-so-fast.md (1 addition, 1 deletion)

```diff
@@ -103,7 +103,7 @@ If a single node becomes too small to hold the table data, further nodes can be
 
 <iframe width="768" height="432" src="https://www.youtube.com/embed/dccGLSuYWy0?si=rQ-Jp-z5Ik_-Rb8S" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
 
-> **"ClickHouse is a freak system - you guys have 20 versions of a hash table. You guys have all these amazing things where most systems will have one hash table** **…** **ClickHouse has this amazing performance because it has all these specialized components"** [Andy Pavlo, Database Professor at CMU](https://www.youtube.com/watch?v=Vy2t_wZx4Is&t=3579s)
+> **"ClickHouse is a freak system - you guys have 20 versions of a hash table. You guys have all these amazing things where most systems will have one hash table** **...** **ClickHouse has this amazing performance because it has all these specialized components"** [Andy Pavlo, Database Professor at CMU](https://www.youtube.com/watch?v=Vy2t_wZx4Is&t=3579s)
 
 What sets ClickHouse [apart](https://www.youtube.com/watch?v=CAS2otEoerM) is its meticulous attention to low-level optimization. Building a database that simply works is one thing, but engineering it to deliver speed across diverse query types, data structures, distributions, and index configurations is where the "[freak system](https://youtu.be/Vy2t_wZx4Is?si=K7MyzsBBxgmGcuGU&t=3579)" artistry shines.
```

docs/dictionary/index.md (1 addition, 1 deletion)

```diff
@@ -279,7 +279,7 @@ CREATE TABLE posts_with_location
 (
     `Id` UInt32,
     `PostTypeId` Enum8('Question' = 1, 'Answer' = 2, 'Wiki' = 3, 'TagWikiExcerpt' = 4, 'TagWiki' = 5, 'ModeratorNomination' = 6, 'WikiPlaceholder' = 7, 'PrivilegeWiki' = 8),
-    …
+    ...
     `Location` MATERIALIZED dictGet(users_dict, 'Location', OwnerUserId::'UInt64')
 )
 ENGINE = MergeTree
```
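For readers of this hunk, the `dictGet` expression in the `MATERIALIZED` column can also be probed standalone; a minimal hedged sketch, where the key value is made up:

```sql
SELECT dictGet(users_dict, 'Location', toUInt64(1234)) AS location;
```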

docs/faq/general/olap.md (3 additions, 3 deletions)

```diff
@@ -11,13 +11,13 @@ description: 'An explainer on what Online Analytical Processing is'
 [OLAP](https://en.wikipedia.org/wiki/Online_analytical_processing) stands for Online Analytical Processing. It is a broad term that can be looked at from two perspectives: technical and business. But at the very high level, you can just read these words backward:
 
 Processing
-: Some source data is processed…
+: Some source data is processed...
 
 Analytical
-: …to produce some analytical reports and insights…
+: ...to produce some analytical reports and insights...
 
 Online
-: …in real-time.
+: ...in real-time.
 
 ## OLAP from the Business Perspective {#olap-from-the-business-perspective}
```

docs/guides/developer/deduplication.md (2 additions, 2 deletions)

```diff
@@ -103,7 +103,7 @@ FINAL
 The result only has 2 rows, and the last row inserted is the row that gets returned.
 
 :::note
-Using `FINAL` works OK if you have a small amount of data. If you are dealing with a large amount of data, using `FINAL` is probably not the best option. Let's discuss a better option for finding the latest value of a column…
+Using `FINAL` works OK if you have a small amount of data. If you are dealing with a large amount of data, using `FINAL` is probably not the best option. Let's discuss a better option for finding the latest value of a column...
 :::
 
 ### Avoiding FINAL {#avoiding-final}
@@ -164,7 +164,7 @@ Our [Deleting and Updating Data training module](https://learn.clickhouse.com/vi
 
 ## Using CollapsingMergeTree for Updating Columns Frequently {#using-collapsingmergetree-for-updating-columns-frequently}
 
-Updating a column involves deleting an existing row and replacing it with new values. As you have already seen, this type of mutation in ClickHouse happens _eventually_ - during merges. If you have a lot of rows to update, it can actually be more efficient to avoid `ALTER TABLE..UPDATE` and instead just insert the new data alongside the existing data. We could add a column that denotes whether or not the data is stale or new… and there is actually a table engine that already implements this behavior very nicely, especially considering that it deletes the stale data automatically for you. Let's see how it works.
+Updating a column involves deleting an existing row and replacing it with new values. As you have already seen, this type of mutation in ClickHouse happens _eventually_ - during merges. If you have a lot of rows to update, it can actually be more efficient to avoid `ALTER TABLE..UPDATE` and instead just insert the new data alongside the existing data. We could add a column that denotes whether or not the data is stale or new... and there is actually a table engine that already implements this behavior very nicely, especially considering that it deletes the stale data automatically for you. Let's see how it works.
 
 Suppose we track the number of views that a Hacker News comment has using an external system and every few hours, we push the data into ClickHouse. We want the old rows deleted and the new rows to represent the new state of each Hacker News comment. We can use a `CollapsingMergeTree` to implement this behavior.
```
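The `CollapsingMergeTree` approach described in this hunk can be sketched briefly; the table and column names below are hypothetical, not the tutorial's own:

```sql
CREATE TABLE hn_views
(
    comment_id UInt64,
    views UInt32,
    sign Int8  -- 1 marks a state row, -1 cancels a previously inserted row
)
ENGINE = CollapsingMergeTree(sign)
ORDER BY comment_id;

-- Initial state:
INSERT INTO hn_views VALUES (1, 100, 1);

-- Update: cancel the old state and insert the new one in the same batch:
INSERT INTO hn_views VALUES (1, 100, -1), (1, 150, 1);

-- Read the current state without waiting for the collapse to happen:
SELECT comment_id, sum(views * sign) AS views
FROM hn_views
GROUP BY comment_id
HAVING sum(sign) > 0;
```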

docs/guides/developer/ttl.md (1 addition, 1 deletion)

````diff
@@ -203,7 +203,7 @@ FROM system.disks
 └─────────────┴────────────────┴──────────────┴──────────────┘
 ```
 
-3. And…let's verify the volumes:
+3. And...let's verify the volumes:
 
 ```sql
 SELECT
````
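The verification query is cut off in this hunk; as a hedged guess at the kind of query it refers to, volumes can be inspected via the `system.storage_policies` table:

```sql
SELECT policy_name, volume_name, disks
FROM system.storage_policies;
```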

docs/integrations/data-ingestion/data-formats/csv-tsv.md (1 addition, 1 deletion)

````diff
@@ -217,7 +217,7 @@ FORMAT CSVWithNames
 
 ### Saving exported data to a CSV file {#saving-exported-data-to-a-csv-file}
 
-To save exported data to a file, we can use the [INTO…OUTFILE](/sql-reference/statements/select/into-outfile.md) clause:
+To save exported data to a file, we can use the [INTO...OUTFILE](/sql-reference/statements/select/into-outfile.md) clause:
 
 ```sql
 SELECT *
````
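As a small illustration of the clause this hunk renames, a hedged sketch where `sometable` and the output filename are invented:

```sql
SELECT *
FROM sometable
INTO OUTFILE 'export.csv'
FORMAT CSVWithNames
```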
