
Commit 3680224 ("borders")
1 parent 045ff1f

7 files changed: +15 additions, -27 deletions


docs/cloud/bestpractices/asyncinserts.md

Lines changed: 3 additions & 3 deletions
@@ -18,7 +18,7 @@ By default, ClickHouse is writing data synchronously.
 Each insert sent to ClickHouse causes ClickHouse to immediately create a part containing the data from the insert.
 This is the default behavior when the async_insert setting is set to its default value of 0:
 
-<Image img={asyncInsert01} size="lg" alt="Asynchronous insert process - default synchronous inserts" background="white"/>
+<Image img={asyncInsert01} size="md" alt="Asynchronous insert process - default synchronous inserts" background="white"/>
 
 By setting async_insert to 1, ClickHouse first stores the incoming inserts into an in-memory buffer before flushing them regularly to disk.
 
@@ -36,9 +36,9 @@ With the [wait_for_async_insert](/operations/settings/settings.md/#wait_for_asyn
 
 The following two diagrams illustrate the two settings for async_insert and wait_for_async_insert:
 
-<Image img={asyncInsert02} size="lg" alt="Asynchronous insert process - async_insert=1, wait_for_async_insert=1" background="white"/>
+<Image img={asyncInsert02} size="md" alt="Asynchronous insert process - async_insert=1, wait_for_async_insert=1" background="white"/>
 
-<Image img={asyncInsert03} size="lg" alt="Asynchronous insert process - async_insert=1, wait_for_async_insert=0" background="white"/>
+<Image img={asyncInsert03} size="md" alt="Asynchronous insert process - async_insert=1, wait_for_async_insert=0" background="white"/>
 
 ### Enabling asynchronous inserts {#enabling-asynchronous-inserts}
 

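The asyncinserts.md hunks above describe the `async_insert` and `wait_for_async_insert` settings; as a minimal sketch of the behavior the diagrams illustrate (the table name is hypothetical, not from the changed docs), the two settings can be applied per insert:

```sql
-- async_insert = 1: buffer the rows server-side instead of creating a
-- part immediately; wait_for_async_insert = 1: block until the buffer is
-- flushed into a part (0 acknowledges as soon as the buffer accepts the
-- data). `events` is a hypothetical table used only for illustration.
INSERT INTO events
SETTINGS async_insert = 1, wait_for_async_insert = 1
VALUES ('2024-01-01 00:00:00', 'page_view');
```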
docs/cloud/bestpractices/partitioningkey.md

Lines changed: 2 additions & 4 deletions
@@ -9,18 +9,16 @@ import Image from '@theme/IdealImage';
 import partitioning01 from '@site/static/images/cloud/bestpractices/partitioning-01.png';
 import partitioning02 from '@site/static/images/cloud/bestpractices/partitioning-02.png';
 
-# Choose a Low Cardinality Partitioning Key
-
 When you send an insert statement (that should contain many rows - see [section above](/optimize/bulk-inserts)) to a table in ClickHouse Cloud, and that
 table is not using a [partitioning key](/engines/table-engines/mergetree-family/custom-partitioning-key.md) then all row data from that insert is written into a new part on storage:
 
-<Image img={partitioning01} size="lg" alt="Insert without partitioning key - one part created" background="white"/>
+<Image img={partitioning01} size="md" alt="Insert without partitioning key - one part created" background="white"/>
 
 However, when you send an insert statement to a table in ClickHouse Cloud, and that table has a partitioning key, then ClickHouse:
 - checks the partitioning key values of the rows contained in the insert
 - creates one new part on storage per distinct partitioning key value
 - places the rows in the corresponding parts by partitioning key value
 
-<Image img={partitioning02} size="lg" alt="Insert with partitioning key - multiple parts created based on partitioning key values" background="white"/>
+<Image img={partitioning02} size="md" alt="Insert with partitioning key - multiple parts created based on partitioning key values" background="white"/>
 
 Therefore, to minimize the number of write requests to the ClickHouse Cloud object storage, use a low cardinality partitioning key or avoid using any partitioning key for your table.

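The partitioningkey.md hunk recommends a low cardinality partitioning key; a hedged DDL sketch of that advice (table and columns are hypothetical, not from the changed docs):

```sql
-- Low cardinality: one partition per month, so a bulk insert spanning a
-- few months creates only a few parts on object storage. Partitioning by
-- a high-cardinality column such as user_id would instead create one part
-- per distinct value per insert, which is what the page warns against.
CREATE TABLE page_views
(
    event_time DateTime,
    user_id    UInt64,
    url        String
)
ENGINE = MergeTree
PARTITION BY toYYYYMM(event_time)
ORDER BY (event_time, user_id);
```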
docs/guides/best-practices/skipping-indexes.md

Lines changed: 2 additions & 2 deletions
@@ -99,7 +99,7 @@ Instead of processing 100 million rows of 800 megabytes, ClickHouse has only rea
 In a more visual form, this is how the 4096 rows with a `my_value` of 125 were read and selected, and how the following rows
 were skipped without reading from disk:
 
-<Image img={simple_skip} size="lg" alt="Simple Skip"/>
+<Image img={simple_skip} size="md" alt="Simple Skip"/>
 
 Users can access detailed information about skip index usage by enabling the trace when executing queries. From
 clickhouse-client, set the `send_logs_level`:
@@ -176,7 +176,7 @@ Skip indexes are not intuitive, especially for users accustomed to secondary row
 
 Consider the following data distribution:
 
-<Image img={bad_skip} size="lg" alt="Bad Skip"/>
+<Image img={bad_skip} size="md" alt="Bad Skip"/>
 
 Assume the primary/order by key is `timestamp`, and there is an index on `visitor_id`. Consider the following query:
 

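The skipping-indexes.md context cuts off at "set the `send_logs_level`:"; the setting it refers to is enabled in clickhouse-client like this:

```sql
-- Stream server-side trace logs back to the client session, including
-- messages about how many granules the skip index dropped for a query.
SET send_logs_level = 'trace';
```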
docs/guides/developer/cascading-materialized-views.md

Lines changed: 3 additions & 13 deletions
@@ -9,17 +9,7 @@ keywords: ['materialized view', 'aggregation']
 
 This example demonstrates how to create a Materialized View, and then how to cascade a second Materialized View on to the first. In this page, you will see how to do it, many of the possibilities, and the limitations. Different use cases can be answered by creating a Materialized view using a second Materialized view as the source.
 
-<div style={{width:'640px', height: '360px'}}>
-<iframe src="//www.youtube.com/embed/QDAJTKZT8y4"
-width="640"
-height="360"
-frameborder="0"
-allow="autoplay;
-fullscreen;
-picture-in-picture"
-allowfullscreen>
-</iframe>
-</div>
+<iframe width="1024" height="576" src="https://www.youtube.com/embed/QDAJTKZT8y4?si=1KqPNHHfaKfxtPat" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
 
 <br />
 
@@ -301,13 +291,13 @@ Create two materialized views pointing to the same `Target` table. You don't nee
 ```sql
 CREATE MATERIALIZED VIEW analytics.daily_impressions_mv
 TO analytics.daily_overview
-AS 
+AS
 SELECT
 toDate(event_time) AS on_date,
 domain_name,
 count() AS impressions,
 0 clicks ---<<<--- if you omit this, it will be the same 0
-FROM 
+FROM
 analytics.impressions
 GROUP BY
 toDate(event_time) AS on_date,

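The second cascading-materialized-views.md hunk pads `clicks` with a literal 0 so that `daily_impressions_mv` fills every column of the shared target; the companion view the hunk header mentions ("two materialized views pointing to the same `Target` table") would mirror it. A hedged sketch, since that view lies outside this diff:

```sql
-- Hypothetical mirror of daily_impressions_mv: pads impressions with 0
-- and counts clicks, writing to the same analytics.daily_overview target.
CREATE MATERIALIZED VIEW analytics.daily_clicks_mv
TO analytics.daily_overview
AS
SELECT
    toDate(event_time) AS on_date,
    domain_name,
    0 impressions,
    count() AS clicks
FROM
    analytics.clicks
GROUP BY
    toDate(event_time) AS on_date,
    domain_name;
```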
docs/integrations/data-ingestion/dbms/dynamodb/index.md

Lines changed: 3 additions & 3 deletions
@@ -32,14 +32,14 @@ Data will be ingested into a `ReplacingMergeTree`. This table engine is commonly
 First, you will want to enable a Kinesis stream on your DynamoDB table to capture changes in real-time. We want to do this before we create the snapshot to avoid missing any data.
 Find the AWS guide located [here](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/kds.html).
 
-<Image img={dynamodb_kinesis_stream} size="lg" alt="DynamoDB Kinesis Stream" />
+<Image img={dynamodb_kinesis_stream} size="lg" alt="DynamoDB Kinesis Stream" border/>
 
 ## 2. Create the snapshot {#2-create-the-snapshot}
 
 Next, we will create a snapshot of the DynamoDB table. This can be achieved through an AWS export to S3. Find the AWS guide located [here](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/S3DataExport.HowItWorks.html).
 **You will want to do a "Full export" in the DynamoDB JSON format.**
 
-<Image img={dynamodb_s3_export} size="lg" alt="DynamoDB S3 Export"/>
+<Image img={dynamodb_s3_export} size="md" alt="DynamoDB S3 Export" border/>
 
 ## 3. Load the snapshot into ClickHouse {#3-load-the-snapshot-into-clickhouse}
 
@@ -129,7 +129,7 @@ Now we can set up the Kinesis ClickPipe to capture real-time changes from the Ki
 - `ApproximateCreationDateTime`: `version`
 - Map other fields to the appropriate destination columns as shown below
 
-<Image img={dynamodb_map_columns} size="lg" alt="DynamoDB Map Columns"/>
+<Image img={dynamodb_map_columns} size="md" alt="DynamoDB Map Columns" border/>
 
 ## 5. Cleanup (optional) {#5-cleanup-optional}
 

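The dynamodb/index.md hunks say data lands in a `ReplacingMergeTree` with `ApproximateCreationDateTime` mapped to a `version` column; a hedged sketch of such a destination table (name and key columns are hypothetical, not from the changed docs):

```sql
-- ReplacingMergeTree deduplicates rows sharing the same ORDER BY key
-- during merges, keeping the row with the highest `version` value; here
-- the Kinesis ApproximateCreationDateTime, so the latest change wins.
CREATE TABLE dynamodb_items
(
    id      String,
    version DateTime64(3),
    payload String
)
ENGINE = ReplacingMergeTree(version)
ORDER BY id;
```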
docs/managing-data/core-concepts/academic_overview.mdx

Lines changed: 1 addition & 1 deletion
@@ -29,7 +29,7 @@ export function Anchor(props) {
 
 This is the web version of our [VLDB 2024 scientific paper](https://www.vldb.org/pvldb/vol17/p3731-schulze.pdf). We also [blogged](https://clickhouse.com/blog/first-clickhouse-research-paper-vldb-lightning-fast-analytics-for-everyone) about its background and journey, and recommend watching the VLDB 2024 presentation by ClickHouse CTO and creator, Alexey Milovidov:
 
-<iframe width="768" height="432" src="https://www.youtube.com/embed/7QXKBKDOkJE?si=5uFerjqPSXQWqDkF" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
+<iframe width="1024" height="576" src="https://www.youtube.com/embed/7QXKBKDOkJE?si=5uFerjqPSXQWqDkF" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
 
 ## ABSTRACT {#abstract}

docs/managing-data/core-concepts/shards.md

Lines changed: 1 addition & 1 deletion
@@ -114,4 +114,4 @@ For more details beyond this high-level introduction to table shards and replica
 
 We also highly recommend this tutorial video for a deeper dive into ClickHouse shards and replicas:
 
-<iframe width="768" height="432" src="https://www.youtube.com/embed/vBjCJtw_Ei0?si=WqopTrnti6usCMRs" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
+<iframe width="1024" height="576" src="https://www.youtube.com/embed/vBjCJtw_Ei0?si=WqopTrnti6usCMRs" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
