Commit b56eccd

Merge branch 'main' of https://github.com/ClickHouse/clickhouse-docs into menu-improvement

2 parents: bdb1ff1 + a7d1b45
151 files changed (+5116, -4640 lines)


clickhouseapi.js

Lines changed: 1 addition & 1 deletion

@@ -52,7 +52,7 @@ function generateDocusaurusMarkdown(spec, groupedEndpoints, prefix) {
 
 markdownContent += `| Method | Path |\n`
 markdownContent += `| :----- | :--- |\n`
-markdownContent += `| ${method.toUpperCase()} | ${path} |\n\n`
+markdownContent += `| ${method.toUpperCase()} | \`${path}\` |\n\n`
 
 markdownContent += `### Request\n\n`;
 
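The one-line fix above wraps `${path}` in escaped backticks, so an API path such as `/v1/organizations/{organizationId}` renders as inline code in the generated markdown table instead of plain text. A minimal sketch of the row-building logic after the fix (the helper name and sample path are illustrative, not taken from the actual script):

```javascript
// Illustrative sketch (not the actual script): the path cell is wrapped in
// backticks so it renders as inline code in the generated markdown table.
function endpointRow(method, path) {
  return `| ${method.toUpperCase()} | \`${path}\` |\n\n`;
}

console.log(endpointRow("get", "/v1/organizations/{organizationId}"));
// | GET | `/v1/organizations/{organizationId}` |
```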

copyClickhouseRepoDocs.sh

Lines changed: 2 additions & 1 deletion

@@ -1,7 +1,8 @@
-#! ./bin/bash
+#! /bin/bash
 
 SCRIPT_NAME=$(basename "$0")
 
+rm -rf ClickHouse
 echo "[$SCRIPT_NAME] Start tasks for copying docs from ClickHouse repo"
 
 # Clone ClickHouse repo

docs/en/cloud/reference/changelog.md

Lines changed: 1 addition & 1 deletion

@@ -957,7 +957,7 @@ Adds support for a subset of features in ClickHouse 23.1, for example:
 - New functions, including `age()`, `quantileInterpolatedWeighted()`, `quantilesInterpolatedWeighted()`
 - Ability to use structure from insertion table in `generateRandom` without arguments
 - Improved database creation and rename logic that allows the reuse of previous names
-- See the 23.1 release [webinar slides](https://presentations.clickhouse.com/release_23.1/#cover) and [23.1 release changelog](/docs/en/whats-new/changelog/index.md/#clickhouse-release-231) for more details
+- See the 23.1 release [webinar slides](https://presentations.clickhouse.com/release_23.1/#cover) and [23.1 release changelog](/docs/en/whats-new/changelog/index.md#clickhouse-release-231) for more details
 
 ### Integrations changes
 - [Kafka-Connect](/docs/en/integrations/data-ingestion/kafka/index.md): Added support for Amazon MSK

docs/en/data-compression/compression-modes.md

Lines changed: 1 addition & 1 deletion

@@ -41,7 +41,7 @@ From [facebook benchmarks](https://facebook.github.io/zstd/#benchmarks):
 | mode | byte | Compression mode |
 | compressed_data | binary | Block of compressed data |
 
-![compression block diagram](../native-protocol/images/ch_compression_block.drawio.svg)
+![compression block diagram](./images/ch_compression_block.png)
 
 Header is (raw_size + data_size + mode), raw size consists of len(header + compressed_data).
 

(New binary image file added, 10.6 KB.)
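The size relationship stated in the context line above ("raw size consists of len(header + compressed_data)") can be sketched numerically. This assumes a 9-byte header (4-byte raw_size, 4-byte data_size, 1-byte mode); the helper below is illustrative, not ClickHouse code:

```javascript
// Assumed header layout (9 bytes): raw_size (4) + data_size (4) + mode (1).
const HEADER_BYTES = 4 + 4 + 1;

// raw_size covers the header plus the compressed payload;
// data_size is the length of the data after decompression.
function blockSizes(compressedLength, uncompressedLength) {
  return {
    raw_size: HEADER_BYTES + compressedLength, // len(header + compressed_data)
    data_size: uncompressedLength,
  };
}

console.log(blockSizes(100, 256)); // { raw_size: 109, data_size: 256 }
```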

docs/en/data-modeling/backfilling.md

Lines changed: 1 addition & 1 deletion

@@ -448,7 +448,7 @@ GROUP BY
 
 Here, we create a Null table, `pypi_v2,` to receive the rows that will be used to build our materialized view. Note how we limit the schema to only the columns we need. Our materialized view performs an aggregation over rows inserted into this table (one block at a time), sending the results to our target table, `pypi_downloads_per_day`.
 
-::note
+:::note
 We have used `pypi_downloads_per_day` as our target table here. For additional resiliency, users could create a duplicate table, `pypi_downloads_per_day_v2`, and use this as the target table of the view, as shown in previous examples. On completion of the insert, partitions in `pypi_downloads_per_day_v2` could, in turn, be moved to `pypi_downloads_per_day.` This would allow recovery in the case our insert fails due to memory issues or server interruptions i.e. we just truncate `pypi_downloads_per_day_v2`, tune settings, and retry.
 :::
 

docs/en/guides/developer/understanding-query-execution-with-the-analyzer.md

Lines changed: 2 additions & 4 deletions

@@ -63,12 +63,10 @@ Each node has corresponding children and the overall tree represents the overall
 
 ## Analyzer
 
-<BetaBadge />
-
-ClickHouse currently has two architectures for the Analyzer. You can use the old architecture by setting: `allow_experimental_analyzer=0`. If you want to use the new architecture, you should set `allow_experimental_analyzer=1`. We are going to describe only the new architecture here, given the old one is going to be deprecated once the new analyzer is generally available.
+ClickHouse currently has two architectures for the Analyzer. You can use the old architecture by setting: `enable_analyzer=0`. The new architecture is enabled by default. We are going to describe only the new architecture here, given the old one is going to be deprecated once the new analyzer is generally available.
 
 :::note
-The new analyzer is in Beta. The new architecture should provide us with a better framework to improve ClickHouse's performance. However, given it is a fundamental component of the query processing steps, it also might have a negative impact on some queries. After moving to the new analyzer, you may see performance degradation, queries failing, or queries giving you an unexpected result. You can revert back to the old analyzer by changing the `allow_experimental_analyzer` setting at the query or user level. Please report any issues in GitHub.
+The new architecture should provide us with a better framework to improve ClickHouse's performance. However, given it is a fundamental component of the query processing steps, it also might have a negative impact on some queries and there are [known incompatibilities](/docs/en/operations/analyzer#known-incompatibilities). You can revert back to the old analyzer by changing the `enable_analyzer` setting at the query or user level.
 :::
 
 The analyzer is an important step of the query execution. It takes an AST and transforms it into a query tree. The main benefit of a query tree over an AST is that a lot of the components will be resolved, like the storage for instance. We also know from which table to read, aliases are also resolved, and the tree knows the different data types used. With all these benefits, the analyzer can apply optimizations. The way these optimizations work is via “passes”. Every pass is going to look for different optimizations. You can see all the passes [here](https://github.com/ClickHouse/ClickHouse/blob/76578ebf92af3be917cd2e0e17fea2965716d958/src/Analyzer/QueryTreePassManager.cpp#L249), let’s see it in practice with our previous query:

docs/en/guides/inserting-data.md

Lines changed: 1 addition & 1 deletion

@@ -89,7 +89,7 @@ With asynchronous inserts, data is inserted into a buffer first and then written
 <br />
 
 <img src={require('./images/postgres-inserts.png').default}
-  class="image"
+  className="image"
   alt="NEEDS ALT"
   style={{width: '600px'}}
 />

docs/en/guides/sre/user-management/index.md

Lines changed: 0 additions & 4 deletions

@@ -200,10 +200,6 @@ This article shows the basics of defining SQL users and roles and applying those
 GRANT ALL ON *.* TO clickhouse_admin WITH GRANT OPTION;
 ```
 
-<Content />
-
-
-
 ## ALTER permissions
 
 This article is intended to provide you with a better understanding of how to define permissions, and how permissions work when using `ALTER` statements for privileged users.

docs/en/integrations/data-ingestion/clickpipes/index.md

Lines changed: 3 additions & 3 deletions

@@ -18,7 +18,7 @@ import PostgresSVG from "../../images/logos/postgresql.svg";
 
 ## Introduction
 
-[ClickPipes](https://clickhouse.com/cloud/clickpipes) is a managed integration platform that makes ingesting data from a diverse set of sources as simple as clicking a few buttons. Designed for the most demanding workloads, ClickPipes's robust and scalable architecture ensures consistent performance and reliability. ClickPipes can be used for long-term streaming needs or one-time data loading job.
+[ClickPipes](/docs/en/integrations/clickpipes) is a managed integration platform that makes ingesting data from a diverse set of sources as simple as clicking a few buttons. Designed for the most demanding workloads, ClickPipes's robust and scalable architecture ensures consistent performance and reliability. ClickPipes can be used for long-term streaming needs or one-time data loading job.
 
 ![ClickPipes stack](./images/clickpipes_stack.png)
 
@@ -64,7 +64,7 @@ Steps:
 ![Assign a custom role](./images/cp_custom_role.png)
 
 ## Error reporting
-ClickPipes will create a table next to your destination table with the postfix `<destination_table_name>_clickpipes_error`. This table will contain any errors from the operations of your ClickPipe (network, connectivity, etc.) and also any data that don't conform to the schema. The error table has a [TTL](https://clickhouse.com/docs/en/engines/table-engines/mergetree-family/mergetree#table_engine-mergetree-ttl) of 7 days.
+ClickPipes will create a table next to your destination table with the postfix `<destination_table_name>_clickpipes_error`. This table will contain any errors from the operations of your ClickPipe (network, connectivity, etc.) and also any data that don't conform to the schema. The error table has a [TTL](/docs/en/engines/table-engines/mergetree-family/mergetree#table_engine-mergetree-ttl) of 7 days.
 If ClickPipes cannot connect to a data source or destination after 15min., ClickPipes instance stops and stores an appropriate message in the error table (providing the ClickHouse instance is available).
 
 ## F.A.Q
@@ -74,7 +74,7 @@ If ClickPipes cannot connect to a data source or destination after 15min., Click
 
 - **Does ClickPipes support data transformation?**
 
-  Yes, ClickPipes supports basic data transformation by exposing the DDL creation. You can then apply more advanced transformations to the data as it is loaded into its destination table in a ClickHouse Cloud service leveraging ClickHouse's [materialized views feature](https://clickhouse.com/docs/en/guides/developer/cascading-materialized-views).
+  Yes, ClickPipes supports basic data transformation by exposing the DDL creation. You can then apply more advanced transformations to the data as it is loaded into its destination table in a ClickHouse Cloud service leveraging ClickHouse's [materialized views feature](/docs/en/guides/developer/cascading-materialized-views).
 
 - **Does using ClickPipes incur an additional cost?**
 
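The error-reporting section above describes a fixed naming convention for the side table that collects ingestion errors. As a quick illustration of that convention (the helper is hypothetical, not ClickPipes code):

```javascript
// Hypothetical helper: ClickPipes' error table is the destination table name
// with the `_clickpipes_error` postfix, per the docs section above.
function errorTableName(destinationTable) {
  return `${destinationTable}_clickpipes_error`;
}

console.log(errorTableName("events")); // events_clickpipes_error
```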
