Commit 93ab1a4

Merge pull request #3104 from Blargian/tag_knowledgebase_articles
Add script to check for KB tags and add tags to all KB articles
2 parents 8f9ca19 + 305d6e3

File tree

118 files changed: +896 additions, −394 deletions


.github/workflows/check-build.yml

Lines changed: 23 additions & 3 deletions

@@ -31,10 +31,30 @@ jobs:
         continue-on-error: true
         id: spellcheck

-      # Step 4: Fail the build if the script returns exit code 1
+      # Step 4: Setup Python and dependencies for KB checker
+      - name: Set up Python
+        uses: actions/setup-python@v3
+        with:
+          python-version: '3.x'
+
+      # Step 5: Install Python dependencies
+      - name: Install dependencies
+        run: |
+          python -m pip install --upgrade pip
+          pip install -r 'scripts/knowledgebase-checker/requirements.txt'
+
+      # Step 5: Run knowledgebase article checker
+      - name: Check KB
+        run: |
+          ./scripts/knowledgebase-checker/knowledgebase_article_checker.py --kb-dir="knowledgebase"
+        continue-on-error: true
+        id: kbcheck
+
+      # Step 6: Fail the build if any script returns exit code 1
       - name: Check exit code
         run: |
-          if [ ${{ steps.spellcheck.outcome }} == 'failure' ]; then
-            echo "Spellcheck failed. See the logs for details."
+          if [[ "${{ steps.spellcheck.outcome }}" == "failure" ]] || [[ "${{ steps.kbcheck.outcome }}" == "failure" ]]; then
+            echo "Style check failed. See the logs for details."
             exit 1
           fi
+
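The workflow above calls `scripts/knowledgebase-checker/knowledgebase_article_checker.py`, whose source is not part of this hunk. As a rough sketch of what such a checker plausibly does, here is a minimal Python version that flags knowledgebase articles whose YAML front matter lacks a `tags` key; the function names and the front-matter heuristic are assumptions for illustration, not the script actually merged.

```python
import os
import re


def has_tags(text: str) -> bool:
    """Return True if the article's leading YAML front matter has a tags key.

    Assumption: front matter is delimited by `---` lines at the top of
    the file, as in the articles changed by this commit.
    """
    match = re.match(r"^---\n(.*?)\n---", text, re.DOTALL)
    if not match:
        return False
    return any(line.startswith("tags:") for line in match.group(1).splitlines())


def check_kb_dir(kb_dir: str) -> list:
    """Collect paths of .md/.mdx articles that are missing tags."""
    missing = []
    for root, _dirs, files in os.walk(kb_dir):
        for name in files:
            if name.endswith((".md", ".mdx")):
                path = os.path.join(root, name)
                with open(path, encoding="utf-8") as f:
                    if not has_tags(f.read()):
                        missing.append(path)
    return missing
```

A CI wrapper would print the offending paths and exit with status 1 when the list is non-empty, which is what trips the `Check exit code` step via `steps.kbcheck.outcome`.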

docusaurus.config.js

Lines changed: 1 addition & 1 deletion

@@ -2395,8 +2395,8 @@ const config = {
     chHeader
   ],
   customFields: {
+    blogSidebarLink: '/docs/knowledgebase',
     galaxyApiEndpoint: process.env.NEXT_PUBLIC_GALAXY_API_ENDPOINT || 'http://localhost:3000',
-
   secondaryNavItems: [
     {
       type: 'docSidebar',

knowledgebase/Insert_select_settings_tuning.md renamed to knowledgebase/Insert_select_settings_tuning.mdx

Lines changed: 4 additions & 4 deletions

@@ -1,19 +1,19 @@
 ---
-title: TOO MANY PARTS error during an INSERT...SELECT
+title: How do I solve TOO MANY PARTS error during an INSERT...SELECT?
 description: "Resolve the TOO_MANY_PARTS error in ClickHouse during an `INSERT...SELECT` by tuning expert-level settings for larger blocks and increasing partition thresholds."
 date: 2023-07-21
+tags: ['Settings', 'Errors and Exceptions']
 ---

-# TOO MANY PARTS error during an INSERT...SELECT
+{frontMatter.description}
+{/* truncate */}

 ## Question

 When executing a `INSERT...SELECT` statement, I am getting too many parts (TOO_MANY_PARTS) error.

 How can I solve this?

-<!-- truncate -->
-
 ## Answer

 Below are some of the settings to tune to avoid this error, this is expert level tuning of ClickHouse and these values should be set only after understanding the specifications of the ClickHouse cloud service or on-prem cluster where these will be used, so do not take these values as "one size fits all".

knowledgebase/ODBC-authentication-failed-error-using-PowerBI-CH-connector.md renamed to knowledgebase/ODBC-authentication-failed-error-using-PowerBI-CH-connector.mdx

Lines changed: 6 additions & 7 deletions

@@ -2,16 +2,17 @@
 title: ODBC authentication failed error when using the Power BI ClickHouse connector
 description: "ODBC authentication failed error when using the Power BI ClickHouse connector"
 date: 2024-07-10
+tags: ['Native Clients and Interfaces', 'Errors and Exceptions']
+keywords: ['ODBC', 'Power BI Connector', 'Authentication Failed']
 ---

-# ODBC authentication failed error when using the Power BI ClickHouse connector
+{frontMatter.description}
+{/* truncate */}

-### Question
+## Question

 When trying to connect from PowerBI to ClickHouse using the connector, you receive a authentication error.

-<!-- truncate -->
-
 This error usually looks like the following:

 ```
@@ -26,9 +27,7 @@ ClickHouse is installed.
 ```
 ![Power BI Error](./images/powerbi_odbc_authentication_error.png)

-
-
-### Answer
+## Answer

 Check the password being used to see if the password contains a tilde `~`.

knowledgebase/about-quotas-and-query-complexity.md renamed to knowledgebase/about-quotas-and-query-complexity.mdx

Lines changed: 7 additions & 3 deletions

@@ -1,13 +1,17 @@
 ---
 date: 2023-10-25
 title: About Quotas and Query complexity
+tags: ['Managing Cloud']
+keywords: ['Quotas', 'Query Complexity']
+description: 'Quotas and Query Complexity are powerful ways to limit and restrict what users can do in ClickHouse. This KB article shows examples on how to apply these two different approaches.'
 ---

-# About Quotas and Query complexity
+{frontMatter.description}
+{/* truncate */}

-[Quotas](https://clickhouse.com/docs/en/operations/quotas) and [query complexity](https://clickhouse.com/docs/en/operations/settings/query-complexity) are powerful ways to limit and restrict what users can do in ClickHouse.
+## About Quotas and Query complexity

-<!-- truncate -->
+[Quotas](https://clickhouse.com/docs/en/operations/quotas) and [query complexity](https://clickhouse.com/docs/en/operations/settings/query-complexity) are powerful ways to limit and restrict what users can do in ClickHouse.

 Quotas do apply restrictions within the context of a time interval, while query complexity applies regardless of time intervals.
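The context line above draws the key distinction: quotas cap activity within a time interval, while query-complexity limits are evaluated per query with no time dimension. A toy Python sketch of the two limiting styles; the class and parameter names here are illustrative only, not ClickHouse APIs.

```python
import time


class IntervalQuota:
    """Toy quota: caps total queries within a rolling interval,
    mirroring how a quota applies per interval (e.g. per hour)."""

    def __init__(self, max_queries: int, interval_seconds: float):
        self.max_queries = max_queries
        self.interval = interval_seconds
        self.window_start = time.monotonic()
        self.count = 0

    def allow(self) -> bool:
        now = time.monotonic()
        if now - self.window_start >= self.interval:
            # A new interval starts: the counter resets.
            self.window_start = now
            self.count = 0
        self.count += 1
        return self.count <= self.max_queries


def within_complexity(rows_to_read: int, max_rows_to_read: int) -> bool:
    """Toy per-query complexity check: evaluated for every single
    query, independent of any time interval."""
    return rows_to_read <= max_rows_to_read
```

The contrast is the point: `IntervalQuota` rejects the N+1th query until the window rolls over, whereas `within_complexity` judges each query on its own, forever.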

knowledgebase/add-column.md renamed to knowledgebase/add-column.mdx

Lines changed: 6 additions & 3 deletions

@@ -1,12 +1,15 @@
 ---
 title: Adding a column to a table
-description: Adding a new column to a table in ClickHouse
+description: In this guide, we'll learn how to add a column to an existing table.
 date: 2024-12-18
+tags: ['Data Modelling']
+keywords: ['Add Column']
 ---

-In this guide, we'll learn how to add a column to an existing table.
+{frontMatter.description}
+{/* truncate */}

-<!-- truncate -->
+## Adding a Column to a Table

 We'll be using clickhouse-local:

knowledgebase/alter-user-settings-exception.md renamed to knowledgebase/alter-user-settings-exception.mdx

Lines changed: 6 additions & 3 deletions

@@ -2,13 +2,16 @@
 title: Alter User Settings Exception
 description: Handing the an exception thrown when altering user settings
 date: 2023-08-26
+tags: ['Settings', 'Errors and Exceptions']
+keywords: ['Exception', 'User Settings']
 ---

-# DB::Exception: Cannot update user `default` in users.xml because this storage is readonly. (ACCESS_STORAGE_READONLY)
+{frontMatter.description}
+{/* truncate */}

-When you try to alter a user's settings, you may encounter the above exception.
+## DB::Exception: Cannot update user `default` in users.xml because this storage is readonly. (ACCESS_STORAGE_READONLY)

-<!-- truncate -->
+When you try to alter a user's settings, you may encounter the above exception.

 Here are a few options to troubleshoot this error:

knowledgebase/are_materialized_views_inserted_asynchronously.md renamed to knowledgebase/are_materialized_views_inserted_asynchronously.mdx

Lines changed: 7 additions & 4 deletions

@@ -1,14 +1,17 @@
 ---
 title: Are Materialized Views inserted synchronously?
-description: Insert behavior of materialized views
+description: This KB article explores whether Materialized Views are inserted synchronously
 date: 2023-03-01
+tags: ['Data Modelling']
+keywords: ['Materialized View']
 ---

-# Are Materialized Views inserted synchronously?
+{frontMatter.description}
+{/* truncate */}

-**Question:** When a source table has new rows inserted into it, those new rows are also sent to all of the materialized views of that source table. Are inserts into Materialized Views performed synchronously, meaning that once the insert is acknowledged successfully from the server to the client, it means that all Materialized Views have been fully updated and available for queries?
+## Are Materialized Views inserted synchronously?

-<!-- truncate -->
+**Question:** When a source table has new rows inserted into it, those new rows are also sent to all of the materialized views of that source table. Are inserts into Materialized Views performed synchronously, meaning that once the insert is acknowledged successfully from the server to the client, it means that all Materialized Views have been fully updated and available for queries?

 **Answer:**

knowledgebase/async_vs_optimize_read_in_order.md renamed to knowledgebase/async_vs_optimize_read_in_order.mdx

Lines changed: 7 additions & 4 deletions

@@ -1,14 +1,17 @@
 ---
 title: Synchronous data reading
-description: "The new setting allow_asynchronous_read_from_io_pool_for_merge_tree allows the number of reading threads (streams) to be higher than the number of threads in the rest of the query execution pipeline."
+description: "The new setting `allow_asynchronous_read_from_io_pool_for_merge_tree` allows the number of reading threads (streams) to be higher than the number of threads in the rest of the query execution pipeline."
 date: 2023-03-01
+tags: ['Settings', 'Performance and Optimizations']
+keywords: ['Synchronous', 'Asynchronous', 'Data Reading']
 ---

-# Synchronous data reading
+{frontMatter.description}
+{/* truncate */}

-The new setting allow_asynchronous_read_from_io_pool_for_merge_tree allows the number of reading threads (streams) to be higher than the number of threads in the rest of the query execution pipeline.
+## Synchronous data reading

-<!-- truncate -->
+The new setting allow_asynchronous_read_from_io_pool_for_merge_tree allows the number of reading threads (streams) to be higher than the number of threads in the rest of the query execution pipeline.

 Normally the [max_threads](https://clickhouse.com/docs/en/operations/settings/settings/#settings-max_threads) setting [controls](https://clickhouse.com/company/events/query-performance-introspection) the number of parallel reading threads and parallel query processing threads:

knowledgebase/aws-privatelink-setup-for-clickpipes.md renamed to knowledgebase/aws-privatelink-setup-for-clickpipes.mdx

Lines changed: 6 additions & 3 deletions

@@ -2,13 +2,16 @@
 title: AWS PrivateLink setup to expose private RDS for ClickPipes
 description: Setup steps to expose a private RDS via AWS PrivateLink to ClickPipes.
 date: 2024-11-27
+tags: ['Security and Authentication', 'Managing Cloud']
+keywords: ['AWS PrivateLink', 'Private RDS', 'ClickPipes']
 ---

-# AWS PrivateLink setup to expose private RDS for ClickPipes
+{frontMatter.description}
+{/* truncate */}

-Setup steps to expose a private RDS via AWS PrivateLink to ClickPipes.
+## AWS PrivateLink setup to expose private RDS for ClickPipes

-<!-- truncate -->
+Setup steps to expose a private RDS via AWS PrivateLink to ClickPipes.

 ## Requirements
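Every article diff in this commit applies the same mechanical front-matter change: add a `tags:` (and often `keywords:`) line inside the `---`-delimited YAML block. A sketch of a bulk-tagging helper one might write for such a change follows; the function name and quoting style are assumptions for illustration, not the tooling actually used in this PR.

```python
def add_tags(article: str, tags: list) -> str:
    """Insert a tags: line before the closing front-matter delimiter.

    Returns the article unchanged if it has no leading front matter
    or already carries a tags key, so the helper is safe to re-run.
    """
    lines = article.splitlines(keepends=True)
    if not lines or lines[0].strip() != "---":
        return article  # no front matter to amend
    for i, line in enumerate(lines[1:], start=1):
        if line.startswith("tags:"):
            return article  # already tagged
        if line.strip() == "---":  # closing delimiter found
            quoted = ", ".join(f"'{t}'" for t in tags)
            return "".join(lines[:i]) + f"tags: [{quoted}]\n" + "".join(lines[i:])
    return article  # unterminated front matter: leave untouched
```

Making the helper idempotent matters for a change touching 100+ files: a second pass over an already-tagged article is a no-op rather than a duplicate key.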
