
Commit d642444

Remove related content
1 parent fb83b7e commit d642444

File tree: 14 files changed, +13 additions, -72 deletions


docs/cloud/reference/shared-merge-tree.md

Lines changed: 0 additions & 4 deletions
@@ -119,7 +119,3 @@ Most of the time, you should not be using `select_sequential_consistency` or `SY
 2. If you write to one replica and read from another, you can use `SYSTEM SYNC REPLICA LIGHTWEIGHT` to force the replica to fetch the metadata from ClickHouse-Keeper.
 
 3. Use `select_sequential_consistency` as a setting as part of your query.
-
-## Related Content {#related-content}
-
-- [ClickHouse Cloud boosts performance with SharedMergeTree and Lightweight Updates](https://clickhouse.com/blog/clickhouse-cloud-boosts-performance-with-sharedmergetree-and-lightweight-updates)
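The two options kept in this hunk map onto concrete statements. A minimal sketch, assuming a hypothetical replicated table named `events` (the table name is a placeholder; both statements are standard ClickHouse syntax):

```sql
-- Option 2: force this replica to fetch up-to-date metadata from Keeper.
SYSTEM SYNC REPLICA events LIGHTWEIGHT;

-- Option 3: request sequential consistency for a single query.
SELECT count()
FROM events
SETTINGS select_sequential_consistency = 1;
```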

docs/faq/use-cases/time-series.md

Lines changed: 0 additions & 4 deletions
@@ -17,7 +17,3 @@ First of all, there are **[specialized codecs](../../sql-reference/statements/cr
 Second, time-series queries often hit only recent data, like one day or one week old. It makes sense to use servers that have both fast NVMe/SSD drives and high-capacity HDD drives. The ClickHouse [TTL](/engines/table-engines/mergetree-family/mergetree#table_engine-mergetree-ttl) feature lets you keep fresh, hot data on fast drives and gradually move it to slower drives as it ages. Rollup or removal of even older data is also possible if your requirements demand it.
 
 Even though it goes against the ClickHouse philosophy of storing and processing raw data, you can use [materialized views](../../sql-reference/statements/create/view.md) to meet even tighter latency or cost requirements.
-
-## Related Content {#related-content}
-
-- Blog: [Working with time series data in ClickHouse](https://clickhouse.com/blog/working-with-time-series-data-and-functions-ClickHouse)
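The codec and TTL advice in this hunk combine naturally in one table definition. A minimal sketch, assuming a storage policy with a 'cold' volume is configured; the table, columns, and policy name are hypothetical:

```sql
CREATE TABLE sensor_readings
(
    ts    DateTime CODEC(DoubleDelta, LZ4),  -- time-series-friendly codecs
    value Float64  CODEC(Gorilla, LZ4)
)
ENGINE = MergeTree
ORDER BY ts
TTL ts + INTERVAL 7 DAY TO VOLUME 'cold',    -- age data onto slower disks
    ts + INTERVAL 1 YEAR DELETE              -- drop very old data
SETTINGS storage_policy = 'hot_and_cold';    -- hypothetical policy name
```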

docs/guides/best-practices/sparse-primary-indexes.md

Lines changed: 1 addition & 9 deletions
@@ -4,6 +4,7 @@ sidebar_position: 1
 description: 'In this guide we are going to do a deep dive into ClickHouse indexing.'
 title: 'A Practical Introduction to Primary Indexes in ClickHouse'
 slug: /guides/best-practices/sparse-primary-indexes
+show_related_blogs: true
 ---
 
 import sparsePrimaryIndexes01 from '@site/static/images/guides/best-practices/sparse-primary-indexes-01.png';
@@ -156,10 +157,6 @@ ClickHouse client's result output indicates that ClickHouse executed a full tabl
 
 To make this (way) more efficient and (much) faster, we need to use a table with an appropriate primary key. This will allow ClickHouse to automatically (based on the primary key's column(s)) create a sparse primary index, which can then be used to significantly speed up the execution of our example query.
 
-### Related content {#related-content}
-- Blog: [Super charging your ClickHouse queries](https://clickhouse.com/blog/clickhouse-faster-queries-with-projections-and-primary-indexes)
-
-
 ## ClickHouse Index Design {#clickhouse-index-design}
 
 ### An index design for massive data scales {#an-index-design-for-massive-data-scales}
@@ -1473,11 +1470,6 @@ Therefore the `cl` values are most likely in random order and therefore have a b
 
 For both the efficient filtering on secondary key columns in queries and the compression ratio of a table's column data files, it is beneficial to order the columns in a primary key by their cardinality in ascending order.
 
-
-### Related content {#related-content-1}
-- Blog: [Super charging your ClickHouse queries](https://clickhouse.com/blog/clickhouse-faster-queries-with-projections-and-primary-indexes)
-
-
 ## Identifying single rows efficiently {#identifying-single-rows-efficiently}
 
 Although in general it is [not](/knowledgebase/key-value) the best use case for ClickHouse,
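The ascending-cardinality rule in the last hunk is easiest to see in DDL. A hedged sketch with invented columns, where `cl` has few distinct values and `h` has many:

```sql
CREATE TABLE example
(
    cl  UInt8,     -- low cardinality (few distinct values)
    h   UInt32,    -- high cardinality (e.g. a hash)
    val String
)
ENGINE = MergeTree
ORDER BY (cl, h);  -- low-cardinality column first, per the guideline above
```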

docs/guides/developer/ttl.md

Lines changed: 1 addition & 5 deletions
@@ -5,6 +5,7 @@ sidebar_position: 2
 keywords: ['ttl', 'time to live', 'clickhouse', 'old', 'data']
 description: 'TTL (time-to-live) refers to the capability of having rows or columns moved, deleted, or rolled up after a certain interval of time has passed.'
 title: 'Manage Data with TTL (Time-to-live)'
+show_related_blogs: true
 ---
 
 import CloudNotSupportedBadge from '@theme/badges/CloudNotSupportedBadge';
@@ -258,8 +259,3 @@ The response will look like:
 │ all_2_2_0 │ hot_disk │
 └─────────────┴───────────┘
 ```
-
-
-## Related Content {#related-content}
-
-- Blog & Webinar: [Using TTL to Manage Data Lifecycles in ClickHouse](https://clickhouse.com/blog/using-ttl-to-manage-data-lifecycles-in-clickhouse)
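The part/disk table kept by this hunk is the kind of output you get from inspecting `system.parts`. A sketch of one way to check where TTL moves placed each part; the table name `my_table` is a placeholder:

```sql
-- List each active part and the disk it currently lives on.
SELECT name, disk_name
FROM system.parts
WHERE table = 'my_table' AND active = 1;
```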

docs/integrations/data-ingestion/data-formats/intro.md

Lines changed: 1 addition & 5 deletions
@@ -5,6 +5,7 @@ sidebar_position: 1
 keywords: ['clickhouse', 'CSV', 'TSV', 'Parquet', 'clickhouse-client', 'clickhouse-local']
 title: 'Importing from various data formats to ClickHouse'
 description: 'Page describing how to import various data formats into ClickHouse'
+show_related_blogs: true
 ---
 
 # Importing from various data formats to ClickHouse
@@ -32,8 +33,3 @@ Handle common Apache formats such as Parquet and Arrow.
 Need a SQL dump to import into MySQL or PostgreSQL? Look no further.
 
 If you are looking to connect a BI tool like Grafana, Tableau, or others, check out the [Visualize category](../../data-visualization/index.md) of the docs.
-
-
-## Related Content {#related-content}
-
-- Blog: [An Introduction to Data Formats in ClickHouse](https://clickhouse.com/blog/data-formats-clickhouse-csv-tsv-parquet-native)
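As a concrete instance of the format handling this page introduces, a hedged sketch using standard table-function and INFILE syntax; the file and table names are placeholders:

```sql
-- Peek at a local Parquet file with the file() table function.
SELECT * FROM file('data.parquet', Parquet) LIMIT 5;

-- Load a CSV into an existing table from within clickhouse-client.
INSERT INTO my_table FROM INFILE 'data.csv' FORMAT CSV;
```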

docs/integrations/data-ingestion/dbms/postgresql/connecting-to-postgresql.md

Lines changed: 1 addition & 5 deletions
@@ -3,6 +3,7 @@ slug: /integrations/postgresql/connecting-to-postgresql
 title: 'Connecting to PostgreSQL'
 keywords: ['clickhouse', 'postgres', 'postgresql', 'connect', 'integrate', 'table', 'engine']
 description: 'Page describing the various ways to connect PostgreSQL to ClickHouse'
+show_related_blogs: true
 ---
 
 import CloudNotSupportedBadge from '@theme/badges/CloudNotSupportedBadge';
@@ -341,8 +342,3 @@ This integration guide focused on a simple example on how to replicate a databas
 :::info
 For more advanced options and features, please see the [reference documentation](/engines/database-engines/materialized-postgresql).
 :::
-
-
-## Related content {#related-content}
-- Blog: [ClickHouse and PostgreSQL - a match made in data heaven - part 1](https://clickhouse.com/blog/migrating-data-between-clickhouse-postgres)
-- Blog: [ClickHouse and PostgreSQL - a Match Made in Data Heaven - part 2](https://clickhouse.com/blog/migrating-data-between-clickhouse-postgres-part-2)
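Alongside the MaterializedPostgreSQL database engine this hunk closes on, a quick sketch of the PostgreSQL table engine route; host, database, table, and credentials are all placeholders:

```sql
-- Expose a PostgreSQL table inside ClickHouse via the PostgreSQL engine.
CREATE TABLE pg_users
(
    id   UInt32,
    name String
)
ENGINE = PostgreSQL('postgres-host:5432', 'mydb', 'users', 'pguser', 'secret');
```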

docs/integrations/data-ingestion/etl-tools/dbt/index.md

Lines changed: 0 additions & 4 deletions
@@ -1101,7 +1101,3 @@ Additional configuration for the plugin is described [here](https://github.com/s
 ## Fivetran {#fivetran}
 
 The `dbt-clickhouse` connector is also available for use in [Fivetran transformations](https://fivetran.com/docs/transformations/dbt), allowing seamless integration and transformation capabilities directly within the Fivetran platform using `dbt`.
-
-## Related Content {#related-content}
-
-- Blog & Webinar: [ClickHouse and dbt - A Gift from the Community](https://clickhouse.com/blog/clickhouse-dbt-project-introduction-and-webinar)

docs/integrations/data-ingestion/etl-tools/vector-to-clickhouse.md

Lines changed: 1 addition & 6 deletions
@@ -4,6 +4,7 @@ sidebar_position: 220
 slug: /integrations/vector
 description: 'How to tail a log file into ClickHouse using Vector'
 title: 'Integrating Vector with ClickHouse'
+show_related_blogs: true
 ---
 
 import Image from '@theme/IdealImage';
@@ -185,9 +186,3 @@ Having the logs in ClickHouse is great, but storing each event as a single strin
 
 
 **Summary:** By using Vector, which only required a simple install and quick configuration, we can send logs from an Nginx server to a table in ClickHouse. By using a clever materialized view, we can parse those logs into columns for easier analytics.
-
-## Related Content {#related-content}
-
-- Blog: [Building an Observability Solution with ClickHouse in 2023 - Part 1 - Logs](https://clickhouse.com/blog/storing-log-data-in-clickhouse-fluent-bit-vector-open-telemetry)
-- Blog: [Sending Nginx logs to ClickHouse with Fluent Bit](https://clickhouse.com/blog/nginx-logs-to-clickhouse-fluent-bit)
-- Blog: [Sending Kubernetes logs To ClickHouse with Fluent Bit](https://clickhouse.com/blog/kubernetes-logs-to-clickhouse-fluent-bit)
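The "clever materialized view" named in the summary refers to parsing raw log lines into columns at insert time. A hedged sketch of that idea, with all table and column names invented:

```sql
-- Parse each raw Nginx line into columns as it arrives in nginx_raw,
-- writing the result into a hypothetical target table nginx_parsed.
CREATE MATERIALIZED VIEW nginx_parsed_mv TO nginx_parsed AS
SELECT
    extract(message, '^(\\S+)') AS remote_addr,                                -- client IP
    parseDateTimeBestEffortOrNull(extract(message, '\\[([^\\]]+)\\]')) AS ts,  -- request time
    message AS raw
FROM nginx_raw;
```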

docs/integrations/data-ingestion/insert-local-files.md

Lines changed: 1 addition & 5 deletions
@@ -4,6 +4,7 @@ sidebar_position: 2
 title: 'Insert Local Files'
 slug: /integrations/data-ingestion/insert-local-files
 description: 'Learn about Insert Local Files'
+show_related_blogs: true
 ---
 
 # Insert Local Files
@@ -114,8 +115,3 @@ cat comments.tsv | clickhouse-client \
 ```
 
 Visit the [docs page on `clickhouse-client`](/interfaces/cli) for details on how to install `clickhouse-client` on your local operating system.
-
-## Related Content {#related-content}
-
-- Blog: [Getting Data Into ClickHouse - Part 1](https://clickhouse.com/blog/getting-data-into-clickhouse-part-1)
-- Blog: [Exploring massive, real-world data sets: 100+ Years of Weather Records in ClickHouse](https://clickhouse.com/blog/real-world-data-noaa-climate-data)
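The hunk above ends inside a `cat comments.tsv | clickhouse-client` pipeline. An equivalent, hedged alternative from inside clickhouse-client uses INFILE; the target table name is a placeholder:

```sql
-- Client-side load of the same local TSV file.
INSERT INTO comments
FROM INFILE 'comments.tsv'
FORMAT TabSeparated;
```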

docs/integrations/data-ingestion/redshift/index.md

Lines changed: 0 additions & 2 deletions
@@ -31,8 +31,6 @@ import Image from '@theme/IdealImage';
 </iframe>
 </div>
 
-- Blog: [Optimizing Analytical Workloads: Comparing Redshift vs ClickHouse](https://clickhouse.com/blog/redshift-vs-clickhouse-comparison)
-
 ## Introduction {#introduction}
 
 [Amazon Redshift](https://aws.amazon.com/redshift/) is a popular cloud data warehousing solution that is part of the Amazon Web Services offerings. This guide presents different approaches to migrating data from a Redshift instance to ClickHouse. We will cover three options:
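The three options themselves are not shown in this hunk, so as an assumption only: one common Redshift-to-ClickHouse path is UNLOAD to S3 followed by a read with ClickHouse's s3 table function. A hedged sketch; bucket, credentials, and table are placeholders:

```sql
-- Pull Parquet files that Redshift UNLOADed to S3 into a ClickHouse table.
INSERT INTO target_table
SELECT *
FROM s3(
    'https://my-bucket.s3.amazonaws.com/unload/*.parquet',
    'AWS_KEY_ID', 'AWS_SECRET',
    Parquet
);
```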
