Commit 6df5fa3 ("fix links")

1 parent 29bec13


8 files changed (+11, -11 lines changed)


docs/en/integrations/data-ingestion/clickpipes/index.md

Lines changed: 3 additions & 3 deletions

````diff
@@ -18,7 +18,7 @@ import PostgresSVG from "../../images/logos/postgresql.svg";
 
 ## Introduction
 
-[ClickPipes](https://clickhouse.com/cloud/clickpipes) is a managed integration platform that makes ingesting data from a diverse set of sources as simple as clicking a few buttons. Designed for the most demanding workloads, ClickPipes's robust and scalable architecture ensures consistent performance and reliability. ClickPipes can be used for long-term streaming needs or one-time data loading job.
+[ClickPipes](/docs/en/integrations/clickpipes) is a managed integration platform that makes ingesting data from a diverse set of sources as simple as clicking a few buttons. Designed for the most demanding workloads, ClickPipes's robust and scalable architecture ensures consistent performance and reliability. ClickPipes can be used for long-term streaming needs or one-time data loading job.
 
 ![ClickPipes stack](./images/clickpipes_stack.png)
 
@@ -64,7 +64,7 @@ Steps:
 ![Assign a custom role](./images/cp_custom_role.png)
 
 ## Error reporting
-ClickPipes will create a table next to your destination table with the postfix `<destination_table_name>_clickpipes_error`. This table will contain any errors from the operations of your ClickPipe (network, connectivity, etc.) and also any data that don't conform to the schema. The error table has a [TTL](https://clickhouse.com/docs/en/engines/table-engines/mergetree-family/mergetree#table_engine-mergetree-ttl) of 7 days.
+ClickPipes will create a table next to your destination table with the postfix `<destination_table_name>_clickpipes_error`. This table will contain any errors from the operations of your ClickPipe (network, connectivity, etc.) and also any data that don't conform to the schema. The error table has a [TTL](/docs/en/engines/table-engines/mergetree-family/mergetree#table_engine-mergetree-ttl) of 7 days.
 If ClickPipes cannot connect to a data source or destination after 15min., ClickPipes instance stops and stores an appropriate message in the error table (providing the ClickHouse instance is available).
 
 ## F.A.Q
@@ -74,7 +74,7 @@ If ClickPipes cannot connect to a data source or destination after 15min., Click
 
 - **Does ClickPipes support data transformation?**
 
-Yes, ClickPipes supports basic data transformation by exposing the DDL creation. You can then apply more advanced transformations to the data as it is loaded into its destination table in a ClickHouse Cloud service leveraging ClickHouse's [materialized views feature](https://clickhouse.com/docs/en/guides/developer/cascading-materialized-views).
+Yes, ClickPipes supports basic data transformation by exposing the DDL creation. You can then apply more advanced transformations to the data as it is loaded into its destination table in a ClickHouse Cloud service leveraging ClickHouse's [materialized views feature](/docs/en/guides/developer/cascading-materialized-views).
 
 - **Does using ClickPipes incur an additional cost?**
 
````

docs/en/integrations/data-ingestion/data-formats/csv-tsv.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -40,7 +40,7 @@ FROM INFILE 'data_small.csv'
 FORMAT CSV
 ```
 
-Here, we use the `FORMAT CSV` clause so ClickHouse understands the file format. We can also load data directly from URLs using [url()](/docs/en/sql-reference/table-functions/url.md/) function or from S3 files using [s3()](/docs/en/sql-reference/table-functions/s3.md/) function.
+Here, we use the `FORMAT CSV` clause so ClickHouse understands the file format. We can also load data directly from URLs using [url()](/docs/en/sql-reference/table-functions/url.md) function or from S3 files using [s3()](/docs/en/sql-reference/table-functions/s3.md) function.
 
 :::tip
 We can skip explicit format setting for `file()` and `INFILE`/`OUTFILE`.
````
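For context on the links this hunk corrects: the surrounding page describes loading CSV either from a local file via `INFILE ... FORMAT CSV` or remotely via the `url()` table function. A minimal sketch of both patterns follows; the table and file names are illustrative, not taken from the commit, and `url()` behavior may vary with the ClickHouse version's schema-inference support:

```sql
-- Load a local CSV file into an existing table (run from clickhouse-client);
-- the FORMAT clause tells ClickHouse how to parse the file.
INSERT INTO sometable
FROM INFILE 'data_small.csv'
FORMAT CSV;

-- Read a remote CSV directly with the url() table function
-- (hypothetical URL; the second argument names the input format).
SELECT *
FROM url('https://example.com/data.csv', CSV)
LIMIT 5;
```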

docs/en/integrations/data-ingestion/data-formats/json/formats.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -273,7 +273,7 @@ Note that `JSONAsString` works perfectly fine in cases we have JSON object-per-l
 
 ## Schema for nested objects
 
-In cases when we're dealing with [nested JSON objects](../assets/list-nested.json), we can additionally define schema and use complex types ([`Array`](/docs/en/sql-reference/data-types/array.md/), [`Object Data Type`](/en/sql-reference/data-types/object-data-type) or [`Tuple`](/docs/en/sql-reference/data-types/tuple.md/)) to load data:
+In cases when we're dealing with [nested JSON objects](../assets/list-nested.json), we can additionally define schema and use complex types ([`Array`](/docs/en/sql-reference/data-types/array.md), [`Object Data Type`](/en/sql-reference/data-types/object-data-type) or [`Tuple`](/docs/en/sql-reference/data-types/tuple.md)) to load data:
 
 ```sql
 SELECT *
````

docs/en/integrations/data-ingestion/data-formats/json/schema.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -7,7 +7,7 @@ keywords: [json, clickhouse, inserting, loading, formats, schema]
 
 # Designing your schema
 
-While [schema inference](/docs/en/integrations/data-formats/JSON/inference) can be used to establish an initial schema for JSON data and query JSON data files in place, e.g., in S3, users should aim to establish an optimized versioned schema for their data. We discuss the options for modeling JSON structures below.
+While [schema inference](/docs/en/integrations/data-formats/json/inference) can be used to establish an initial schema for JSON data and query JSON data files in place, e.g., in S3, users should aim to establish an optimized versioned schema for their data. We discuss the options for modeling JSON structures below.
 
 ## Extract where possible
 
````
docs/en/integrations/data-ingestion/data-formats/templates-regex.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -126,7 +126,7 @@ FORMAT Template SETTINGS format_template_resultset = 'output.results',
 ```
 
 ### Exporting to HTML files
-Template-based results can also be exported to files using an [`INTO OUTFILE`](/docs/en/sql-reference/statements/select/into-outfile.md/) clause. Let's generate HTML files based on given [resultset](assets/html.results) and [row](assets/html.row) formats:
+Template-based results can also be exported to files using an [`INTO OUTFILE`](/docs/en/sql-reference/statements/select/into-outfile.md) clause. Let's generate HTML files based on given [resultset](assets/html.results) and [row](assets/html.row) formats:
 
 ```sql
 SELECT
````

docs/en/integrations/data-ingestion/dbms/postgresql/data-type-mappings.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -39,4 +39,4 @@ The following table shows the equivalent ClickHouse data types for Postgres.
 | JSON* | [String](/en/sql-reference/data-types/string), [Variant](/en/sql-reference/data-types/variant), [Nested](/en/sql-reference/data-types/nested-data-structures/nested#nestedname1-type1-name2-type2-), [Tuple](/en/sql-reference/data-types/tuple) |
 | JSONB | [String](/en/sql-reference/data-types/string) |
 
-*\* Production support for JSON in ClickHouse is under development. Currently users can either map JSON as String, and use [JSON functions](/en/sql-reference/functions/json-functions), or map the JSON directly to [Tuples](/en/sql-reference/data-types/tuple) and [Nested](/en/sql-reference/data-types/nested-data-structures/nested) if the structure is predictable. Read more about JSON [here](/en/integrations/data-formats/json#handle-as-structured-data).*
+*\* Production support for JSON in ClickHouse is under development. Currently users can either map JSON as String, and use [JSON functions](/en/sql-reference/functions/json-functions), or map the JSON directly to [Tuples](/en/sql-reference/data-types/tuple) and [Nested](/en/sql-reference/data-types/nested-data-structures/nested) if the structure is predictable. Read more about JSON [here](/en/integrations/data-formats/json/overview).*
````

docs/en/integrations/language-clients/java/client-v1.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -214,7 +214,7 @@ ClickHouseClient client = ClickHouseClient.builder()
     .build();
 ```
 
-See the [compression documentation](/en/native-protocol/compression) to learn more about different compression options.
+See the [compression documentation](/docs/en/data-compression/compression-modes) to learn more about different compression options.
 
 ### Multiple queries
 
````

docs/en/integrations/migration/rockset.md

Lines changed: 2 additions & 2 deletions

````diff
@@ -47,7 +47,7 @@ Rockset and ClickHouse both support loading data from a variety of sources.
 In Rockset, you create a data source and then create a _collection_ based on that data source.
 There are fully managed integrations for event streaming platforms, OLTP databases, and cloud bucket storage.
 
-In ClickHouse Cloud, the equivalent of fully managed integrations is [ClickPipes](/en/integrations/ClickPipes).
+In ClickHouse Cloud, the equivalent of fully managed integrations is [ClickPipes](/docs/en/integrations/clickpipes).
 ClickPipes supports continuously loading data from event streaming platforms and cloud bucket storage.
 ClickPipes loads data into _tables_.
 
@@ -102,7 +102,7 @@ There are multiple ways to work with JSON in ClickHouse:
 * JSON extract at query time
 * JSON extract at insert time
 
-To understand the best approach for your user case, see [our JSON documentation](/docs/en/integrations/data-formats/json).
+To understand the best approach for your user case, see [our JSON documentation](/docs/en/integrations/data-formats/json/overview).
 
 In addition, ClickHouse will soon have [a Semistructured column data type](https://github.com/ClickHouse/ClickHouse/issues/54864).
 This new type should give users the flexibility Rockset's JSON type offers.
````

0 commit comments