Commit c93bc75
Merge branch 'main' into fix_vale_errors
2 parents dd2fba2 + c28c0d7

File tree: 78 files changed, +2008 −219 lines


docs/best-practices/minimize_optimize_joins.md (1 addition, 1 deletion)

@@ -26,7 +26,7 @@ For a full guide on denormalizing data in ClickHouse see [here](/data-modeling/d
 
 ## When JOINs are required {#when-joins-are-required}
 
-When JOINs are required, ensure you’re using **at least version 24.12 and preferably the latest version**, as JOIN performance continues to improve with each new release. As of ClickHouse 24.12, the query planner now automatically places the smaller table on the right side of the join for optimal performance - a task that previously had to be done manually. Even more enhancements are coming soon, including more aggressive filter pushdown and automatic re-ordering of multiple joins.
+When JOINs are required, ensure you're using **at least version 24.12 and preferably the latest version**, as JOIN performance continues to improve with each new release. As of ClickHouse 24.12, the query planner now automatically places the smaller table on the right side of the join for optimal performance - a task that previously had to be done manually. Even more enhancements are coming soon, including more aggressive filter pushdown and automatic re-ordering of multiple joins.
 
 Follow these best practices to improve JOIN performance:
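The guidance above notes that, since 24.12, the planner automatically places the smaller table on the right side of the join. The right side is the build side of a hash join, and build-side size determines the hash table's memory footprint. A minimal Python sketch of that intuition, using hypothetical tables and names (not ClickHouse's actual implementation):

```python
def hash_join(probe_rows, build_rows, key):
    """Toy hash join: build a hash table on the (ideally smaller) build side,
    then stream the larger probe side past it. Memory use scales with the
    build side, which is why the smaller table belongs on the right."""
    table = {}
    for row in build_rows:
        table.setdefault(row[key], []).append(row)
    joined = []
    for row in probe_rows:
        for match in table.get(row[key], []):
            joined.append({**row, **match})
    return joined

# Hypothetical data: the larger relation is the probe (left) side,
# the smaller one the build (right) side.
orders = [{"id": i, "user_id": i % 2} for i in range(6)]            # "large" table
users = [{"user_id": 0, "name": "a"}, {"user_id": 1, "name": "b"}]  # "small" table
rows = hash_join(orders, users, "user_id")
```

Before 24.12 this ordering had to be written by hand in the query; newer planners swap the sides automatically when the statistics favor it.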

docs/cloud/changelogs/changelog-25_1-25_4.md (2 additions, 2 deletions)

@@ -274,7 +274,7 @@ sidebar_label: 'v25.4'
 * Don't fail silently if user executing `SYSTEM DROP REPLICA` doesn't have enough permissions. [#75377](https://github.com/ClickHouse/ClickHouse/pull/75377) ([Bharat Nallan](https://github.com/bharatnc)).
 * Add a ProfileEvent about the number of times any of system logs has failed to flush. [#75466](https://github.com/ClickHouse/ClickHouse/pull/75466) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
 * Add check and logging for decrypting and decompressing. [#75471](https://github.com/ClickHouse/ClickHouse/pull/75471) ([Vitaly Baranov](https://github.com/vitlibar)).
-* Added support for the micro sign (U+00B5) in the `parseTimeDelta` function. Now both the micro sign (U+00B5) and the Greek letter mu (U+03BC) are recognized as valid representations for microseconds, aligning ClickHouse's behavior with Go’s implementation ([see time.go](https://github.com/golang/go/blob/ad7b46ee4ac1cee5095d64b01e8cf7fcda8bee5e/src/time/time.go#L983C19-L983C20) and [time/format.go](https://github.com/golang/go/blob/ad7b46ee4ac1cee5095d64b01e8cf7fcda8bee5e/src/time/format.go#L1608-L1609)). [#75472](https://github.com/ClickHouse/ClickHouse/pull/75472) ([Vitaly Orlov](https://github.com/orloffv)).
+* Added support for the micro sign (U+00B5) in the `parseTimeDelta` function. Now both the micro sign (U+00B5) and the Greek letter mu (U+03BC) are recognized as valid representations for microseconds, aligning ClickHouse's behavior with Go's implementation ([see time.go](https://github.com/golang/go/blob/ad7b46ee4ac1cee5095d64b01e8cf7fcda8bee5e/src/time/time.go#L983C19-L983C20) and [time/format.go](https://github.com/golang/go/blob/ad7b46ee4ac1cee5095d64b01e8cf7fcda8bee5e/src/time/format.go#L1608-L1609)). [#75472](https://github.com/ClickHouse/ClickHouse/pull/75472) ([Vitaly Orlov](https://github.com/orloffv)).
 * Replace server setting (`send_settings_to_client`) with client setting (`apply_settings_from_server`) that controls whether client-side code (e.g. parsing INSERT data and formatting query output) should use settings from server's `users.xml` and user profile. Otherwise only settings from client command line, session, and the query are used. Note that this only applies to native client (not e.g. HTTP), and doesn't apply to most of query processing (which happens on the server). [#75478](https://github.com/ClickHouse/ClickHouse/pull/75478) ([Michael Kolupaev](https://github.com/al13n321)).
 * Keeper improvement: disable digest calculation when committing to in-memory storage for better performance. It can be enabled with `keeper_server.digest_enabled_on_commit` config. Digest is still calculated when preprocessing requests. [#75490](https://github.com/ClickHouse/ClickHouse/pull/75490) ([Antonio Andelic](https://github.com/antonio2368)).
 * Push down filter expression from JOIN ON when possible. [#75536](https://github.com/ClickHouse/ClickHouse/pull/75536) ([Vladimir Cherkasov](https://github.com/vdimir)).

@@ -621,7 +621,7 @@ sidebar_label: 'v25.4'
 * The universal installation script will propose installation even on macOS. [#74339](https://github.com/ClickHouse/ClickHouse/pull/74339) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
 * Fix build when kerberos is not enabled. [#74771](https://github.com/ClickHouse/ClickHouse/pull/74771) ([flynn](https://github.com/ucasfl)).
 * Update to embedded LLVM 19. [#75148](https://github.com/ClickHouse/ClickHouse/pull/75148) ([Konstantin Bogdanov](https://github.com/thevar1able)).
-* *Potentially breaking*: Improvement to set even more restrictive defaults. The current defaults are already secure. The user has to specify an option to publish ports explicitly. But when the `default` user doesn’t have a password set by `CLICKHOUSE_PASSWORD` and/or a username changed by `CLICKHOUSE_USER` environment variables, it should be available only from the local system as an additional level of protection. [#75259](https://github.com/ClickHouse/ClickHouse/pull/75259) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
+* *Potentially breaking*: Improvement to set even more restrictive defaults. The current defaults are already secure. The user has to specify an option to publish ports explicitly. But when the `default` user doesn't have a password set by `CLICKHOUSE_PASSWORD` and/or a username changed by `CLICKHOUSE_USER` environment variables, it should be available only from the local system as an additional level of protection. [#75259](https://github.com/ClickHouse/ClickHouse/pull/75259) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
 * Integration tests have a 1-hour timeout for single batch of parallel tests running. When this timeout is reached `pytest` is killed without some logs. Internal pytest timeout is set to 55 minutes to print results from a session and not trigger external timeout signal. Closes [#75532](https://github.com/ClickHouse/ClickHouse/issues/75532). [#75533](https://github.com/ClickHouse/ClickHouse/pull/75533) ([Ilya Yatsishin](https://github.com/qoega)).
 * Make all clickhouse-server related actions a function, and execute them only when launching the default binary in `entrypoint.sh`. A long-postponed improvement was suggested in [#50724](https://github.com/ClickHouse/ClickHouse/issues/50724). Added switch `--users` to `clickhouse-extract-from-config` to get values from the `users.xml`. [#75643](https://github.com/ClickHouse/ClickHouse/pull/75643) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
 * For stress tests if server did not exit while we collected stacktraces via gdb additional wait time is added to make `Possible deadlock on shutdown (see gdb.log)` detection less noisy. It will only add delay for cases when test did not finish successfully. [#75668](https://github.com/ClickHouse/ClickHouse/pull/75668) ([Ilya Yatsishin](https://github.com/qoega)).
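The `parseTimeDelta` entry in the first hunk above hinges on a subtle Unicode fact: the micro sign and the Greek letter mu are distinct code points that render almost identically, so a parser matching only one of them rejects the other spelling of "µs". A quick Python illustration of the distinction (illustrative only, not the ClickHouse implementation):

```python
import unicodedata

micro_sign = "\u00b5"  # µ MICRO SIGN
greek_mu = "\u03bc"    # μ GREEK SMALL LETTER MU

# Visually near-identical, but different code points:
print(micro_sign == greek_mu)          # False
print(ord(micro_sign), ord(greek_mu))  # 181 956

# NFKC normalization folds the micro sign into the Greek mu,
# which is one way a parser can accept both spellings uniformly:
print(unicodedata.normalize("NFKC", micro_sign) == greek_mu)  # True
```

Go's time parser accepts both code points explicitly, and the change above brings `parseTimeDelta` in line with that behavior.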

docs/cloud/manage/backups/export-backups-to-own-cloud-account.md (10 additions, 1 deletion)

@@ -12,12 +12,16 @@ import EnterprisePlanFeatureBadge from '@theme/badges/EnterprisePlanFeatureBadge
 ClickHouse Cloud supports taking backups to your own cloud service provider (CSP) account (AWS S3, Google Cloud Storage, or Azure Blob Storage).
 For details of how ClickHouse Cloud backups work, including "full" vs. "incremental" backups, see the [backups](overview.md) docs.
 
-Here we show examples of how to take full and incremental backups to AWS, GCP, Azure object storage as well as how to restore from the backups.
+Here we show examples of how to take full and incremental backups to AWS, GCP, Azure object storage as well as how to restore from the backups. The BACKUP commands listed below are run within the original service. The RESTORE commands are run from a new service where the backup should be restored.
 
 :::note
 Users should be aware that any usage where backups are being exported to a different region in the same cloud provider, or to another cloud provider (in the same or different region) will incur [data transfer](../network-data-transfer.mdx) charges.
 :::
 
+:::note
+Backup / Restore into your own bucket for services utilizing [TDE](https://clickhouse.com/docs/cloud/security/cmek#transparent-data-encryption-tde) is currently not supported.
+:::
+
 ## Requirements {#requirements}
 
 You will need the following details to export/restore backups to your own CSP storage bucket.

@@ -59,6 +63,11 @@ You will need the following details to export/restore backups to your own CSP st
 <hr/>
 # Backup / Restore
 
+:::note
+1. For restoring the backup from your own bucket into a new service, you will need to update the trust policy of your backups storage bucket to allow access from the new service.
+2. The Backup / Restore commands need to be run from the database command line. For restore to a new service, you will first need to create the service and then run the command.
+:::
+
 ## Backup / Restore to AWS S3 Bucket {#backup--restore-to-aws-s3-bucket}
 
 ### Take a DB Backup {#take-a-db-backup}

docs/cloud/manage/billing.md (4 additions, 4 deletions)

@@ -417,7 +417,7 @@ This dimension covers the compute units provisioned per service just for Postgres
 ClickPipes. Compute is shared across all Postgres pipes within a service. **It
 is provisioned when the first Postgres pipe is created and deallocated when no
 Postgres CDC pipes remain**. The amount of compute provisioned depends on your
-organization’s tier:
+organization's tier:
 
 | Tier | Cost |
 |------------------------------|-----------------------------------------------|

@@ -426,7 +426,7 @@ organization’s tier:
 
 #### Example {#example}
 
-Let’s say your service is in Scale tier and has the following setup:
+Let's say your service is in Scale tier and has the following setup:
 
 - 2 Postgres ClickPipes running continuous replication
 - Each pipe ingests 500 GB of data changes (CDC) per month

@@ -540,7 +540,7 @@ Postgres CDC ClickPipes pricing begins appearing on monthly bills starting
 **September 1st, 2025**, for all customers—both existing and new. Until then,
 usage is free. Customers have a **3-month window** starting from **May 29**
 (the GA announcement date) to review and optimize their usage if needed, although
-we expect most won’t need to make any changes.
+we expect most won't need to make any changes.
 
 </details>

@@ -550,7 +550,7 @@ we expect most won’t need to make any changes.
 
 No data ingestion charges apply while a pipe is paused, since no data is moved.
 However, compute charges still apply—either 0.5 or 1 compute unit—based on your
-organization’s tier. This is a fixed service-level cost and applies across all
+organization's tier. This is a fixed service-level cost and applies across all
 pipes within that service.
 
 </details>
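The billing example in this file (Scale tier, 2 pipes, 500 GB of CDC each) combines a fixed service-level compute charge, shared across all pipes, with per-GB ingest. A sketch of that arithmetic in Python; the unit prices and the 1-compute-unit assumption are placeholders for illustration, not ClickHouse's published rates:

```python
# Placeholder unit prices, for illustration only; see the billing docs for real rates.
COMPUTE_UNIT_PER_HOUR = 0.20   # hypothetical $/compute-unit-hour
INGEST_PER_GB = 0.04           # hypothetical $/GB of CDC data

def monthly_postgres_cdc_cost(compute_units, gb_per_pipe, num_pipes, hours=730):
    # Compute is provisioned once per service and shared by all Postgres pipes,
    # so it is NOT multiplied by the number of pipes; only ingest scales with them.
    compute = compute_units * COMPUTE_UNIT_PER_HOUR * hours
    ingest = gb_per_pipe * num_pipes * INGEST_PER_GB
    return compute + ingest

# The example setup: assumed 1 compute unit, 2 pipes, 500 GB each per month.
cost = monthly_postgres_cdc_cost(compute_units=1, gb_per_pipe=500, num_pipes=2)
```

Note the structural point rather than the numbers: pausing a pipe stops the ingest term but, as the FAQ entry above says, the compute term keeps accruing while any Postgres pipe exists in the service.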

docs/cloud/manage/troubleshooting-billing-issues.md (1 addition, 1 deletion)

@@ -12,7 +12,7 @@ import Image from '@theme/IdealImage';
 
 ## Fixing non-working payment details {#fixing-non-working-payment-details}
 
-Use of ClickHouse Cloud requires an active, working credit card. For 30 days after trial expiration or after your last successful payment, your services will continue to run. However, if we are unable to charge a valid credit card, cloud console functionality will be limited for your organization.
+Use of ClickHouse Cloud requires an active, working credit card. For 30 days after trial expiration or after your last successful payment, your services will continue to run. However, if we are unable to charge a valid credit card, cloud console functionality for your org will be restricted, including scaling (up to 120 GiB per replica) and starting of your services if stopped.
 
 **If a valid credit card is not added 30 days after trial expiration or your last successful payment, your data will be deleted.**

docs/cloud/reference/changelog.md (7 additions, 2 deletions)

@@ -30,10 +30,15 @@ import query_endpoints from '@site/static/images/cloud/reference/may-17-query-en
 import dashboards from '@site/static/images/cloud/reference/may-30-dashboards.png';
 
 In addition to this ClickHouse Cloud changelog, please see the [Cloud Compatibility](/cloud/reference/cloud-compatibility.md) page.
+
+## June 13, 2025 {#june-13-2025}
+
+- We're excited to announce that ClickHouse Cloud Dashboards are now generally available. Dashboards allow users to visualize queries on dashboards, interact with data via filters and query parameters, and manage sharing.
+
+- API key IP filters: we are introducing an additional layer of protection for your interactions with ClickHouse Cloud. When generating an API key, you may set up an IP allow list to limit where the API key may be used. Please refer to the [documentation](https://clickhouse.com/docs/cloud/security/setting-ip-filters) for details.
 
 ## May 30, 2025 {#may-30-2025}
 
-- We’re excited to announce general availability of **ClickPipes for Postgres CDC**
+- We're excited to announce general availability of **ClickPipes for Postgres CDC**
 in ClickHouse Cloud. With just a few clicks, you can now replicate your Postgres
 databases and unlock blazing-fast, real-time analytics. The connector delivers
 faster data synchronization, latency as low as a few seconds, automatic schema changes,

@@ -64,7 +69,7 @@ In addition to this ClickHouse Cloud changelog, please see the [Cloud Compatibil
 * Memory & CPU: Graphs for `CGroupMemoryTotal` (Allocated Memory), `CGroupMaxCPU` (allocated CPU),
 `MemoryResident` (memory used), and `ProfileEvent_OSCPUVirtualTimeMicroseconds` (CPU used)
 * Data Transfer: Graphs showing data ingress and egress from ClickHouse Cloud. Learn more [here](/cloud/manage/network-data-transfer).
-- We’re excited to announce the launch of our new ClickHouse Cloud Prometheus/Grafana mix-in,
+- We're excited to announce the launch of our new ClickHouse Cloud Prometheus/Grafana mix-in,
 built to simplify monitoring for your ClickHouse Cloud services.
 This mix-in uses our Prometheus-compatible API endpoint to seamlessly integrate
 ClickHouse metrics into your existing Prometheus and Grafana setup. It includes

docs/data-modeling/projections.md (1 addition, 1 deletion)

@@ -326,7 +326,7 @@ paid prices is streaming 2.17 million rows. When we directly used a second table
 optimized for this query, only 81.92 thousand rows were streamed from disk.
 
 The reason for the difference is that currently, the `optimize_read_in_order`
-optimization mentioned above isn’t supported for projections.
+optimization mentioned above isn't supported for projections.
 
 We inspect the `system.query_log` table to see that ClickHouse
 automatically used the two projections for the two queries above (see the

docs/guides/best-practices/index.md (2 additions, 2 deletions)

@@ -15,10 +15,10 @@ which covers the main concepts required to improve performance.
 |---------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
 | [Query Optimization Guide](/optimize/query-optimization) | A good place to start for query optimization, this simple guide describes common scenarios of how to use different performance and optimization techniques to improve query performance. |
 | [Primary Indexes Advanced Guide](/guides/best-practices/sparse-primary-indexes) | A deep dive into ClickHouse indexing including how it differs from other DB systems, how ClickHouse builds and uses a table's spare primary index and what some of the best practices are for indexing in ClickHouse. |
-| [Query Parallelism](/optimize/query-parallelism) | Explains how ClickHouse parallelizes query execution using processing lanes and the max_threads setting. Covers how data is distributed across lanes, how max_threads is applied, when it isn’t fully used, and how to inspect execution with tools like EXPLAIN and trace logs. |
+| [Query Parallelism](/optimize/query-parallelism) | Explains how ClickHouse parallelizes query execution using processing lanes and the max_threads setting. Covers how data is distributed across lanes, how max_threads is applied, when it isn't fully used, and how to inspect execution with tools like EXPLAIN and trace logs. |
 | [Partitioning Key](/optimize/partitioning-key) | Delves into ClickHouse partition key optimization. Explains how choosing the right partition key can significantly improve query performance by allowing ClickHouse to quickly locate relevant data segments. Covers best practices for selecting efficient partition keys and potential pitfalls to avoid. |
 | [Data Skipping Indexes](/optimize/skipping-indexes) | Explains data skipping indexes as a way to optimize performance. |
-| [PREWHERE Optimization](/optimize/prewhere) | Explains how PREWHERE reduces I/O by avoiding reading unnecessary column data. Shows how it’s applied automatically, how the filtering order is chosen, and how to monitor it using EXPLAIN and logs. |
+| [PREWHERE Optimization](/optimize/prewhere) | Explains how PREWHERE reduces I/O by avoiding reading unnecessary column data. Shows how it's applied automatically, how the filtering order is chosen, and how to monitor it using EXPLAIN and logs. |
 | [Bulk Inserts](/optimize/bulk-inserts) | Explains the benefits of using bulk inserts in ClickHouse. |
 | [Asynchronous Inserts](/optimize/asynchronous-inserts) | Focuses on ClickHouse's asynchronous inserts feature. It likely explains how asynchronous inserts work (batching data on the server for efficient insertion) and their benefits (improved performance by offloading insert processing). It might also cover enabling asynchronous inserts and considerations for using them effectively in your ClickHouse environment. |
 | [Avoid Mutations](/optimize/avoid-mutations) | Discusses the importance of avoiding mutations (updates and deletes) in ClickHouse. It recommends using append-only inserts for optimal performance and suggests alternative approaches for handling data changes. |
