
Commit 551dd6b (parent 0924b30)
master: add v8.5.5 info (pingcap#22293)

8 files changed, +19 −19 lines

br/br-checkpoint-restore.md

Lines changed: 4 additions & 4 deletions
@@ -69,7 +69,7 @@ Cross-major-version checkpoint recovery is not recommended. For clusters where `
 > **Note:**
 >
-> Starting from v9.0.0, BR stores checkpoint data in the downstream cluster by default. You can specify an external storage for checkpoint data using the `--checkpoint-storage` parameter.
+> Starting from v8.5.5 and v9.0.0, BR stores checkpoint data in the downstream cluster by default. You can specify an external storage for checkpoint data using the `--checkpoint-storage` parameter.

 Checkpoint restore operations are divided into two parts: snapshot restore and PITR restore.

@@ -93,13 +93,13 @@ Note that before entering the log restore phase during the initial restore, `br`
 > **Note:**
 >
-> To ensure compatibility with clusters of earlier versions, starting from v9.0.0, if the system table `mysql.tidb_pitr_id_map` does not exist in the restore cluster, the `pitr_id_map` data will be written to the log backup directory. The file name is `pitr_id_maps/pitr_id_map.cluster_id:{downstream-cluster-ID}.restored_ts:{restored-ts}`.
+> To ensure compatibility with clusters of earlier versions, starting from v8.5.5 and v9.0.0, if the system table `mysql.tidb_pitr_id_map` does not exist in the restore cluster, the `pitr_id_map` data will be written to the log backup directory. The file name is `pitr_id_maps/pitr_id_map.cluster_id:{downstream-cluster-ID}.restored_ts:{restored-ts}`.

 ## Implementation details: store checkpoint data in the external storage

 > **Note:**
 >
-> Starting from v9.0.0, BR stores checkpoint data in the downstream cluster by default. You can specify an external storage for checkpoint data using the `--checkpoint-storage` parameter. For example:
+> Starting from v8.5.5 and v9.0.0, BR stores checkpoint data in the downstream cluster by default. You can specify an external storage for checkpoint data using the `--checkpoint-storage` parameter. For example:
 >
 > ```shell
 > ./br restore full -s "s3://backup-bucket/backup-prefix" --checkpoint-storage "s3://temp-bucket/checkpoints"
@@ -159,4 +159,4 @@ Note that before entering the log restore phase during the initial restore, `br`
 > **Note:**
 >
-> To ensure compatibility with clusters of earlier versions, starting from v9.0.0, if the system table `mysql.tidb_pitr_id_map` does not exist in the restore cluster and the `--checkpoint-storage` parameter is not specified, the `pitr_id_map` data will be written to the log backup directory. The file name is `pitr_id_maps/pitr_id_map.cluster_id:{downstream-cluster-ID}.restored_ts:{restored-ts}`.
+> To ensure compatibility with clusters of earlier versions, starting from v8.5.5 and v9.0.0, if the system table `mysql.tidb_pitr_id_map` does not exist in the restore cluster and the `--checkpoint-storage` parameter is not specified, the `pitr_id_map` data will be written to the log backup directory. The file name is `pitr_id_maps/pitr_id_map.cluster_id:{downstream-cluster-ID}.restored_ts:{restored-ts}`.
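To make the `pitr_id_map` file name format concrete, the following sketch assembles it from placeholder values. The cluster ID and restored TS below are hypothetical, for illustration only:

```shell
# Hypothetical values: a downstream cluster ID and a restored TS.
# Shows how the pitr_id_map file name in the log backup directory is composed.
downstream_cluster_id=7421394831
restored_ts=453976248521981953
echo "pitr_id_maps/pitr_id_map.cluster_id:${downstream_cluster_id}.restored_ts:${restored_ts}"
# → pitr_id_maps/pitr_id_map.cluster_id:7421394831.restored_ts:453976248521981953
```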

br/br-compact-log-backup.md

Lines changed: 1 addition & 1 deletion
@@ -15,7 +15,7 @@ Traditional log backups store write operations in a highly unstructured manner,
 - **Write amplification**: all writes must be compacted from L0 to the bottommost level by level.
 - **Dependency on full backups**: frequent full backups are required to control the amount of recovery data, which can impact application operations.

-Starting from v9.0.0, the compact log backup feature provides offline compaction capabilities, converting unstructured log backup data into structured SST files. This results in the following improvements:
+Starting from v8.5.5 and v9.0.0, the compact log backup feature provides offline compaction capabilities, converting unstructured log backup data into structured SST files. This results in the following improvements:

 - SST files can be quickly imported into the cluster, **improving recovery performance**.
 - Redundant data is removed during compaction, **reducing storage space consumption**.

br/br-pitr-manual.md

Lines changed: 5 additions & 5 deletions
@@ -505,7 +505,7 @@ tiup br restore point --pd="${PD_IP}:2379"
 ### Restore data using filters

-Starting from TiDB v9.0.0, you can use filters during PITR to restore specific databases or tables, enabling more fine-grained control over the data to be restored.
+Starting from TiDB v8.5.5 and v9.0.0, you can use filters during PITR to restore specific databases or tables, enabling more fine-grained control over the data to be restored.

 The filter patterns follow the same [table filtering syntax](/table-filter.md) as other BR operations:
@@ -557,7 +557,7 @@ tiup br restore point --pd="${PD_IP}:2379" \
 ### Concurrent restore operations

-Starting from TiDB v9.0.0, you can run multiple PITR restore tasks concurrently. This feature allows you to restore different datasets in parallel, improving efficiency for large-scale restore scenarios.
+Starting from TiDB v8.5.5 and v9.0.0, you can run multiple PITR restore tasks concurrently. This feature allows you to restore different datasets in parallel, improving efficiency for large-scale restore scenarios.

 Usage example for concurrent restores:
@@ -586,7 +586,7 @@ tiup br restore point --pd="${PD_IP}:2379" \
 ### Compatibility between ongoing log backup and snapshot restore

-Starting from v9.0.0, when a log backup task is running, if all of the following conditions are met, you can still perform snapshot restore (`br restore [full|database|table]`) and allow the restored data to be properly recorded by the ongoing log backup (hereinafter referred to as "log backup"):
+Starting from v8.5.5 and v9.0.0, when a log backup task is running, if all of the following conditions are met, you can still perform snapshot restore (`br restore [full|database|table]`) and allow the restored data to be properly recorded by the ongoing log backup (hereinafter referred to as "log backup"):

 - The node performing backup and restore operations has the following necessary permissions:
     - Read access to the external storage containing the backup source, for snapshot restore
@@ -604,11 +604,11 @@ If any of the above conditions are not met, you can restore the data by followin
 > **Note:**
 >
-> When restoring a log backup that contains records of snapshot (full) restore data, you must use BR v9.0.0 or later. Otherwise, restoring the recorded full restore data might fail.
+> When restoring a log backup that contains records of snapshot (full) restore data, you must use BR v8.5.5 or later. Otherwise, restoring the recorded full restore data might fail.

 ### Compatibility between ongoing log backup and PITR operations

-Starting from TiDB v9.0.0, you can perform PITR operations while a log backup task is running by default. The system automatically handles compatibility between these operations.
+Starting from TiDB v8.5.5 and v9.0.0, you can perform PITR operations while a log backup task is running by default. The system automatically handles compatibility between these operations.

 #### Important limitation for PITR with ongoing log backup
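A filtered PITR restore of the kind this file's changes describe might be assembled as follows. This is a sketch only: `${PD_IP}`, the bucket paths, and the `test.*` pattern are placeholders, and it assumes `br restore point` accepts a `--filter` flag using the table filtering syntax referenced above. The command is printed with `echo` so the assembled flags can be inspected without a live cluster; drop the `echo` to run it for real:

```shell
# Sketch only: placeholder PD address, bucket paths, and filter pattern.
# Prints the assembled filtered PITR restore command for inspection.
echo tiup br restore point \
  --pd="${PD_IP}:2379" \
  --storage="s3://backup-bucket/log-backup" \
  --full-backup-storage="s3://backup-bucket/snapshot-backup" \
  --filter="test.*"
```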

br/br-snapshot-guide.md

Lines changed: 1 addition & 1 deletion
@@ -153,7 +153,7 @@ When you perform a snapshot backup, BR backs up system tables as tables with the
 - Starting from BR v5.1.0, when you back up snapshots, BR automatically backs up the **system tables** in the `mysql` schema, but does not restore these system tables by default.
 - Starting from v6.2.0, BR lets you specify `--with-sys-table` to restore **data in some system tables**.
 - Starting from v7.6.0, BR enables `--with-sys-table` by default, which means that BR restores **data in some system tables** by default.
-- Starting from v9.0.0, BR lets you specify `--fast-load-sys-tables` to restore system tables physically. This approach uses the `RENAME TABLE` DDL statement to atomically swap the system tables in the `__TiDB_BR_Temporary_mysql` database with the system tables in the `mysql` database. Unlike the logical restoration of system tables using the `REPLACE INTO` SQL statement, physical restoration completely overwrites the existing data in the system tables.
+- Starting from v8.5.5 and v9.0.0, BR lets you specify `--fast-load-sys-tables` to restore system tables physically. This approach uses the `RENAME TABLE` DDL statement to atomically swap the system tables in the `__TiDB_BR_Temporary_mysql` database with the system tables in the `mysql` database. Unlike the logical restoration of system tables using the `REPLACE INTO` SQL statement, physical restoration completely overwrites the existing data in the system tables.

 **BR can restore data in the following system tables:**

br/br-snapshot-manual.md

Lines changed: 3 additions & 3 deletions
@@ -129,11 +129,11 @@ tiup br restore full \
 > **Note:**
 >
-> Starting from v9.0.0, when the `--load-stats` parameter is set to `false`, BR no longer writes statistics for the restored tables to the `mysql.stats_meta` table. After the restore is complete, you can manually execute the [`ANALYZE TABLE`](/sql-statements/sql-statement-analyze-table.md) SQL statement to update the relevant statistics.
+> Starting from v8.5.5 and v9.0.0, when the `--load-stats` parameter is set to `false`, BR no longer writes statistics for the restored tables to the `mysql.stats_meta` table. After the restore is complete, you can manually execute the [`ANALYZE TABLE`](/sql-statements/sql-statement-analyze-table.md) SQL statement to update the relevant statistics.

 When the backup and restore feature backs up data, it stores statistics in JSON format within the `backupmeta` file. When restoring data, it loads statistics in JSON format into the cluster. For more information, see [LOAD STATS](/sql-statements/sql-statement-load-stats.md).

-Starting from 9.0.0, BR introduces the `--fast-load-sys-tables` parameter, which is enabled by default. When restoring data to a new cluster using the `br` command-line tool, and the IDs of tables and partitions between the upstream and downstream clusters can be reused (otherwise, BR will automatically fall back to logically load statistics), enabling `--fast-load-sys-tables` lets BR to first restore the statistics-related system tables to the temporary system database `__TiDB_BR_Temporary_mysql`, and then atomically swap these tables with the corresponding tables in the `mysql` database using the `RENAME TABLE` statement.
+Starting from v8.5.5 and v9.0.0, BR introduces the `--fast-load-sys-tables` parameter, which is enabled by default. When restoring data to a new cluster using the `br` command-line tool, if the IDs of tables and partitions between the upstream and downstream clusters can be reused (otherwise, BR automatically falls back to loading statistics logically), enabling `--fast-load-sys-tables` lets BR first restore the statistics-related system tables to the temporary system database `__TiDB_BR_Temporary_mysql`, and then atomically swap these tables with the corresponding tables in the `mysql` database using the `RENAME TABLE` statement.

 The following is an example:

@@ -194,7 +194,7 @@ Download&Ingest SST <-----------------------------------------------------------
 Restore Pipeline <-------------------------/...............................................> 17.12%
 ```

-Starting from TiDB v9.0.0, BR lets you specify `--fast-load-sys-tables` to restore statistics physically in a new cluster:
+Starting from TiDB v8.5.5 and v9.0.0, BR lets you specify `--fast-load-sys-tables` to restore statistics physically in a new cluster:

 ```shell
 tiup br restore full \

configure-store-limit.md

Lines changed: 1 addition & 1 deletion
@@ -54,7 +54,7 @@ tiup ctl:v<CLUSTER_VERSION> pd store limit all 5 add-peer // All stores
 tiup ctl:v<CLUSTER_VERSION> pd store limit all 5 remove-peer // All stores can at most delete 5 peers per minute.
 ```

-Starting from v8.5.5 and v9.0.0, you can set the speed limit for removing-peers operations for all stores of a specific storage engine type, as shown in the following examples:
+Starting from v8.5.5 and v9.0.0, you can set the speed limit for removing-peer operations for all stores of a specific storage engine type, as shown in the following examples:

 ```bash
 tiup ctl:v<CLUSTER_VERSION> pd store limit all engine tikv 5 remove-peer // All TiKV stores can at most remove 5 peers per minute.

system-variables.md

Lines changed: 3 additions & 3 deletions
@@ -1775,7 +1775,7 @@ mysql> SELECT job_info FROM mysql.analyze_jobs ORDER BY end_time DESC LIMIT 1;
 - If `tidb_ddl_enable_fast_reorg` is set to `OFF`, `ADD INDEX` is executed as a transaction. If there are many update operations such as `UPDATE` and `REPLACE` in the target columns during the `ADD INDEX` execution, a larger batch size indicates a larger probability of transaction conflicts. In this case, it is recommended that you set the batch size to a smaller value. The minimum value is 32.
 - If the transaction conflict does not exist, or if `tidb_ddl_enable_fast_reorg` is set to `ON`, you can set the batch size to a large value. This makes data backfilling faster but also increases the write pressure on TiKV. For a proper batch size, you also need to refer to the value of `tidb_ddl_reorg_worker_cnt`. See [Interaction Test on Online Workloads and `ADD INDEX` Operations](https://docs.pingcap.com/tidb/dev/online-workloads-and-add-index-operations) for reference.
 - Starting from v8.3.0, this parameter is supported at the SESSION level. Modifying the parameter at the GLOBAL level will not impact currently running DDL statements. It will only apply to DDLs submitted in new sessions.
-- Starting from v8.5.0, you can modify this parameter for a running DDL job by executing `ADMIN ALTER DDL JOBS <job_id> BATCH_SIZE = <new_batch_size>;`.
+- Starting from v8.5.0, you can modify this parameter for a running DDL job by executing `ADMIN ALTER DDL JOBS <job_id> BATCH_SIZE = <new_batch_size>;`. For more information, see [`ADMIN ALTER DDL JOBS`](/sql-statements/sql-statement-admin-alter-ddl.md).

 ### tidb_ddl_reorg_priority

@@ -1851,7 +1851,7 @@ Assume that you have a cluster with 4 TiDB nodes and multiple TiKV nodes. In thi
 - Unit: Threads
 - This variable is used to set the concurrency of the DDL operation in the `re-organize` phase.
 - Starting from v8.3.0, this parameter is supported at the SESSION level. Modifying the parameter at the GLOBAL level will not impact currently running DDL statements. It will only apply to DDLs submitted in new sessions.
-- Starting from v8.5.0, you can modify this parameter for a running DDL job by executing `ADMIN ALTER DDL JOBS <job_id> THREAD = <new_thread_count>;`.
+- Starting from v8.5.0, you can modify this parameter for a running DDL job by executing `ADMIN ALTER DDL JOBS <job_id> THREAD = <new_thread_count>;`. For more information, see [`ADMIN ALTER DDL JOBS`](/sql-statements/sql-statement-admin-alter-ddl.md).

 ### `tidb_enable_fast_create_table` <span class="version-mark">New in v8.0.0</span>
@@ -6407,7 +6407,7 @@ For details, see [Identify Slow Queries](/identify-slow-queries.md).
 > - `PARALLEL` and `PARALLEL-FAST` modes are incompatible with [`tidb_tso_client_batch_max_wait_time`](#tidb_tso_client_batch_max_wait_time-new-in-v530) and [`tidb_enable_tso_follower_proxy`](#tidb_enable_tso_follower_proxy-new-in-v530). If either [`tidb_tso_client_batch_max_wait_time`](#tidb_tso_client_batch_max_wait_time-new-in-v530) is set to a non-zero value or [`tidb_enable_tso_follower_proxy`](#tidb_enable_tso_follower_proxy-new-in-v530) is enabled, configuring `tidb_tso_client_rpc_mode` does not take effect, and TiDB always works in `DEFAULT` mode.
 > - `PARALLEL` and `PARALLEL-FAST` modes are designed to reduce the average time for retrieving TS in TiDB. In situations with significant latency fluctuations, such as long-tail latency or latency spikes, these two modes might not provide any remarkable performance improvements.

-### tidb_cb_pd_metadata_error_rate_threshold_ratio <span class="version-mark">New in v9.0.0</span>
+### tidb_cb_pd_metadata_error_rate_threshold_ratio <span class="version-mark">New in v8.5.5 and v9.0.0</span>

 - Scope: GLOBAL
 - Persists to cluster: Yes

tidb-configuration-file.md

Lines changed: 1 addition & 1 deletion
@@ -640,7 +640,7 @@ Configuration items related to performance.
 ### `enable-async-batch-get` <span class="version-mark">New in v8.5.5 and v9.0.0</span>

 + Controls whether TiDB uses asynchronous mode to execute the Batch Get operator. Using asynchronous mode can reduce goroutine overhead and provide better performance. Generally, there is no need to modify this configuration item.
-+ Default value: `true`
++ Default value: `true` for v9.0.0 and later versions. In v8.5.5, the default value is `false`.

 ## opentracing
