`br/br-checkpoint-restore.md` (4 additions, 4 deletions)

@@ -69,7 +69,7 @@ Cross-major-version checkpoint recovery is not recommended. For clusters where `
 > **Note:**
 >
-> Starting from v9.0.0, BR stores checkpoint data in the downstream cluster by default. You can specify an external storage for checkpoint data using the `--checkpoint-storage` parameter.
+> Starting from v8.5.5 and v9.0.0, BR stores checkpoint data in the downstream cluster by default. You can specify an external storage for checkpoint data using the `--checkpoint-storage` parameter.
 
 Checkpoint restore operations are divided into two parts: snapshot restore and PITR restore.

@@ -93,13 +93,13 @@ Note that before entering the log restore phase during the initial restore, `br`
 > **Note:**
 >
-> To ensure compatibility with clusters of earlier versions, starting from v9.0.0, if the system table `mysql.tidb_pitr_id_map` does not exist in the restore cluster, the `pitr_id_map` data will be written to the log backup directory. The file name is `pitr_id_maps/pitr_id_map.cluster_id:{downstream-cluster-ID}.restored_ts:{restored-ts}`.
+> To ensure compatibility with clusters of earlier versions, starting from v8.5.5 and v9.0.0, if the system table `mysql.tidb_pitr_id_map` does not exist in the restore cluster, the `pitr_id_map` data will be written to the log backup directory. The file name is `pitr_id_maps/pitr_id_map.cluster_id:{downstream-cluster-ID}.restored_ts:{restored-ts}`.
 
 ## Implementation details: store checkpoint data in the external storage
 
 > **Note:**
 >
-> Starting from v9.0.0, BR stores checkpoint data in the downstream cluster by default. You can specify an external storage for checkpoint data using the `--checkpoint-storage` parameter. For example:
+> Starting from v8.5.5 and v9.0.0, BR stores checkpoint data in the downstream cluster by default. You can specify an external storage for checkpoint data using the `--checkpoint-storage` parameter. For example:
 >
 > ```shell
 > ./br restore full -s "s3://backup-bucket/backup-prefix" --checkpoint-storage "s3://temp-bucket/checkpoints"

@@ -159,4 +159,4 @@ Note that before entering the log restore phase during the initial restore, `br`
 > **Note:**
 >
-> To ensure compatibility with clusters of earlier versions, starting from v9.0.0, if the system table `mysql.tidb_pitr_id_map` does not exist in the restore cluster and the `--checkpoint-storage` parameter is not specified, the `pitr_id_map` data will be written to the log backup directory. The file name is `pitr_id_maps/pitr_id_map.cluster_id:{downstream-cluster-ID}.restored_ts:{restored-ts}`.
+> To ensure compatibility with clusters of earlier versions, starting from v8.5.5 and v9.0.0, if the system table `mysql.tidb_pitr_id_map` does not exist in the restore cluster and the `--checkpoint-storage` parameter is not specified, the `pitr_id_map` data will be written to the log backup directory. The file name is `pitr_id_maps/pitr_id_map.cluster_id:{downstream-cluster-ID}.restored_ts:{restored-ts}`.
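As a concrete illustration of the checkpoint notes above, an interrupted restore can be rerun with the same `--checkpoint-storage` target so that BR resumes from the stored checkpoint data. This is a sketch only; the bucket names are placeholders, not taken from the changed files:

```shell
# First attempt; checkpoint data is written to the external storage.
./br restore full -s "s3://backup-bucket/backup-prefix" --checkpoint-storage "s3://temp-bucket/checkpoints"

# If the restore is interrupted, rerun the same command.
# BR reads the checkpoint data and skips data ranges that are already restored.
./br restore full -s "s3://backup-bucket/backup-prefix" --checkpoint-storage "s3://temp-bucket/checkpoints"
```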
`br/br-compact-log-backup.md` (1 addition, 1 deletion)

@@ -15,7 +15,7 @@ Traditional log backups store write operations in a highly unstructured manner,
 - **Write amplification**: all writes must be compacted from L0 to the bottommost level by level.
 - **Dependency on full backups**: frequent full backups are required to control the amount of recovery data, which can impact application operations.
 
-Starting from v9.0.0, the compact log backup feature provides offline compaction capabilities, converting unstructured log backup data into structured SST files. This results in the following improvements:
+Starting from v8.5.5 and v9.0.0, the compact log backup feature provides offline compaction capabilities, converting unstructured log backup data into structured SST files. This results in the following improvements:
 
 - SST files can be quickly imported into the cluster, **improving recovery performance**.
 - Redundant data is removed during compaction, **reducing storage space consumption**.
`br/br-pitr-manual.md` (5 additions, 5 deletions)

@@ -505,7 +505,7 @@ tiup br restore point --pd="${PD_IP}:2379"
 ### Restore data using filters
 
-Starting from TiDB v9.0.0, you can use filters during PITR to restore specific databases or tables, enabling more fine-grained control over the data to be restored.
+Starting from TiDB v8.5.5 and v9.0.0, you can use filters during PITR to restore specific databases or tables, enabling more fine-grained control over the data to be restored.
 
 The filter patterns follow the same [table filtering syntax](/table-filter.md) as other BR operations:
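The filter-based PITR described above might look like the following sketch. The `--filter` flag follows BR's table filter syntax; the storage URIs and patterns are illustrative placeholders, not taken from the diff:

```shell
# Restore only db1.* and db2.table1 from the log backup (illustrative values).
tiup br restore point --pd="${PD_IP}:2379" \
    --storage='s3://backup-101/logbackup?access-key=${ACCESS_KEY}&secret-access-key=${SECRET_ACCESS_KEY}' \
    --full-backup-storage='s3://backup-101/snapshot?access-key=${ACCESS_KEY}&secret-access-key=${SECRET_ACCESS_KEY}' \
    --filter 'db1.*' \
    --filter 'db2.table1'
```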
@@ -557,7 +557,7 @@ tiup br restore point --pd="${PD_IP}:2379" \
 ### Concurrent restore operations
 
-Starting from TiDB v9.0.0, you can run multiple PITR restore tasks concurrently. This feature allows you to restore different datasets in parallel, improving efficiency for large-scale restore scenarios.
+Starting from TiDB v8.5.5 and v9.0.0, you can run multiple PITR restore tasks concurrently. This feature allows you to restore different datasets in parallel, improving efficiency for large-scale restore scenarios.
 
 Usage example for concurrent restores:

@@ -586,7 +586,7 @@ tiup br restore point --pd="${PD_IP}:2379" \
 ### Compatibility between ongoing log backup and snapshot restore
 
-Starting from v9.0.0, when a log backup task is running, if all of the following conditions are met, you can still perform snapshot restore (`br restore [full|database|table]`) and allow the restored data to be properly recorded by the ongoing log backup (hereinafter referred to as "log backup"):
+Starting from v8.5.5 and v9.0.0, when a log backup task is running, if all of the following conditions are met, you can still perform snapshot restore (`br restore [full|database|table]`) and allow the restored data to be properly recorded by the ongoing log backup (hereinafter referred to as "log backup"):
 
 - The node performing backup and restore operations has the following necessary permissions:
     - Read access to the external storage containing the backup source, for snapshot restore

@@ -604,11 +604,11 @@ If any of the above conditions are not met, you can restore the data by followin
 > **Note:**
 >
-> When restoring a log backup that contains records of snapshot (full) restore data, you must use BR v9.0.0 or later. Otherwise, restoring the recorded full restore data might fail.
+> When restoring a log backup that contains records of snapshot (full) restore data, you must use BR v8.5.5 or later. Otherwise, restoring the recorded full restore data might fail.
 
 ### Compatibility between ongoing log backup and PITR operations
 
-Starting from TiDB v9.0.0, you can perform PITR operations while a log backup task is running by default. The system automatically handles compatibility between these operations.
+Starting from TiDB v8.5.5 and v9.0.0, you can perform PITR operations while a log backup task is running by default. The system automatically handles compatibility between these operations.
 
 #### Important limitation for PITR with ongoing log backup
`br/br-snapshot-guide.md` (1 addition, 1 deletion)

@@ -153,7 +153,7 @@ When you perform a snapshot backup, BR backs up system tables as tables with the
 - Starting from BR v5.1.0, when you back up snapshots, BR automatically backs up the **system tables** in the `mysql` schema, but does not restore these system tables by default.
 - Starting from v6.2.0, BR lets you specify `--with-sys-table` to restore **data in some system tables**.
 - Starting from v7.6.0, BR enables `--with-sys-table` by default, which means that BR restores **data in some system tables** by default.
-- Starting from v9.0.0, BR lets you specify `--fast-load-sys-tables` to restore system tables physically. This approach uses the `RENAME TABLE` DDL statement to atomically swap the system tables in the `__TiDB_BR_Temporary_mysql` database with the system tables in the `mysql` database. Unlike the logical restoration of system tables using the `REPLACE INTO` SQL statement, physical restoration completely overwrites the existing data in the system tables.
+- Starting from v8.5.5 and v9.0.0, BR lets you specify `--fast-load-sys-tables` to restore system tables physically. This approach uses the `RENAME TABLE` DDL statement to atomically swap the system tables in the `__TiDB_BR_Temporary_mysql` database with the system tables in the `mysql` database. Unlike the logical restoration of system tables using the `REPLACE INTO` SQL statement, physical restoration completely overwrites the existing data in the system tables.
 
 **BR can restore data in the following system tables:**
`br/br-snapshot-manual.md` (3 additions, 3 deletions)

@@ -129,11 +129,11 @@ tiup br restore full \
 > **Note:**
 >
-> Starting from v9.0.0, when the `--load-stats` parameter is set to `false`, BR no longer writes statistics for the restored tables to the `mysql.stats_meta` table. After the restore is complete, you can manually execute the [`ANALYZE TABLE`](/sql-statements/sql-statement-analyze-table.md) SQL statement to update the relevant statistics.
+> Starting from v8.5.5 and v9.0.0, when the `--load-stats` parameter is set to `false`, BR no longer writes statistics for the restored tables to the `mysql.stats_meta` table. After the restore is complete, you can manually execute the [`ANALYZE TABLE`](/sql-statements/sql-statement-analyze-table.md) SQL statement to update the relevant statistics.
 
 When the backup and restore feature backs up data, it stores statistics in JSON format within the `backupmeta` file. When restoring data, it loads statistics in JSON format into the cluster. For more information, see [LOAD STATS](/sql-statements/sql-statement-load-stats.md).
 
-Starting from 9.0.0, BR introduces the `--fast-load-sys-tables` parameter, which is enabled by default. When restoring data to a new cluster using the `br` command-line tool, and the IDs of tables and partitions between the upstream and downstream clusters can be reused (otherwise, BR will automatically fall back to logically load statistics), enabling `--fast-load-sys-tables` lets BR to first restore the statistics-related system tables to the temporary system database `__TiDB_BR_Temporary_mysql`, and then atomically swap these tables with the corresponding tables in the `mysql` database using the `RENAME TABLE` statement.
+Starting from v8.5.5 and v9.0.0, BR introduces the `--fast-load-sys-tables` parameter, which is enabled by default. When restoring data to a new cluster using the `br` command-line tool, and the IDs of tables and partitions between the upstream and downstream clusters can be reused (otherwise, BR automatically falls back to loading statistics logically), enabling `--fast-load-sys-tables` lets BR first restore the statistics-related system tables to the temporary system database `__TiDB_BR_Temporary_mysql`, and then atomically swap these tables with the corresponding tables in the `mysql` database using the `RENAME TABLE` statement.
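A hedged sketch of the workflow the `--load-stats` note describes: restore without loading statistics, then refresh statistics manually. The hostnames, bucket names, and table names are placeholders:

```shell
# Restore without writing statistics to mysql.stats_meta.
tiup br restore full \
    --pd "${PD_IP}:2379" \
    --storage "s3://backup-bucket/backup-prefix" \
    --load-stats=false

# Afterwards, update statistics for the restored tables manually.
mysql -h "${TIDB_IP}" -P 4000 -u root -e "ANALYZE TABLE db1.table1;"
```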
`configure-store-limit.md` (1 addition, 1 deletion)

@@ -54,7 +54,7 @@ tiup ctl:v<CLUSTER_VERSION> pd store limit all 5 add-peer // All stores
 tiup ctl:v<CLUSTER_VERSION> pd store limit all 5 remove-peer // All stores can at most delete 5 peers per minute.
 ```
 
-Starting from v8.5.5 and v9.0.0, you can set the speed limit for removing-peers operations for all stores of a specific storage engine type, as shown in the following examples:
+Starting from v8.5.5 and v9.0.0, you can set the speed limit for removing-peer operations for all stores of a specific storage engine type, as shown in the following examples:
 
 ```bash
 tiup ctl:v<CLUSTER_VERSION> pd store limit all engine tikv 5 remove-peer // All TiKV stores can at most remove 5 peers per minute.
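Following the pattern of the example above, the engine-scoped limit should apply to other engine types as well. Whether `tiflash` is an accepted engine value here is an assumption, not stated in this diff:

```bash
tiup ctl:v<CLUSTER_VERSION> pd store limit all engine tiflash 5 remove-peer // All TiFlash stores can at most remove 5 peers per minute (assumed engine value).
```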
`system-variables.md` (3 additions, 3 deletions)

@@ -1775,7 +1775,7 @@ mysql> SELECT job_info FROM mysql.analyze_jobs ORDER BY end_time DESC LIMIT 1;
 - If `tidb_ddl_enable_fast_reorg` is set to `OFF`, `ADD INDEX` is executed as a transaction. If there are many update operations such as `UPDATE` and `REPLACE` in the target columns during the `ADD INDEX` execution, a larger batch size indicates a larger probability of transaction conflicts. In this case, it is recommended that you set the batch size to a smaller value. The minimum value is 32.
 - If the transaction conflict does not exist, or if `tidb_ddl_enable_fast_reorg` is set to `ON`, you can set the batch size to a large value. This makes data backfilling faster but also increases the write pressure on TiKV. For a proper batch size, you also need to refer to the value of `tidb_ddl_reorg_worker_cnt`. See [Interaction Test on Online Workloads and `ADD INDEX` Operations](https://docs.pingcap.com/tidb/dev/online-workloads-and-add-index-operations) for reference.
 - Starting from v8.3.0, this parameter is supported at the SESSION level. Modifying the parameter at the GLOBAL level will not impact currently running DDL statements. It will only apply to DDLs submitted in new sessions.
-- Starting from v8.5.0, you can modify this parameter for a running DDL job by executing `ADMIN ALTER DDL JOBS <job_id> BATCH_SIZE = <new_batch_size>;`.
+- Starting from v8.5.0, you can modify this parameter for a running DDL job by executing `ADMIN ALTER DDL JOBS <job_id> BATCH_SIZE = <new_batch_size>;`. For more information, see [`ADMIN ALTER DDL JOBS`](/sql-statements/sql-statement-admin-alter-ddl.md).
 
 ### tidb_ddl_reorg_priority

@@ -1851,7 +1851,7 @@ Assume that you have a cluster with 4 TiDB nodes and multiple TiKV nodes. In thi
 - Unit: Threads
 - This variable is used to set the concurrency of the DDL operation in the `re-organize` phase.
 - Starting from v8.3.0, this parameter is supported at the SESSION level. Modifying the parameter at the GLOBAL level will not impact currently running DDL statements. It will only apply to DDLs submitted in new sessions.
-- Starting from v8.5.0, you can modify this parameter for a running DDL job by executing `ADMIN ALTER DDL JOBS <job_id> THREAD = <new_thread_count>;`.
+- Starting from v8.5.0, you can modify this parameter for a running DDL job by executing `ADMIN ALTER DDL JOBS <job_id> THREAD = <new_thread_count>;`. For more information, see [`ADMIN ALTER DDL JOBS`](/sql-statements/sql-statement-admin-alter-ddl.md).
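The `ADMIN ALTER DDL JOBS` statements referenced above can be issued against a running DDL job, for example from the MySQL client. The job ID `123` and the values are placeholders; use `ADMIN SHOW DDL JOBS` to find the actual job ID:

```shell
# Find the running job ID first, then adjust its concurrency and batch size.
mysql -h "${TIDB_IP}" -P 4000 -u root -e "ADMIN SHOW DDL JOBS;"
mysql -h "${TIDB_IP}" -P 4000 -u root -e "ADMIN ALTER DDL JOBS 123 THREAD = 8;"
mysql -h "${TIDB_IP}" -P 4000 -u root -e "ADMIN ALTER DDL JOBS 123 BATCH_SIZE = 1024;"
```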
 ### `tidb_enable_fast_create_table` <span class="version-mark">New in v8.0.0</span>

@@ -6407,7 +6407,7 @@ For details, see [Identify Slow Queries](/identify-slow-queries.md).
 > - `PARALLEL` and `PARALLEL-FAST` modes are incompatible with [`tidb_tso_client_batch_max_wait_time`](#tidb_tso_client_batch_max_wait_time-new-in-v530) and [`tidb_enable_tso_follower_proxy`](#tidb_enable_tso_follower_proxy-new-in-v530). If either [`tidb_tso_client_batch_max_wait_time`](#tidb_tso_client_batch_max_wait_time-new-in-v530) is set to a non-zero value or [`tidb_enable_tso_follower_proxy`](#tidb_enable_tso_follower_proxy-new-in-v530) is enabled, configuring `tidb_tso_client_rpc_mode` does not take effect, and TiDB always works in `DEFAULT` mode.
 > - `PARALLEL` and `PARALLEL-FAST` modes are designed to reduce the average time for retrieving TS in TiDB. In situations with significant latency fluctuations, such as long-tail latency or latency spikes, these two modes might not provide any remarkable performance improvements.
 
-### tidb_cb_pd_metadata_error_rate_threshold_ratio <span class="version-mark">New in v9.0.0</span>
+### tidb_cb_pd_metadata_error_rate_threshold_ratio <span class="version-mark">New in v8.5.5 and v9.0.0</span>
`tidb-configuration-file.md` (1 addition, 1 deletion)

@@ -640,7 +640,7 @@ Configuration items related to performance.
 ### `enable-async-batch-get` <span class="version-mark">New in v8.5.5 and v9.0.0</span>
 
 + Controls whether TiDB uses asynchronous mode to execute the Batch Get operator. Using asynchronous mode can reduce goroutine overhead and provide better performance. Generally, there is no need to modify this configuration item.
-+ Default value: `true`
++ Default value: `true` for v9.0.0 and later versions. In v8.5.5, the default value is `false`.
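Based on the description above, this item belongs to the performance-related configuration of the TiDB configuration file. The `[performance]` section placement is inferred from the hunk header ("Configuration items related to performance"), so treat this fragment as a sketch:

```toml
[performance]
# Explicitly enable asynchronous Batch Get.
# This is already the default on v9.0.0 and later; on v8.5.5 the default is false.
enable-async-batch-get = true
```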