Merged
30 commits
2225bb9
v8.5.5 br: add variable `tidb_advancer_check_point_lag_limit` to cont…
ti-chi-bot Dec 29, 2025
dd233dc
v8.5.5: Add circuit breaker variable (#20129) (#22171)
ti-chi-bot Dec 29, 2025
dd0fb09
v8.5.5 br: add compact log backup (#20342) (#22209)
ti-chi-bot Dec 29, 2025
e9a4dd1
v8.5.5 br: add compatibility between log backup and PITR (#20485) (#2…
ti-chi-bot Dec 29, 2025
8e9d2d2
Merge branch 'release-8.5' into feature/preview-v8.5.5
qiancai Jan 6, 2026
3e6d7e6
v8.5.5 br: remove outdated PITR limitation (#22262) (#22264)
ti-chi-bot Jan 6, 2026
2e6ca82
v8.5.5 br: pitr filter feature release doc (#21109) (#22199)
ti-chi-bot Jan 6, 2026
17b506f
v8.5.5 restore: update the definition of the parameter --load-stats a…
ti-chi-bot Jan 6, 2026
15ee7a0
v8.5.5 br: support pitr filter and concurrent restore (#21835) (#22201)
ti-chi-bot Jan 6, 2026
f9a1834
v8.5.5 br: pitr restore mode (#21254) (#22238)
ti-chi-bot Jan 6, 2026
3d1c460
v8.5.5 br: provide a storage target option for BR restore checkpoint …
ti-chi-bot Jan 7, 2026
14f5dc5
v8.5.5 br: improve visualization of BR (#20493) (#22237)
ti-chi-bot Jan 8, 2026
846f118
Merge branch 'release-8.5' into feature/preview-v8.5.5
qiancai Jan 13, 2026
157a642
v8.5.5 br: add ddl job none error report (#22300) (#22308)
ti-chi-bot Jan 13, 2026
5d9eb98
v8.5.5: add config for graceful shutdown (#22158)
hujiatao0 Jan 13, 2026
d68ac51
Merge branch 'release-8.5' into feature/preview-v8.5.5
qiancai Jan 13, 2026
0a8c7a1
v8.5.5: Add index lookup push down content (#22196) (#22252)
ti-chi-bot Jan 13, 2026
fd00a09
v8.5.5: include storage engines in slow query logs and statements sum…
ti-chi-bot Jan 13, 2026
e33ffab
v8.5.5: add doc for async-batch-get (#22152) (#22311)
ti-chi-bot Jan 13, 2026
7c03ee9
Merge branch 'release-8.5' into feature/preview-v8.5.5
qiancai Jan 13, 2026
28c2a0d
v8.5.5 pd,tidb: support affinity schedule (#22270) (#22315)
ti-chi-bot Jan 14, 2026
d0f71b3
v8.5.5: add store limit support (#22297) (#22314)
ti-chi-bot Jan 14, 2026
d6c3e74
v8.5.5: Add unified.cpu-threshold config (#22167) (#22312)
ti-chi-bot Jan 14, 2026
e42b42f
v8.5.5 scheduler: network slow store scheduler enhancement (#22269) (…
ti-chi-bot Jan 14, 2026
da93fa6
v8.5: bump up the latest version to v8.5.5 (#22322)
qiancai Jan 14, 2026
91a88fa
Merge branch 'release-8.5' into feature/preview-v8.5.5
qiancai Jan 14, 2026
b5b1386
v8.5.5 br: add a new authentication method for Azure (#22267) (#22324)
ti-chi-bot Jan 14, 2026
2c276f6
Update upgrade-tidb-using-tiup.md
qiancai Jan 15, 2026
392f679
Update upgrade-tidb-using-tiup.md
qiancai Jan 15, 2026
e466dbd
Update upgrade-tidb-using-tiup.md
qiancai Jan 15, 2026
3 changes: 3 additions & 0 deletions TOC.md
Original file line number Diff line number Diff line change
@@ -246,6 +246,7 @@
- [Use Overview](/br/br-use-overview.md)
- [Snapshot Backup and Restore Guide](/br/br-snapshot-guide.md)
- [Log Backup and PITR Guide](/br/br-pitr-guide.md)
- [Compact Log Backup](/br/br-compact-log-backup.md)
- [Use Cases](/br/backup-and-restore-use-cases.md)
- [Backup Storages](/br/backup-and-restore-storages.md)
- BR CLI Manuals
@@ -880,6 +881,7 @@
- [`SET ROLE`](/sql-statements/sql-statement-set-role.md)
- [`SET TRANSACTION`](/sql-statements/sql-statement-set-transaction.md)
- [`SET <variable>`](/sql-statements/sql-statement-set-variable.md)
- [`SHOW AFFINITY`](/sql-statements/sql-statement-show-affinity.md)
- [`SHOW ANALYZE STATUS`](/sql-statements/sql-statement-show-analyze-status.md)
- [`SHOW [BACKUPS|RESTORES]`](/sql-statements/sql-statement-show-backups.md)
- [`SHOW BINDINGS`](/sql-statements/sql-statement-show-bindings.md)
@@ -997,6 +999,7 @@
- [Temporary Tables](/temporary-tables.md)
- [Cached Tables](/cached-tables.md)
- [FOREIGN KEY Constraints](/foreign-key.md)
- [Table-Level Data Affinity](/table-affinity.md)
- Character Set and Collation
- [Overview](/character-set-and-collation.md)
- [GBK](/character-set-gbk.md)
4 changes: 3 additions & 1 deletion best-practices/pd-scheduling-best-practices.md
@@ -296,7 +296,9 @@ If a TiKV node fails, PD defaults to setting the corresponding node to the **dow

In practice, if a node failure is considered unrecoverable, you can take the node offline immediately. This makes PD replenish replicas on another node sooner and reduces the risk of data loss. In contrast, if a node is considered recoverable but the recovery cannot be completed within 30 minutes, you can temporarily adjust `max-store-down-time` to a larger value to avoid unnecessary replica replenishment and waste of resources after the timeout.

In TiDB v5.2.0, TiKV introduces the mechanism of slow TiKV node detection. By sampling the requests in TiKV, this mechanism works out a score ranging from 1 to 100. A TiKV node with a score higher than or equal to 80 is marked as slow. You can add [`evict-slow-store-scheduler`](/pd-control.md#scheduler-show--add--remove--pause--resume--config--describe) to detect and schedule slow nodes. If only one TiKV is detected as slow, and the slow score reaches the limit (80 by default), the Leader in this node will be evicted (similar to the effect of `evict-leader-scheduler`).
Starting from TiDB v5.2.0, TiKV introduces a mechanism to detect slow-disk nodes. By sampling the requests in TiKV, this mechanism works out a score ranging from 1 to 100. A TiKV node with a score higher than or equal to 80 is marked as slow. You can add [`evict-slow-store-scheduler`](/pd-control.md#scheduler-show--add--remove--pause--resume--config--describe) to schedule slow nodes. If only one TiKV node is detected as slow, and its slow score reaches the limit (80 by default), the Leaders on that node will be evicted (similar to the effect of `evict-leader-scheduler`).

Starting from v8.5.5, TiKV introduces a mechanism to detect slow-network nodes. Similar to slow-disk node detection, this mechanism identifies slow nodes by probing network latency between TiKV nodes and calculating a score. You can enable this mechanism using [`enable-network-slow-store`](/pd-control.md#scheduler-config-evict-slow-store-scheduler).
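
If you manage schedulers with pd-ctl, enabling the detection might look like the following sketch (the PD address and TiDB version are placeholders, and the exact invocation assumes the `evict-slow-store-scheduler` configuration described in this document):

```shell
# Add the scheduler if it is not running yet
tiup ctl:v8.5.5 pd -u http://127.0.0.1:2379 scheduler add evict-slow-store-scheduler
# Turn on network-based slow store detection
tiup ctl:v8.5.5 pd -u http://127.0.0.1:2379 scheduler config evict-slow-store-scheduler set enable-network-slow-store true
```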

> **Note:**
>
1 change: 0 additions & 1 deletion br/backup-and-restore-overview.md
@@ -21,7 +21,6 @@ This section describes the prerequisites for using TiDB backup and restore, incl
### Restrictions

- PITR only supports restoring data to **an empty cluster**.
- PITR only supports cluster-level restore and does not support database-level or table-level restore.
- PITR does not support restoring the data of user tables or privilege tables from system tables.
- BR does not support running multiple backup tasks on a cluster **at the same time**.
- It is not recommended to back up tables that are being restored, because the backed-up data might be problematic.
60 changes: 60 additions & 0 deletions br/backup-and-restore-storages.md
@@ -202,6 +202,66 @@ You can configure the account used to access GCS by specifying the access key. I
--storage "azure://external/backup-20220915?account-name=${account-name}"
```

- Method 4: Use Azure managed identities

Starting from v8.5.5, if your TiDB cluster and BR are running in an Azure Virtual Machine (VM) or Azure Kubernetes Service (AKS) environment and Azure managed identities have been assigned to the nodes, you can use Azure managed identities for authentication.

Before using this method, ensure that you have granted the corresponding managed identity the required permissions (such as the `Storage Blob Data Contributor` role) on the target storage account in the [Azure Portal](https://azure.microsoft.com/).

- **System-assigned managed identity**:

When using a system-assigned managed identity, there is no need to configure any Azure-related environment variables. You can run the BR backup command directly.

```shell
tiup br backup full -u "${PD_IP}:2379" \
--storage "azure://external/backup-20220915?account-name=${account-name}"
```

> **Note:**
>
> Ensure that the `AZURE_CLIENT_ID`, `AZURE_TENANT_ID`, and `AZURE_CLIENT_SECRET` environment variables are **not** set in the runtime environment. Otherwise, the Azure SDK might prioritize other authentication methods, preventing the managed identity from taking effect.

- **User-assigned managed identity**:

When using a user-assigned managed identity, you need to set the `AZURE_CLIENT_ID` environment variable to the client ID of the managed identity in the runtime environment of TiKV and BR, and then run the BR backup command. The detailed steps are as follows:

1. Configure the client ID for TiKV deployed using TiUP:

The following steps use the TiKV port `24000` and the systemd service name `tikv-24000` as an example:

1. Open the systemd service editor by running the following command:

```shell
systemctl edit tikv-24000
```

2. Set the `AZURE_CLIENT_ID` environment variable to your managed identity client ID:

```ini
[Service]
Environment="AZURE_CLIENT_ID=<your-client-id>"
```

3. Reload the systemd configuration and restart TiKV:

```shell
systemctl daemon-reload
systemctl restart tikv-24000
```

2. Configure the `AZURE_CLIENT_ID` environment variable for BR:

```shell
export AZURE_CLIENT_ID="<your-client-id>"
```

3. Back up data to Azure Blob Storage using the following BR command:

```shell
tiup br backup full -u "${PD_IP}:2379" \
--storage "azure://external/backup-20220915?account-name=${account-name}"
```

</div>
</SimpleTab>

4 changes: 3 additions & 1 deletion br/backup-and-restore-use-cases.md
@@ -144,7 +144,9 @@ tiup br restore point --pd="${PD_IP}:2379" \
--full-backup-storage='s3://tidb-pitr-bucket/backup-data/snapshot-20220514000000' \
--restored-ts '2022-05-15 18:00:00+0800'

Full Restore <--------------------------------------------------------------------------------------------------------------------------------------------------------> 100.00%
Split&Scatter Region <--------------------------------------------------------------------------------------------------------------------------------------------------------> 100.00%
Download&Ingest SST <--------------------------------------------------------------------------------------------------------------------------------------------------------> 100.00%
Restore Pipeline <--------------------------------------------------------------------------------------------------------------------------------------------------------> 100.00%
[2022/05/29 18:15:39.132 +08:00] [INFO] [collector.go:69] ["Full Restore success summary"] [total-ranges=12] [ranges-succeed=xxx] [ranges-failed=0] [split-region=xxx.xxxµs] [restore-ranges=xxx] [total-take=xxx.xxxs] [restore-data-size(after-compressed)=xxx.xxx] [Size=xxxx] [BackupTS={TS}] [total-kv=xxx] [total-kv-size=xxx] [average-speed=xxx]
Restore Meta Files <--------------------------------------------------------------------------------------------------------------------------------------------------> 100.00%
Restore KV Files <----------------------------------------------------------------------------------------------------------------------------------------------------> 100.00%
82 changes: 78 additions & 4 deletions br/br-checkpoint-restore.md
@@ -15,7 +15,7 @@ If your TiDB cluster is large and cannot afford to restore again after a failure

## Implementation principles

The implementation of checkpoint restore is divided into two parts: snapshot restore and log restore. For more information, see [Implementation details](#implementation-details).
The implementation of checkpoint restore is divided into two parts: snapshot restore and log restore. For more information, see [Implementation details: store checkpoint data in the downstream cluster](#implementation-details-store-checkpoint-data-in-the-downstream-cluster) and [Implementation details: store checkpoint data in the external storage](#implementation-details-store-checkpoint-data-in-the-external-storage).

### Snapshot restore

@@ -65,7 +65,11 @@ After a restore failure, avoid writing, deleting, or creating tables in the clus

Resuming checkpoint restore across major versions is not recommended. If a `br` restore fails on a Long-Term Support (LTS) version earlier than v8.5.0, the restore cannot be resumed using v8.5.0 or a later LTS version, and vice versa.

## Implementation details
## Implementation details: store checkpoint data in the downstream cluster

> **Note:**
>
> Starting from v8.5.5, BR stores checkpoint data in the downstream cluster by default. You can specify an external storage for checkpoint data using the `--checkpoint-storage` parameter.

Checkpoint restore operations are divided into two parts: snapshot restore and PITR restore.

@@ -81,8 +85,78 @@ If the restore fails and you try to restore backup data with different checkpoin

[PITR (Point-in-time recovery)](/br/br-pitr-guide.md) consists of snapshot restore and log restore phases.

During the initial restore, `br` first enters the snapshot restore phase. This phase follows the same process as the preceding [snapshot restore](#snapshot-restore-1): BR records the checkpoint data, the upstream cluster ID, and BackupTS of the backup data (that is, the start time point `start-ts` of log restore) in the `__TiDB_BR_Temporary_Snapshot_Restore_Checkpoint` database. If restore fails during this phase, you cannot adjust the `start-ts` of log restore when resuming checkpoint restore.
During the initial restore, `br` first enters the snapshot restore phase. BR records the checkpoint data, the upstream cluster ID, the BackupTS of the backup data (that is, the start time point `start-ts` of log restore), and the restored time point `restored-ts` of log restore in the `__TiDB_BR_Temporary_Snapshot_Restore_Checkpoint` database. If the restore fails during this phase, you cannot adjust the `start-ts` or `restored-ts` of log restore when resuming checkpoint restore.

When entering the log restore phase during the initial restore, `br` creates a `__TiDB_BR_Temporary_Log_Restore_Checkpoint` database in the target cluster. This database records checkpoint data, the upstream cluster ID, and the restore time range (`start-ts` and `restored-ts`). If restore fails during this phase, you need to specify the same `start-ts` and `restored-ts` as recorded in the checkpoint database when retrying. Otherwise, `br` will report an error and prompt that the current specified restore time range or upstream cluster ID is different from the checkpoint record. If the restore cluster has been cleaned, you can manually delete the `__TiDB_BR_Temporary_Log_Restore_Checkpoint` database and retry with a different backup.

Before entering the log restore phase during the initial restore, `br` constructs a mapping of upstream and downstream cluster database and table IDs at the `restored-ts` time point. This mapping is persisted in the system table `mysql.tidb_pitr_id_map` to prevent duplicate allocation of database and table IDs. Deleting data from `mysql.tidb_pitr_id_map` might lead to inconsistent PITR restore data.
Note that before entering the log restore phase during the initial restore, `br` constructs a mapping of the upstream and downstream cluster database and table IDs at the `restored-ts` time point. This mapping is persisted in the system table `mysql.tidb_pitr_id_map` to prevent duplicate allocation of database and table IDs. **Do not delete data from `mysql.tidb_pitr_id_map` arbitrarily, because doing so might lead to inconsistent PITR restore data.**

> **Note:**
>
> To ensure compatibility with clusters of earlier versions, starting from v8.5.5, if the system table `mysql.tidb_pitr_id_map` does not exist in the restore cluster, the `pitr_id_map` data will be written to the log backup directory. The file name is `pitr_id_maps/pitr_id_map.cluster_id:{downstream-cluster-ID}.restored_ts:{restored-ts}`.
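
To see which checkpoint databases `br` has created in the downstream cluster, you can query them with any MySQL client. The following is a sketch (the host and credentials are placeholders; the database name prefix is the one described above):

```shell
mysql -h "${TIDB_HOST}" -P 4000 -u root \
    -e "SHOW DATABASES LIKE '__TiDB_BR_Temporary%'"
```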

## Implementation details: store checkpoint data in the external storage

> **Note:**
>
> Starting from v8.5.5, BR stores checkpoint data in the downstream cluster by default. You can specify an external storage for checkpoint data using the `--checkpoint-storage` parameter. For example:
>
> ```shell
> ./br restore full -s "s3://backup-bucket/backup-prefix" --checkpoint-storage "s3://temp-bucket/checkpoints"
> ```

In the external storage, the directory structure of the checkpoint data is as follows:

- Root path `restore-{downstream-cluster-ID}` uses the downstream cluster ID `{downstream-cluster-ID}` to distinguish between different restore clusters.
- Path `restore-{downstream-cluster-ID}/log` stores log file checkpoint data during the log restore phase.
- Path `restore-{downstream-cluster-ID}/sst` stores checkpoint data of the SST files that are not backed up by log backup during the log restore phase.
- Path `restore-{downstream-cluster-ID}/snapshot` stores checkpoint data during the snapshot restore phase.

```
.
`-- restore-{downstream-cluster-ID}
|-- log
| |-- checkpoint.meta
| |-- data
| | |-- {uuid}.cpt
| | |-- {uuid}.cpt
| | `-- {uuid}.cpt
| |-- ingest_index.meta
| `-- progress.meta
|-- snapshot
| |-- checkpoint.meta
| |-- checksum
| | |-- {uuid}.cpt
| | |-- {uuid}.cpt
| | `-- {uuid}.cpt
| `-- data
| |-- {uuid}.cpt
| |-- {uuid}.cpt
| `-- {uuid}.cpt
`-- sst
`-- checkpoint.meta
```
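
If the restore cluster has been cleaned and you need to discard this checkpoint data manually, you can operate on the path directly in the external storage. A sketch for Amazon S3 (the bucket names and prefix are placeholders):

```shell
# List the checkpoint data for a given downstream cluster
aws s3 ls "s3://temp-bucket/checkpoints/restore-${CLUSTER_ID}/" --recursive
# Delete it so that a different backup can be restored with fresh checkpoints
aws s3 rm "s3://temp-bucket/checkpoints/restore-${CLUSTER_ID}/" --recursive
```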

Checkpoint restore operations are divided into two parts: snapshot restore and PITR restore.

### Snapshot restore

During the initial restore, `br` creates a `restore-{downstream-cluster-ID}/snapshot` path in the specified external storage. In this path, `br` records checkpoint data, the upstream cluster ID, and the BackupTS of the backup data.

If the restore fails, you can retry it using the same command. `br` will automatically read the checkpoint information from the specified external storage path and resume from the last restore point.

If the restore fails and you try to restore backup data with different checkpoint information to the same cluster, `br` reports an error. It indicates that the current upstream cluster ID or BackupTS is different from the checkpoint record. If the restore cluster has been cleaned, you can manually clean up the checkpoint data in the external storage or specify another external storage path to store checkpoint data, and retry with a different backup.

### PITR restore

[PITR (Point-in-time recovery)](/br/br-pitr-guide.md) consists of snapshot restore and log restore phases.

During the initial restore, `br` first enters the snapshot restore phase. BR records the checkpoint data, the upstream cluster ID, the BackupTS of the backup data (that is, the start time point `start-ts` of log restore), and the restored time point `restored-ts` of log restore in the `restore-{downstream-cluster-ID}/snapshot` path. If the restore fails during this phase, you cannot adjust the `start-ts` or `restored-ts` of log restore when resuming checkpoint restore.

When entering the log restore phase during the initial restore, `br` creates a `restore-{downstream-cluster-ID}/log` path in the specified external storage. This path records checkpoint data, the upstream cluster ID, and the restore time range (`start-ts` and `restored-ts`). If the restore fails during this phase, you need to specify the same `start-ts` and `restored-ts` as recorded in the checkpoint data when retrying. Otherwise, `br` reports an error indicating that the currently specified restore time range or upstream cluster ID differs from the checkpoint record. If the restore cluster has been cleaned, you can manually clean up the checkpoint data in the external storage or specify another external storage path for checkpoint data, and retry with a different backup.
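
When retrying after a failure in this phase, you can rerun the same command with the same parameters. The following is an illustrative sketch based on the restore commands shown in this documentation (the storage URIs and time point are placeholders):

```shell
tiup br restore point --pd "${PD_IP}:2379" \
    --storage "s3://backup-bucket/log-backup-prefix" \
    --full-backup-storage "s3://backup-bucket/snapshot-prefix" \
    --restored-ts "2022-05-15 18:00:00+0800" \
    --checkpoint-storage "s3://temp-bucket/checkpoints"
```

`br` reads the checkpoint data under the specified `--checkpoint-storage` path and resumes from the last restore point.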

Note that before entering the log restore phase during the initial restore, `br` constructs a mapping of the database and table IDs in the upstream and downstream clusters at the `restored-ts` time point. This mapping is persisted in the checkpoint storage with the file name `pitr_id_maps/pitr_id_map.cluster_id:{downstream-cluster-ID}.restored_ts:{restored-ts}` to prevent duplicate allocation of database and table IDs. **Do not delete files from the `pitr_id_maps` directory arbitrarily, because doing so might lead to inconsistent PITR restore data.**

> **Note:**
>
> To ensure compatibility with clusters of earlier versions, starting from v8.5.5, if the system table `mysql.tidb_pitr_id_map` does not exist in the restore cluster and the `--checkpoint-storage` parameter is not specified, the `pitr_id_map` data will be written to the log backup directory. The file name is `pitr_id_maps/pitr_id_map.cluster_id:{downstream-cluster-ID}.restored_ts:{restored-ts}`.
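
Because the `pitr_id_map` file name above is fully determined by the downstream cluster ID and `restored-ts`, you can compute the expected path in advance, for example when auditing a backup directory. A minimal sketch (the two values are placeholders):

```shell
downstream_cluster_id=7164898528498225513
restored_ts=437058404096409605
# Construct the file name following the scheme described above
echo "pitr_id_maps/pitr_id_map.cluster_id:${downstream_cluster_id}.restored_ts:${restored_ts}"
```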