en/backup-restore-cr.md (1 addition, 45 deletions)
@@ -5,7 +5,7 @@ summary: Learn the fields in the Backup and Restore custom resources (CR).
# Backup and Restore Custom Resources
-This document describes the fields in the `Backup`, `CompactBackup`, `Restore`, and `BackupSchedule` custom resources (CR). You can use these fields to better perform the backup or restore of TiDB clusters on Kubernetes.
+This document describes the fields in the `Backup` and `Restore` custom resources (CR). You can use these fields to better perform the backup or restore of TiDB clusters on Kubernetes.
## Backup CR fields
@@ -193,32 +193,6 @@ To back up data for a TiDB cluster on Kubernetes, you can create a `Backup` cust
* `.spec.local.volume`: the persistent volume configuration.
* `.spec.local.volumeMount`: the persistent volume mount configuration.
-
-## CompactBackup CR fields
-
-For TiDB v9.0.0 and later versions, you can use `CompactBackup` to accelerate PITR (Point-in-time recovery). To compact log backup data into structured SST files, you can create a custom `CompactBackup` CR object to define a backup task. The following introduces the fields in the `CompactBackup` CR:
-
-* `.spec.startTs`: the start timestamp for log compaction backup.
-* `.spec.endTs`: the end timestamp for log compaction backup.
-* `.spec.concurrency`: the maximum number of concurrent log compaction tasks. The default value is `4`.
-* `.spec.maxRetryTimes`: the maximum number of retries for failed compaction tasks. The default value is `6`.
-* `.spec.toolImage`: the tool image used by `CompactBackup`. BR is the only tool image used in `CompactBackup`. When using BR for backup, you can specify the BR version with this field:
-    - If not specified or left empty, the `pingcap/br:${tikv_version}` image is used for backup by default.
-    - If a BR version is specified, such as `.spec.toolImage: pingcap/br:v9.0.0`, the image of the specified version is used for backup.
-    - If an image is specified without a version, such as `.spec.toolImage: private/registry/br`, the `private/registry/br:${tikv_version}` image is used for backup.
-
-* `.spec.env`: the environment variables for the Pod that runs the compaction task.
-* `.spec.affinity`: the affinity configuration for the Pod that runs the compaction task. For details on affinity, refer to [Affinity and anti-affinity](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity).
-* `.spec.tolerations`: specifies that the Pod that runs the compaction task can schedule onto nodes with matching [taints](https://kubernetes.io/docs/reference/glossary/?all=true#term-taint). For details on taints and tolerations, refer to [Taints and Tolerations](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/).
-* `.spec.podSecurityContext`: the security context configuration for the Pod that runs the compaction task, which allows the Pod to run as a non-root user. For details on `podSecurityContext`, refer to [Run Containers as a Non-root User](containers-run-as-non-root-user.md).
-* `.spec.priorityClassName`: the name of the priority class for the Pod that runs the compaction task, which sets priority for the Pod. For details on priority classes, refer to [Pod Priority and Preemption](https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/).
-* `.spec.imagePullSecrets`: the [imagePullSecrets](https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod) for the Pod that runs the compaction task.
-* `.spec.serviceAccount`: the name of the ServiceAccount used for the compaction task.
-* `.spec.useKMS`: whether to use AWS-KMS to decrypt the S3 storage key used for the backup.
-* `.spec.br`: BR-related configuration. For more information, refer to [BR fields](#br-fields).
-* `.spec.s3`: S3-related configuration. For more information, refer to [S3 storage fields](#s3-storage-fields).
-* `.spec.gcs`: GCS-related configuration. For more information, refer to [GCS fields](#gcs-fields).
-* `.spec.azblob`: Azure Blob Storage-related configuration. For more information, refer to [Azure Blob Storage fields](#azure-blob-storage-fields).
-
## Restore CR fields
To restore data to a TiDB cluster on Kubernetes, you can create a `Restore` CR object. For detailed restore process, refer to documents listed in [Restore data](backup-restore-overview.md#restore-data). This section introduces the fields in the `Restore` CR.
@@ -258,21 +232,3 @@ To restore data to a TiDB cluster on Kubernetes, you can create a `Restore` CR o
* `.spec.gcs`: GCS-related configuration. Refer to [GCS fields](#gcs-fields).
* `.spec.local`: persistent volume-related configuration. Refer to [Local storage fields](#local-storage-fields).
-
-## BackupSchedule CR fields
-
-The `backupSchedule` configuration consists of three parts: the configuration of the snapshot backup `backupTemplate`, the configuration of the log backup `logBackupTemplate`, and the unique configuration of `backupSchedule`.
-
-* `backupTemplate`: the configuration of the snapshot backup. Specifies the configuration related to the cluster and remote storage of the snapshot backup, which is the same as the `spec` configuration of [the `Backup` CR](#backup-cr-fields).
-* `logBackupTemplate`: the configuration of the log backup. Specifies the configuration related to the cluster and remote storage of the log backup, which is the same as the `spec` configuration of [the `Backup` CR](#backup-cr-fields). The log backup is created and deleted along with `backupSchedule` and recycled according to `.spec.maxReservedTime`. The log backup name is saved in `status.logBackup`.
-* `compactBackupTemplate`: the configuration template of the log compaction backup. The fields are the same as those in the `spec` configuration of [the `CompactBackup` CR](#compactbackup-cr-fields). The compaction backup is created and deleted along with `backupSchedule`. The log backup names are stored in `status.logBackup`. The storage settings of the compaction backup should be the same as that of `logBackupTemplate` in the same `backupSchedule`.
-
-    > **Note:**
-    >
-    > Before you delete the log backup data, you need to stop the log backup task. Otherwise, because the log backup task in TiKV is not stopped, resources are wasted and the log backup task might fail to restart in the future.
-
-* The unique configuration items of `backupSchedule` are as follows:
-    * `.spec.maxBackups`: a backup retention policy, which determines the maximum number of backup files to be retained. When the number of backup files exceeds this value, the outdated backup file will be deleted. If you set this field to `0`, all backup items are retained.
-    * `.spec.maxReservedTime`: a backup retention policy based on time. For example, if you set the value of this field to `24h`, only backup files within the recent 24 hours are retained. All backup files older than this value are deleted. For the time format, refer to [`func ParseDuration`](https://golang.org/pkg/time/#ParseDuration). If you have set `.spec.maxBackups` and `.spec.maxReservedTime` at the same time, the latter takes effect.
-    * `.spec.schedule`: the time scheduling format of Cron. Refer to [Cron](https://en.wikipedia.org/wiki/Cron) for details.
-    * `.spec.pause`: `false` by default. If this field is set to `true`, scheduled scheduling is paused. In this situation, the backup operation is not performed even when the scheduling time point is reached, but backup garbage collection still runs normally. If you change `true` to `false`, the scheduled snapshot backup process restarts. Because log backup does not currently support pause, this configuration does not take effect for log backup.
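For context, the `backupSchedule` fields described above combine into a manifest along these lines. This is a sketch only: the `apiVersion`, metadata, and values are illustrative assumptions.

```yaml
# Hypothetical BackupSchedule manifest (TiDB Operator v1.x style);
# apiVersion, names, and values are assumptions for illustration.
apiVersion: pingcap.com/v1alpha1
kind: BackupSchedule
metadata:
  name: demo-backup-schedule
  namespace: tidb-cluster
spec:
  schedule: "0 0 * * *"        # Cron format: daily at midnight
  maxReservedTime: "24h"       # time-based retention (Go ParseDuration format)
  pause: false                 # set true to pause scheduled snapshot backups
  backupTemplate:              # same fields as the Backup CR spec
    br:
      cluster: basic
    s3:
      provider: aws
      region: us-west-2
      bucket: my-backup-bucket
      prefix: scheduled-backup
```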
en/backup-restore-overview.md (8 additions, 5 deletions)
@@ -39,28 +39,31 @@ To recover the SST files exported by BR to a TiDB cluster, use BR. For more info
## Backup and restore process
-To make a backup of the TiDB cluster on Kubernetes, you need to create a [`Backup` CR](backup-restore-cr.md#backup-cr-fields) object to describe the backup or create a [`BackupSchedule` CR](backup-restore-cr.md#backupschedule-cr-fields) object to describe a scheduled backup.
+To make a backup of the TiDB cluster on Kubernetes, you need to create a [`Backup` CR](backup-restore-cr.md#backup-cr-fields) object to describe the backup.
+
+> **Warning:**
+>
+> Currently, TiDB Operator v2 does not support the `BackupSchedule` CR. To perform scheduled snapshot backups, scheduled log backups, or scheduled compact log backups, use TiDB Operator v1.x and see [BackupSchedule CR fields](https://docs.pingcap.com/tidb-in-kubernetes/v1.6/backup-restore-cr/#backupschedule-cr-fields).
To restore data to the TiDB cluster on Kubernetes, you need to create a [`Restore` CR](backup-restore-cr.md#restore-cr-fields) object to describe the restore.
After creating the CR object, according to your configuration, TiDB Operator chooses the corresponding tool and performs the backup or restore.
## Delete the Backup CR
-You can delete the `Backup` CR or `BackupSchedule` CR by running the following commands:
+You can delete the `Backup` CR by running the following commands:
If you set the value of `spec.cleanPolicy` to `Delete`, TiDB Operator cleans the backup data when it deletes the CR.
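As a sketch of where that setting lives, a `Backup` manifest fragment might look like the following. The `apiVersion`, metadata, and storage values are illustrative assumptions, not taken from this document.

```yaml
# Hypothetical Backup manifest fragment showing cleanPolicy.
apiVersion: pingcap.com/v1alpha1   # assumed API group/version
kind: Backup
metadata:
  name: demo-backup
  namespace: tidb-cluster
spec:
  cleanPolicy: Delete              # backup data is cleaned when this CR is deleted
  br:
    cluster: basic
  s3:
    provider: aws
    region: us-west-2
    bucket: my-backup-bucket
    prefix: demo-backup
```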
TiDB Operator automatically attempts to stop running log backup tasks when you delete the Custom Resource (CR). This automatic stop feature only applies to log backup tasks that are running normally and does not handle tasks in an error or failed state.
-In such cases, if you need to delete the namespace, it is recommended that you first delete all the `Backup` or `BackupSchedule` CRs and then delete the namespace.
+In such cases, if you need to delete the namespace, it is recommended that you first delete the `Backup` CR and then delete the namespace.
-If you delete the namespace before you delete the `Backup` or `BackupSchedule` CR, TiDB Operator will keep creating jobs to clean the backup data. However, because the namespace is in `Terminating` state, TiDB Operator fails to create such a job, which causes the namespace to be stuck in this state.
+If you delete the namespace before you delete the `Backup` CR, TiDB Operator will keep creating jobs to clean the backup data. However, because the namespace is in `Terminating` state, TiDB Operator fails to create such a job, which causes the namespace to be stuck in this state.
To address this issue, delete `finalizers` by running the following command: