diff --git a/docs/TROUBLESHOOTING.md b/docs/TROUBLESHOOTING.md index be929850eb..070f9d730e 100644 --- a/docs/TROUBLESHOOTING.md +++ b/docs/TROUBLESHOOTING.md @@ -12,7 +12,6 @@ If you need help, first search if there is [already an issue filed](https://issu 1. [Debugging OpenShift Virtualization backup/restore](virtualization_troubleshooting.md) 1. [Debugging OADP Self Service](self-service_troubleshooting.md) 1. [Deleting Backups](#deleting-backups) -1. [Debugging Data Mover (OADP 1.2 or below)](https://github.com/migtools/volume-snapshot-mover/blob/master/docs/troubleshooting.md) 1. [OpenShift ROSA STS and OADP installation](https://github.com/rh-mobb/documentation/blob/main/content/docs/misc/oadp/rosa-sts/_index.md) 1. [Common Issues and Misconfigurations](#common-issues-and-misconfigurations) - [Credentials Not Properly Formatted](#credentials-secret-not-properly-formatted) @@ -36,10 +35,6 @@ If you need help, first search if there is [already an issue filed](https://issu ``` oc logs -f deploy/velero -n openshift-adp ``` - - If Data Mover (OADP 1.2 or below) is enabled, check the volume-snapshot-logs - ``` - oc logs -f deployment.apps/volume-snapshot-mover -n openshift-adp - ``` 1. Velero commands - Alias the velero command: @@ -77,10 +72,6 @@ This section includes how to debug a failed restore. For more specific issues re ``` oc logs -f deployment.apps/velero -n openshift-adp ``` - If Data Mover (OADP 1.2 or below) is enabled, check the volume-snapshot-logs - ``` - oc logs -f deployment.apps/volume-snapshot-mover -n openshift-adp - ``` 1. 
Velero commands - Alias the velero command: @@ -250,40 +241,11 @@ oc delete backuprepository -n openshift-adp ### Issue with Backup/Restore of DeploymentConfig with volumes or restore hooks -- (OADP 1.3+) **Error:** `DeploymentConfigs restore with spec.Replicas==0 or DC pods fail to restart if they crash if using DC with volumes or restore hooks` - - **Solution:** - - Solution is the same as in the (OADP 1.1+), except it applies to the use case if you are restoring DeploymentConfigs and have either volumes or post-restore hooks regardless of the backup method. - -- (OADP 1.1+) **Error:** `DeploymentConfigs restore with spec.Replicas==0 or DC pods fail to restart if they crash if using Restic/Kopia restores or restore hooks` - - **Solution:** - - This is expected behavior on restore if you are restoring DeploymentConfigs and are either using Restic or Kopia for volume restore or you have post-restore hooks. The pod and DC plugins make these modifications to ensure that Restic or Kopia and hooks work properly, and [dc-post-restore.sh](../docs/scripts/dc-post-restore.sh) should have been run immediately after a successful restore. Usage for this script is `dc-post-restore.sh ` - -- (OADP 1.0.z) **Error:** `Using Restic as backup method causes PartiallyFailed/Failed errors in the Restore or post-restore hooks fail to execute` +- **Error:** `DeploymentConfigs restore with spec.Replicas==0 or DC pods fail to restart if they crash if using DC with volumes or restore hooks` **Solution:** - The changes in the backup/restore process for mitigating this error would be a two step restore process where, in the first step we would perform a restore excluding the replicationcontroller and deploymentconfig resources, and the second step would involve a restore including these resources. The backup and restore commands are given below for more clarity. 
(The examples given below are a use case for backup/restore of a target namespace, for other cases a similar strategy can be followed). - - Please note that this is a temporary fix for this issue and there are ongoing discussions to solve it. - - Step 1: Initiate the backup as any normal backup for restic. - ``` - velero create backup -n openshift-adp --include-namespaces= - ``` - - Step 2: Initiate a restore excluding the replicationcontroller and deploymentconfig resources. - ``` - velero restore create --from-backup= -n openshift-adp --include-namespaces --exclude-resources replicationcontroller,deploymentconfig,templateinstances.template.openshift.io --restore-volumes=true - ``` - - Step 3: Initiate a restore including the replicationcontroller and deploymentconfig resources. - ``` - velero restore create --from-backup= -n openshift-adp --include-namespaces --include-resources replicationcontroller,deploymentconfig,templateinstances.template.openshift.io --restore-volumes=true - ``` + This is expected behavior on restore if you are restoring DeploymentConfigs and have either volumes or post-restore hooks. The pod and DC plugins make these modifications to ensure that Restic or Kopia and hooks work properly, and [dc-post-restore.sh](../docs/scripts/dc-post-restore.sh) should have been run immediately after a successful restore. 
Usage for this script is `dc-post-restore.sh ` ### New Restic Backup Partially Failing After Clearing Bucket diff --git a/docs/config/plugins.md b/docs/config/plugins.md index d0b318e6dd..cdc195d399 100644 --- a/docs/config/plugins.md +++ b/docs/config/plugins.md @@ -18,7 +18,6 @@ installing Velero: - `OpenShift` [OpenShift Velero Plugin](https://github.com/openshift/openshift-velero-plugin) - `CSI` [Plugins for CSI](https://github.com/vmware-tanzu/velero-plugin-for-csi) - `kubevirt` [Plugins for Kubevirt](https://github.com/kubevirt/kubevirt-velero-plugin) - - `VSM (OADP 1.2 or below)` [Plugin for Volume-Snapshot-Mover](https://github.com/migtools/velero-plugin-for-vsm) Note that only one of `AWS` and `Legacy AWS` may be installed at the same time. `Legacy AWS` is intended for use with certain S3 providers that do not support the V2 AWS SDK APIs used in the `AWS` plugin. diff --git a/docs/credentials.md b/docs/credentials.md index 00465beb4d..f5ccdf1546 100644 --- a/docs/credentials.md +++ b/docs/credentials.md @@ -10,7 +10,6 @@ 1. [BSL and VSL share credentials for one provider](#backupstoragelocation-and-volumesnapshotlocation-share-credentials-for-one-provider) 2. [BSL and VSL use the same provider but use different credentials](#backupstoragelocation-and-volumesnapshotlocation-use-the-same-provider-but-use-different-credentials) 3. [No BSL specified but the plugin for the provider exists](#no-backupstoragelocation-specified-but-the-plugin-for-the-provider-exists) -5. [Creating a Secret: OADP with VolumeSnapshotMover](#creating-a-secret-for-volumesnapshotmover) ### Creating a Secret for OADP @@ -214,22 +213,3 @@ spec: If you don't need volumesnapshotlocation, you will not need to create a VSL credentials. If you need `VolumeSnapshotLocation`, regardless of the `noDefaultBackupLocation` setting, you will need a to create VSL credentials. - - -### Creating a Secret for volumeSnapshotMover (OADP 1.2 or below) - -VolumeSnapshotMover requires a restic secret. 
It can be configured as so: - -``` -apiVersion: v1 -kind: Secret -metadata: - name: -type: Opaque -stringData: - # The repository encryption key - RESTIC_PASSWORD: my-secure-restic-password -``` - -- *Note:* `dpa.spec.features.dataMover.credentialName` must match the name of the secret. - Otherwise it will default to the name `dm-credential`. diff --git a/docs/examples/data_mover.md b/docs/examples/data_mover.md deleted file mode 100644 index 67f7ef360a..0000000000 --- a/docs/examples/data_mover.md +++ /dev/null @@ -1,141 +0,0 @@ -

Stateful Application Backup/Restore - VolumeSnapshotMover (OADP 1.2 or below)
-Relocate Snapshots into your Object Storage Location
-
-Background Information:
-
- -OADP Data Mover enables customers to back up container storage interface (CSI) volume snapshots to a remote object store. When Data Mover is enabled, you can restore stateful applications from the store if a failure, accidental deletion, or corruption of the cluster occurs. OADP Data Mover solution uses the Restic option of VolSync.

- -- The official OpenShift OADP Data Mover documentation can be found [here](https://docs.openshift.com/container-platform/4.12/backup_and_restore/application_backup_and_restore/backing_up_and_restoring/backing-up-applications.html#oadp-using-data-mover-for-csi-snapshots_backing-up-applications) -- We maintain an up to date FAQ page [here](https://access.redhat.com/articles/5456281) -- Note: Data Mover is a tech preview feature in OADP 1.1.x. Data Mover is planned to be fully supported by Red Hat in the OADP 1.2.0 release. -- Note: We recommend customers using OADP 1.2.x Data Mover to backup and restore ODF CephFS volumes, upgrade or install OCP 4.12 for improved performance. OADP Data Mover can leverage CephFS shallow volumes in OCP 4.12+ which based on our testing improves the performance of backup times. - - [CephFS ROX details](https://issues.redhat.com/browse/RHSTOR-4287) - - [Provisioning and mounting CephFS snapshot-backed volumes](https://github.com/ceph/ceph-csi/blob/devel/docs/cephfs-snapshot-backed-volumes.md) - -

Prerequisites:
- -
- -- Have a stateful application running in a separate namespace. - -- Follow instructions for installing the OADP operator and creating an -appropriate `volumeSnapshotClass` and `storageClass`found [here](/docs/examples/CSI/csi_example.md). - -- Install the VolSync operator using OLM. - -Note: For OADP 1.2 you are not required to annotate the openshift-adp namespace (OADP Operator install namespace) with `volsync.backube/privileged-movers='true'`. This action -will be automatically performed by the Operator when the datamover feature is enabled. - -![Volsync_install](/docs/images/volsync_install.png) - -- We will be using VolSync's Restic option, hence configure a restic secret: - -``` -apiVersion: v1 -kind: Secret -metadata: - name: -type: Opaque -stringData: - # The repository encryption key - RESTIC_PASSWORD: my-secure-restic-password -``` - -- Create a DPA similar to below: - - Add the restic secret name from the previous step to your DPA CR in `spec.features.dataMover.credentialName`. - If this step is not completed then it will default to the secret name `dm-credential`. - - - Note the CSI and VSM as `defaultPlugins` and `dataMover.enable` flag. - - -``` -apiVersion: oadp.openshift.io/v1alpha1 -kind: DataProtectionApplication -metadata: - name: velero-sample - namespace: openshift-adp -spec: - features: - dataMover: - enable: true - credentialName: - backupLocations: - - velero: - config: - profile: default - region: us-east-1 - credential: - key: cloud - name: cloud-credentials - default: true - objectStorage: - bucket: - prefix: - provider: aws - configuration: - nodeAgent: - enable: false - uploaderType: restic - velero: - defaultPlugins: - - openshift - - aws - - csi - - vsm -``` - -
- -

For Backup
- -- Create a backup CR: - -``` -apiVersion: velero.io/v1 -kind: Backup -metadata: - name: - namespace: -spec: - includedNamespaces: - - - storageLocation: velero-sample-1 -``` - -- Wait several minutes and check the VolumeSnapshotBackup CR status for `completed`: - -`oc get vsb -n ` - -`oc get vsb -n -ojsonpath="{.status.phase}` - -- There should now be a snapshot in the object store that was given in the restic secret. -- You can check for this snapshot in your targeted `backupStorageLocation` with a -prefix of `/` - -

For Restore
- -- Make sure the application namespace is deleted, as well as the volumeSnapshotContent - that was created by the Velero CSI plugin. - -- Create a restore CR: - -``` -apiVersion: velero.io/v1 -kind: Restore -metadata: - name: - namespace: -spec: - backupName: -``` - -- Wait several minutes and check the VolumeSnapshotRestore CR status for `completed`: - -`oc get vsr -n ` - -`oc get vsr -n -ojsonpath="{.status.phase}` - -- Check that your application data has been restored: - -`oc get route -n -ojsonpath="{.spec.host}"` diff --git a/docs/examples/datamover_advanced_voloptions.md b/docs/examples/datamover_advanced_voloptions.md deleted file mode 100644 index 09d773c390..0000000000 --- a/docs/examples/datamover_advanced_voloptions.md +++ /dev/null @@ -1,369 +0,0 @@ -#

OADP Data Mover 1.2 Advanced Volume Options
- - -- The official OpenShift OADP Data Mover documentation can be found [here](https://docs.openshift.com/container-platform/4.13/backup_and_restore/application_backup_and_restore/backing_up_and_restoring/backing-up-applications.html#oadp-using-data-mover-for-csi-snapshots_backing-up-applications) -- We maintain an up to date FAQ page [here](https://access.redhat.com/articles/5456281) - -

Background Information:
- - -OADP Data Mover 1.2 leverages some of the recently added features of Ceph to be -performant in large scale environments, one being the -[shallow copy](https://github.com/ceph/ceph-csi/blob/devel/docs/design/proposals/cephfs-snapshot-shallow-ro-vol.md) -method, which is available > OCP 4.11. This feature requires use of the Data Mover -1.2 feature for volumeOptions so that other storageClasses and accessModes can be -used other than what is found on the source PVC. - -1. [Prerequisites](#pre-reqs) -2. [CephFS with ShallowCopy](#shallowcopy) -3. [CephFS and CephRBD Split Volumes](#fsrbd) - -

Prerequisites:
- -- OCP > 4.11 - -- OADP operator and a credentials secret are created. Follow - [these steps](/docs/install_olm.md) for installation instructions. - -- A CephFS and a CephRBD `StorageClass` and a `VolumeSnapshotClass` - - Installing ODF will create these in your cluster: - -### CephFS VolumeSnapshotClass and StorageClass: - -**Note:** The deletionPolicy, annotations, and labels - -```yml -apiVersion: snapshot.storage.k8s.io/v1 -deletionPolicy: Retain # <--- Note the Retain Policy -driver: openshift-storage.cephfs.csi.ceph.com -kind: VolumeSnapshotClass -metadata: - annotations: - snapshot.storage.kubernetes.io/is-default-class: 'true' # <--- Note the default - labels: - velero.io/csi-volumesnapshot-class: 'true' # <--- Note the velero label - name: ocs-storagecluster-cephfsplugin-snapclass -parameters: - clusterID: openshift-storage - csi.storage.k8s.io/snapshotter-secret-name: rook-csi-cephfs-provisioner - csi.storage.k8s.io/snapshotter-secret-namespace: openshift-storage -``` - -**Note:** The annotations -```yml -kind: StorageClass -apiVersion: storage.k8s.io/v1 -metadata: - name: ocs-storagecluster-cephfs - annotations: - description: Provides RWO and RWX Filesystem volumes - storageclass.kubernetes.io/is-default-class: 'true' # <--- Note the default -provisioner: openshift-storage.cephfs.csi.ceph.com -parameters: - clusterID: openshift-storage - csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner - csi.storage.k8s.io/controller-expand-secret-namespace: openshift-storage - csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node - csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage - csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner - csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage - fsName: ocs-storagecluster-cephfilesystem -reclaimPolicy: Delete -allowVolumeExpansion: true -volumeBindingMode: Immediate -``` - -### CephRBD VolumeSnapshotClass and StorageClass: - -**Note:** 
The deletionPolicy, and labels -```yml -apiVersion: snapshot.storage.k8s.io/v1 -deletionPolicy: Retain # <--- Note: the Retain Policy -driver: openshift-storage.rbd.csi.ceph.com -kind: VolumeSnapshotClass -metadata: - labels: - velero.io/csi-volumesnapshot-class: 'true' # <--- Note velero - name: ocs-storagecluster-rbdplugin-snapclass -parameters: - clusterID: openshift-storage - csi.storage.k8s.io/snapshotter-secret-name: rook-csi-rbd-provisioner - csi.storage.k8s.io/snapshotter-secret-namespace: openshift-storage -``` - -```yml -kind: StorageClass -apiVersion: storage.k8s.io/v1 -metadata: - name: ocs-storagecluster-ceph-rbd - annotations: - description: 'Provides RWO Filesystem volumes, and RWO and RWX Block volumes' -provisioner: openshift-storage.rbd.csi.ceph.com -parameters: - csi.storage.k8s.io/fstype: ext4 - csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage - csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner - csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node - csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner - imageFormat: '2' - clusterID: openshift-storage - imageFeatures: layering - csi.storage.k8s.io/controller-expand-secret-namespace: openshift-storage - pool: ocs-storagecluster-cephblockpool - csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage -reclaimPolicy: Delete -allowVolumeExpansion: true -volumeBindingMode: Immediate -``` - -- Create an additional CephFS `StorageClass` to make use of the `shallowCopy` feature: - -```yml -kind: StorageClass -apiVersion: storage.k8s.io/v1 -metadata: - name: ocs-storagecluster-cephfs-shallow - annotations: - description: Provides RWO and RWX Filesystem volumes - storageclass.kubernetes.io/is-default-class: 'false' -provisioner: openshift-storage.cephfs.csi.ceph.com -parameters: - csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage - csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner - 
csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node - csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner - clusterID: openshift-storage - fsName: ocs-storagecluster-cephfilesystem - csi.storage.k8s.io/controller-expand-secret-namespace: openshift-storage - backingSnapshot: 'true' # <--- shallowCopy - csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage -reclaimPolicy: Delete -allowVolumeExpansion: true -volumeBindingMode: Immediate -``` - -- **Notes**: - - Make sure the default `VolumeSnapshotClass` and `StorageClass` are the same provisioner - - The `VolumeSnapshotClass` must have the `deletionPloicy` set to Retain - - The `VolumeSnapshotClasses` must have the label `velero.io/csi-volumesnapshot-class: 'true'` - -- Install the latest VolSync operator using OLM. - -![Volsync_install](/docs/images/volsync_install.png) - -- We will be using VolSync's Restic option, hence configure a restic secret: - -```yml -apiVersion: v1 -kind: Secret -metadata: - name: -type: Opaque -stringData: - # The repository encryption key - RESTIC_PASSWORD: my-secure-restic-password -``` - -

Backup/Restore with CephFS ShallowCopy
- -- Please ensure that a stateful application is running in a separate namespace with PVCs using - CephFS as the provisioner - -- Please ensure the default `StorageClass` and `VolumeSnapshotClass` as cephFS, as shown - in the [prerequisites](#pre-reqs) - -- **Helpful Commands**: - - Check the VolumeSnapshotClass retain policy: - ``` - oc get volumesnapshotclass -A -o jsonpath='{range .items[*]}{"Name: "}{.metadata.name}{" "}{"Retention Policy: "}{.deletionPolicy}{"\n"}{end}' - ``` - Check the VolumeSnapShotClass lables: - ``` - oc get volumesnapshotclass -A -o jsonpath='{range .items[*]}{"Name: "}{.metadata.name}{" "}{"labels: "}{.metadata.labels}{"\n"}{end}' - ``` - Check the StorageClass annotations: - ``` - oc get storageClass -A -o jsonpath='{range .items[*]}{"Name: "}{.metadata.name}{" "}{"annotations: "}{.metadata.annotations}{"\n"}{end}' - ``` - -- Create a DPA similar to below: - - Add the restic secret name from the previous step to your DPA CR - in `spec.features.dataMover.credentialName`. If this step is not completed - then it will default to the secret name `dm-credential`. - - -```yml -apiVersion: oadp.openshift.io/v1alpha1 -kind: DataProtectionApplication -metadata: - name: velero-sample - namespace: openshift-adp -spec: - backupLocations: - - velero: - config: - profile: default - region: us-east-1 - credential: - key: cloud - name: cloud-credentials - default: true - objectStorage: - bucket: - prefix: velero - provider: aws - configuration: - nodeAgent: - enable: false # [true, false] - uploaderType: restic # [restic, kopia] - velero: - defaultPlugins: - - openshift - - aws - - csi - - vsm - features: - dataMover: - credentialName: - enable: true - volumeOptionsForStorageClasses: - ocs-storagecluster-cephfs: - sourceVolumeOptions: - accessMode: ReadOnlyMany - cacheAccessMode: ReadWriteMany - cacheStorageClassName: ocs-storagecluster-cephfs - storageClassName: ocs-storagecluster-cephfs-shallow -``` - -
- -

For Backup
- -- Create a backup CR: - -```yml -apiVersion: velero.io/v1 -kind: Backup -metadata: - name: - namespace: -spec: - includedNamespaces: - - - storageLocation: velero-sample-1 -``` - -- Monitor the datamover backup and artifacts via [a debug script](/docs/examples/debug.md) - -OR -- Check the progress of the `volumeSnapshotBackup`(s): - -``` -oc get vsb -n -oc get vsb -n -ojsonpath="{.status.phase}` -``` - -- Wait several minutes and check the VolumeSnapshotBackup CR status for `completed`: - -- There should now be a snapshot(s) in the object store that was given in the restic secret. -- You can check for this snapshot in your targeted `backupStorageLocation` with a -prefix of `/` - -

For Restore
- -- Make sure the application namespace is deleted, as well as any volumeSnapshotContents - that were created during backup. - -- Create a restore CR: - -```yml -apiVersion: velero.io/v1 -kind: Restore -metadata: - name: - namespace: -spec: - backupName: -``` -- Monitor the datamover backup and artifacts via [a debug script](/docs/examples/debug.md) -OR -- Check the `VolumeSnapshotRestore`(s) progress: - -``` -oc get vsr -n -oc get vsr -n -ojsonpath="{.status.phase} -``` - -- Check that your application data has been restored: - -`oc get route -n -ojsonpath="{.spec.host}"` - - -

Backup/Restore with Split Volumes: CephFS and CephRBD
- -- Ensure a stateful application is running in a separate namespace with PVCs provisioned - by both CephFS and CephRBD - -- This assumes cephFS is being used as the default `StorageClass` and - `VolumeSnapshotClass` - -- Create a DPA similar to below: - - Add the restic secret name from the prerequisites to your DPA CR in - `spec.features.dataMover.credentialName`. If this step is not completed then - it will default to the secret name `dm-credential` - - Note: `volumeOptionsForStorageClass` can be defined for multiple storageClasses, - thus allowing a backup to complete with volumes with different providers. - -```yml -apiVersion: oadp.openshift.io/v1alpha1 -kind: DataProtectionApplication -metadata: - name: velero-sample - namespace: openshift-adp -spec: - backupLocations: - - velero: - config: - profile: default - region: us-east-1 - credential: - key: cloud - name: cloud-credentials - default: true - objectStorage: - bucket: - prefix: velero - provider: aws - configuration: - nodeAgent: - enable: false - uploaderType: restic - velero: - defaultPlugins: - - openshift - - aws - - csi - - vsm - features: - dataMover: - credentialName: - enable: true - volumeOptionsForStorageClasses: - ocs-storagecluster-cephfs: - sourceVolumeOptions: - accessMode: ReadOnlyMany - cacheAccessMode: ReadWriteMany - cacheStorageClassName: ocs-storagecluster-cephfs - storageClassName: ocs-storagecluster-cephfs-shallow - ocs-storagecluster-ceph-rbd: - sourceVolumeOptions: - storageClassName: ocs-storagecluster-ceph-rbd - cacheStorageClassName: ocs-storagecluster-ceph-rbd - destinationVolumeOptions: - storageClassName: ocs-storagecluster-ceph-rbd - cacheStorageClassName: ocs-storagecluster-ceph-rbd -``` -Note: The CephFS ShallowCopy feature can only be used for datamover backup operation, the ShallowCopy volume options are not supported for restore. 
- -- Now follow the backup and restore steps from the previous example diff --git a/docs/oadp_cheat_sheet.md b/docs/oadp_cheat_sheet.md index c8e34879c3..ea2671b701 100644 --- a/docs/oadp_cheat_sheet.md +++ b/docs/oadp_cheat_sheet.md @@ -190,55 +190,3 @@ Resource List: Velero-Native Snapshots: ``` - - - -## Data Mover (OADP 1.2 or below) Specific commands - -#### Clean up datamover related objects -**WARNING** Do not run this command on production systems. This is a remove *ALL* command. -``` -oc delete vsb -A --all; oc delete vsr -A --all; oc delete vsc -A --all; oc delete vs -A --all; oc delete replicationsources.volsync.backube -A --all; oc delete replicationdestination.volsync.backube -A --all -``` -Details: -``` ---all=false: - Delete all resources, in the namespace of the specified resource types. -``` -``` --A, --all-namespaces=false: - If present, list the requested object(s) across all namespaces. Namespace in current context is ignored even - if specified with --namespace. -``` -A safer to execute a cleanup is to limit the delete to a namespace or a specific object. 
-* namespaced objecs: VSB, VSR, VSC, VS -* protected namespace (openshift-adp): replicationsources.volsync.backube, replicationdestination.volsync.backube - -``` -oc delete vsb -n --all -``` - - - -#### Remove finalizers -``` -for i in `oc get vsc -A -o custom-columns=NAME:.metadata.name`; do echo $i; oc patch vsc $i -p '{"metadata":{"finalizers":null}}' --type=merge; done -``` - -#### Watch datamover resources while backup in progress -``` -curl -o ~/.local/bin/datamover_resources.sh https://raw.githubusercontent.com/openshift/oadp-operator/oadp-dev/docs/examples/datamover_resources.sh -``` -###### Backups -``` -watch -n 5 datamover_resources.sh -b -d -``` -###### Restore -``` -watch -n 5 datamover_resources.sh -r -d -``` - -#### Watch the VSM plugin logs -``` -oc logs -f deployment.apps/volume-snapshot-mover -n openshift-adp -``` diff --git a/docs/upgrade_1-3_to_1-4.md b/docs/upgrade_1-3_to_1-4.md index a61724f335..850f25dc85 100644 --- a/docs/upgrade_1-3_to_1-4.md +++ b/docs/upgrade_1-3_to_1-4.md @@ -1,6 +1,6 @@ # Upgrading from OADP 1.3 -> **NOTE:** Always upgrade to next minor version, do NOT skip versions. To update to higher version, please upgrade one channel at a time. Example: to upgrade from 1.1 to 1.3, upgrade first to 1.2, then to 1.3. +> **NOTE:** Always upgrade to the next minor version; do NOT skip versions. To update to a higher version, upgrade one channel at a time. Example: to upgrade from 1.3 to 1.5, upgrade first to 1.4, then to 1.5. ## Changes from OADP 1.3 to 1.4 diff --git a/docs/upgrade_1-4_to_1-5.md b/docs/upgrade_1-4_to_1-5.md index 44f3167aec..91a8abebec 100644 --- a/docs/upgrade_1-4_to_1-5.md +++ b/docs/upgrade_1-4_to_1-5.md @@ -1,6 +1,6 @@ # Upgrading from OADP 1.4 -> **NOTE:** Always upgrade to next minor version, do NOT skip versions. To update to higher version, please upgrade one channel at a time. Example: to upgrade from 1.1 to 1.3, upgrade first to 1.2, then to 1.3. 
+> **NOTE:** Always upgrade to the next minor version; do NOT skip versions. To update to a higher version, upgrade one channel at a time. Example: to upgrade from 1.3 to 1.5, upgrade first to 1.4, then to 1.5. ## Changes from OADP 1.4 to 1.5