diff --git a/backup_and_restore/application_backup_and_restore/oadp-features-plugins.adoc b/backup_and_restore/application_backup_and_restore/oadp-features-plugins.adoc index 8a18b300ec59..1dfc62b1242b 100644 --- a/backup_and_restore/application_backup_and_restore/oadp-features-plugins.adoc +++ b/backup_and_restore/application_backup_and_restore/oadp-features-plugins.adoc @@ -13,6 +13,7 @@ The default plugins enable Velero to integrate with certain cloud providers and include::modules/oadp-features.adoc[leveloffset=+1] include::modules/oadp-plugins.adoc[leveloffset=+1] include::modules/oadp-configuring-velero-plugins.adoc[leveloffset=+1] +include::modules/oadp-plugins-receiving-eof-message.adoc[leveloffset=+2] ifndef::openshift-rosa,openshift-rosa-hcp[] include::modules/oadp-supported-architecture.adoc[leveloffset=+1] endif::openshift-rosa,openshift-rosa-hcp[] @@ -33,9 +34,12 @@ include::modules/oadp-ibm-z-test-support.adoc[leveloffset=+2] include::modules/oadp-ibm-power-and-z-known-issues.adoc[leveloffset=+3] endif::openshift-rosa,openshift-rosa-hcp[] -include::modules/oadp-fips.adoc[leveloffset=+1] +include::modules/oadp-features-plugins-known-issues.adoc[leveloffset=+1] + +include::modules/velero-plugin-panic.adoc[leveloffset=+2] -include::modules/avoiding-the-velero-plugin-panic-error.adoc[leveloffset=+1] -include::modules/workaround-for-openshift-adp-controller-segmentation-fault.adoc[leveloffset=+1] +include::modules/openshift-adp-controller-manager-seg-fault.adoc[leveloffset=+2] + +include::modules/oadp-fips.adoc[leveloffset=+1] :!oadp-features-plugins: diff --git a/modules/oadp-features-plugins-known-issues.adoc b/modules/oadp-features-plugins-known-issues.adoc index a667f24d5376..b4032c5e3881 100644 --- a/modules/oadp-features-plugins-known-issues.adoc +++ b/modules/oadp-features-plugins-known-issues.adoc @@ -7,64 +7,4 @@ [id="oadp-features-plugins-known-issues_{context}"] = OADP plugins known issues -The following section describes known issues in 
{oadp-first} plugins: - -[id="velero-plugin-panic_{context}"] -== Velero plugin panics during imagestream backups due to a missing secret - -When the backup and the Backup Storage Location (BSL) are managed outside the scope of the Data Protection Application (DPA), the OADP controller, meaning the DPA reconciliation does not create the relevant `oadp---registry-secret`. - -When the backup is run, the OpenShift Velero plugin panics on the imagestream backup, with a panic error: - -[source,terminal] ----- -024-02-27T10:46:50.028951744Z time="2024-02-27T10:46:50Z" level=error msg="Error backing up item" -backup=openshift-adp/ error="error executing custom action (groupResource=imagestreams.image.openshift.io, -namespace=, name=postgres): rpc error: code = Aborted desc = plugin panicked: -runtime error: index out of range with length 1, stack trace: goroutine 94… ----- - -[id="velero-plugin-panic-workaround_{context}"] -=== Workaround to avoid the panic error - -To avoid the Velero plugin panic error, perform the following steps: - -. Label the custom BSL with the relevant label -+ -[source,terminal] ----- -$ oc label backupstoragelocations.velero.io app.kubernetes.io/component=bsl ----- - -. After the BSL is labeled, wait until the DPA reconciles. -+ -[NOTE] -==== -You can force the reconciliation by making any minor change to the DPA itself. -==== - -. When the DPA reconciles, confirm that the relevant `oadp---registry-secret` has been created and that the correct registry data has been populated into it: -+ -[source,terminal] ----- -$ oc -n openshift-adp get secret/oadp---registry-secret -o json | jq -r '.data' ----- - - -[id="openshift-adp-controller-manager-seg-fault_{context}"] -== OpenShift ADP Controller segmentation fault - -If you configure a DPA with both `cloudstorage` and `restic` enabled, the `openshift-adp-controller-manager` pod crashes and restarts indefinitely until the pod fails with a crash loop segmentation fault. 
- -You can have either `velero` or `cloudstorage` defined, because they are mutually exclusive fields. - -* If you have both `velero` and `cloudstorage` defined, the `openshift-adp-controller-manager` fails. -* If you have neither `velero` nor `cloudstorage` defined, the `openshift-adp-controller-manager` fails. - -For more information about this issue, see link:https://issues.redhat.com/browse/OADP-1054[OADP-1054]. - - -[id="openshift-adp-controller-manager-seg-fault-workaround_{context}"] -=== OpenShift ADP Controller segmentation fault workaround - -You must define either `velero` or `cloudstorage` when you configure a DPA. If you define both APIs in your DPA, the `openshift-adp-controller-manager` pod fails with a crash loop segmentation fault. +The following section describes known issues in {oadp-first} plugins: \ No newline at end of file diff --git a/modules/openshift-adp-controller-manager-seg-fault.adoc b/modules/openshift-adp-controller-manager-seg-fault.adoc new file mode 100644 index 000000000000..b0480f007074 --- /dev/null +++ b/modules/openshift-adp-controller-manager-seg-fault.adoc @@ -0,0 +1,24 @@ +// Module included in the following assemblies: +// oadp-features-plugins-known-issues +// * backup_and_restore/application_backup_and_restore/oadp-features-plugins.adoc +// * backup_and_restore/application_backup_and_restore/troubleshooting.adoc + +:_mod-docs-content-type: CONCEPT +[id="openshift-adp-controller-manager-seg-fault_{context}"] += OpenShift ADP Controller segmentation fault + +[role="_abstract"] +If you configure a DPA with both `cloudstorage` and `restic` enabled, the `openshift-adp-controller-manager` pod crashes and restarts indefinitely until the pod fails with a crash loop segmentation fault. + +You can have either `velero` or `cloudstorage` defined, because they are mutually exclusive fields. + +* If you have both `velero` and `cloudstorage` defined, the `openshift-adp-controller-manager` fails. 
+* If you have neither `velero` nor `cloudstorage` defined, the `openshift-adp-controller-manager` fails. + +For more information about this issue, see link:https://issues.redhat.com/browse/OADP-1054[OADP-1054]. + + +[id="openshift-adp-controller-manager-seg-fault-workaround_{context}"] +== OpenShift ADP Controller segmentation fault workaround + +You must define either `velero` or `cloudstorage` when you configure a DPA. If you define both APIs in your DPA, the `openshift-adp-controller-manager` pod fails with a crash loop segmentation fault. \ No newline at end of file diff --git a/modules/velero-plugin-panic.adoc b/modules/velero-plugin-panic.adoc new file mode 100644 index 000000000000..d0eeaa27a0e1 --- /dev/null +++ b/modules/velero-plugin-panic.adoc @@ -0,0 +1,47 @@ +// Module included in the following assemblies: +// oadp-features-plugins-known-issues +// * backup_and_restore/application_backup_and_restore/oadp-features-plugins.adoc +// * backup_and_restore/application_backup_and_restore/troubleshooting.adoc + +:_mod-docs-content-type: CONCEPT +[id="velero-plugin-panic_{context}"] += Velero plugin panics during imagestream backups due to a missing secret + +[role="_abstract"] +When the backup and the Backup Storage Location (BSL) are managed outside the scope of the Data Protection Application (DPA), the DPA reconciliation, which is run by the OADP controller, does not create the relevant `oadp---registry-secret`. 
+ +When the backup is run, the OpenShift Velero plugin panics on the imagestream backup, with the following panic error: + +[source,terminal] +---- +2024-02-27T10:46:50.028951744Z time="2024-02-27T10:46:50Z" level=error msg="Error backing up item" +backup=openshift-adp/ error="error executing custom action (groupResource=imagestreams.image.openshift.io, +namespace=, name=postgres): rpc error: code = Aborted desc = plugin panicked: +runtime error: index out of range with length 1, stack trace: goroutine 94… +---- + +[id="velero-plugin-panic-workaround_{context}"] +== Workaround to avoid the panic error + +To avoid the Velero plugin panic error, perform the following steps: + +. Label the custom BSL with the relevant label: ++ +[source,terminal] +---- +$ oc label backupstoragelocations.velero.io app.kubernetes.io/component=bsl +---- + +. After the BSL is labeled, wait until the DPA reconciles. ++ +[NOTE] +==== +You can force the reconciliation by making any minor change to the DPA itself. +==== + +. When the DPA reconciles, confirm that the relevant `oadp---registry-secret` has been created and that the correct registry data has been populated into it: ++ +[source,terminal] +---- +$ oc -n openshift-adp get secret/oadp---registry-secret -o json | jq -r '.data' +----
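The segmentation-fault workaround added in `openshift-adp-controller-manager-seg-fault.adoc` ("define either `velero` or `cloudstorage`, never both") could be illustrated in the module with a minimal DPA fragment like the sketch below. All names and values here are illustrative assumptions, not taken from the patch; verify the exact fields against the `DataProtectionApplication` API schema before reusing it:

```yaml
# Minimal DataProtectionApplication sketch: exactly one of the mutually
# exclusive backup-location APIs (velero, in this case) is defined.
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: example-dpa            # hypothetical name
  namespace: openshift-adp
spec:
  backupLocations:
  - velero:                    # the velero API is defined ...
      provider: aws            # hypothetical provider
      default: true
      objectStorage:
        bucket: example-bucket # hypothetical bucket
        prefix: velero
  # ... so the cloudstorage API must not also be defined in this DPA;
  # defining both, or neither, crashes openshift-adp-controller-manager.
```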