Prabha PR fix #97474

@@ -13,6 +13,7 @@ The default plugins enable Velero to integrate with certain cloud providers and
include::modules/oadp-features.adoc[leveloffset=+1]
include::modules/oadp-plugins.adoc[leveloffset=+1]
include::modules/oadp-configuring-velero-plugins.adoc[leveloffset=+1]
include::modules/oadp-plugins-receiving-eof-message.adoc[leveloffset=+2]
ifndef::openshift-rosa,openshift-rosa-hcp[]
include::modules/oadp-supported-architecture.adoc[leveloffset=+1]
endif::openshift-rosa,openshift-rosa-hcp[]
@@ -33,9 +34,12 @@ include::modules/oadp-ibm-z-test-support.adoc[leveloffset=+2]
include::modules/oadp-ibm-power-and-z-known-issues.adoc[leveloffset=+3]
endif::openshift-rosa,openshift-rosa-hcp[]

include::modules/oadp-fips.adoc[leveloffset=+1]
include::modules/oadp-features-plugins-known-issues.adoc[leveloffset=+1]

include::modules/velero-plugin-panic.adoc[leveloffset=+2]

include::modules/avoiding-the-velero-plugin-panic-error.adoc[leveloffset=+1]
include::modules/workaround-for-openshift-adp-controller-segmentation-fault.adoc[leveloffset=+1]
include::modules/openshift-adp-controller-manager-seg-fault.adoc[leveloffset=+2]

include::modules/oadp-fips.adoc[leveloffset=+1]

:!oadp-features-plugins:
modules/oadp-features-plugins-known-issues.adoc (1 addition, 61 deletions)
@@ -7,64 +7,4 @@
[id="oadp-features-plugins-known-issues_{context}"]
= OADP plugins known issues

The following section describes known issues in {oadp-first} plugins:

[id="velero-plugin-panic_{context}"]
== Velero plugin panics during imagestream backups due to a missing secret

When the backup and the Backup Storage Location (BSL) are managed outside the scope of the Data Protection Application (DPA), the DPA reconciliation performed by the OADP controller does not create the relevant `oadp-<bsl_name>-<bsl_provider>-registry-secret`.

When the backup is run, the OpenShift Velero plugin panics on the imagestream backup, with a panic error:

[source,terminal]
----
2024-02-27T10:46:50.028951744Z time="2024-02-27T10:46:50Z" level=error msg="Error backing up item"
backup=openshift-adp/<backup name> error="error executing custom action (groupResource=imagestreams.image.openshift.io,
namespace=<BSL Name>, name=postgres): rpc error: code = Aborted desc = plugin panicked:
runtime error: index out of range with length 1, stack trace: goroutine 94…
----

[id="velero-plugin-panic-workaround_{context}"]
=== Workaround to avoid the panic error

To avoid the Velero plugin panic error, perform the following steps:

. Label the custom BSL with the relevant label:
+
[source,terminal]
----
$ oc label backupstoragelocations.velero.io <bsl_name> app.kubernetes.io/component=bsl
----

. After the BSL is labeled, wait until the DPA reconciles.
+
[NOTE]
====
You can force the reconciliation by making any minor change to the DPA itself.
====

. When the DPA reconciles, confirm that the relevant `oadp-<bsl_name>-<bsl_provider>-registry-secret` has been created and that the correct registry data has been populated into it:
+
[source,terminal]
----
$ oc -n openshift-adp get secret/oadp-<bsl_name>-<bsl_provider>-registry-secret -o json | jq -r '.data'
----


[id="openshift-adp-controller-manager-seg-fault_{context}"]
== OpenShift ADP Controller segmentation fault

If you configure a DPA with both `cloudstorage` and `restic` enabled, the `openshift-adp-controller-manager` pod crashes and restarts indefinitely until the pod fails with a crash loop segmentation fault.

You can have either `velero` or `cloudstorage` defined, because they are mutually exclusive fields.

* If you have both `velero` and `cloudstorage` defined, the `openshift-adp-controller-manager` fails.
* If you have neither `velero` nor `cloudstorage` defined, the `openshift-adp-controller-manager` fails.

For more information about this issue, see link:https://issues.redhat.com/browse/OADP-1054[OADP-1054].


[id="openshift-adp-controller-manager-seg-fault-workaround_{context}"]
=== OpenShift ADP Controller segmentation fault workaround

You must define either `velero` or `cloudstorage` when you configure a DPA. If you define both APIs in your DPA, the `openshift-adp-controller-manager` pod fails with a crash loop segmentation fault.
The following section describes known issues in {oadp-first} plugins:
modules/openshift-adp-controller-manager-seg-fault.adoc (24 additions, 0 deletions)
@@ -0,0 +1,24 @@
// Module included in the following assemblies:
// oadp-features-plugins-known-issues
// * backup_and_restore/application_backup_and_restore/oadp-features-plugins.adoc
// * backup_and_restore/application_backup_and_restore/troubleshooting.adoc

:_mod-docs-content-type: CONCEPT
[id="openshift-adp-controller-manager-seg-fault_{context}"]
= OpenShift ADP Controller segmentation fault

[role="_abstract"]
If you configure a DPA with both `cloudstorage` and `restic` enabled, the `openshift-adp-controller-manager` pod crashes and restarts indefinitely until the pod fails with a crash loop segmentation fault.

You can have either `velero` or `cloudstorage` defined, because they are mutually exclusive fields.

* If you have both `velero` and `cloudstorage` defined, the `openshift-adp-controller-manager` fails.
* If you have neither `velero` nor `cloudstorage` defined, the `openshift-adp-controller-manager` fails.

For more information about this issue, see link:https://issues.redhat.com/browse/OADP-1054[OADP-1054].


[id="openshift-adp-controller-manager-seg-fault-workaround_{context}"]
== OpenShift ADP Controller segmentation fault workaround

You must define either `velero` or `cloudstorage` when you configure a DPA. If you define both APIs in your DPA, the `openshift-adp-controller-manager` pod fails with a crash loop segmentation fault.
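
The following `DataProtectionApplication` sketch defines only the `velero` field for its backup location. The placeholder values and the AWS provider are assumptions for illustration only; the point is that each backup location defines exactly one of the two APIs:

[source,yaml]
----
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: <dpa_name>
  namespace: openshift-adp
spec:
  backupLocations:
  - velero: # define either `velero` or `cloudstorage` here, never both
      provider: aws
      default: true
      objectStorage:
        bucket: <bucket_name>
        prefix: velero
      credential:
        name: cloud-credentials
        key: cloud
----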
modules/velero-plugin-panic.adoc (47 additions, 0 deletions)
@@ -0,0 +1,47 @@
// Module included in the following assemblies:
// oadp-features-plugins-known-issues
// * backup_and_restore/application_backup_and_restore/oadp-features-plugins.adoc
// * backup_and_restore/application_backup_and_restore/troubleshooting.adoc

:_mod-docs-content-type: CONCEPT
[id="velero-plugin-panic_{context}"]
= Velero plugin panics during imagestream backups due to a missing secret

[role="_abstract"]
When the backup and the Backup Storage Location (BSL) are managed outside the scope of the Data Protection Application (DPA), the DPA reconciliation performed by the OADP controller does not create the relevant `oadp-<bsl_name>-<bsl_provider>-registry-secret`.

When the backup is run, the OpenShift Velero plugin panics on the imagestream backup, with the following panic error:

[source,terminal]
----
2024-02-27T10:46:50.028951744Z time="2024-02-27T10:46:50Z" level=error msg="Error backing up item"
backup=openshift-adp/<backup name> error="error executing custom action (groupResource=imagestreams.image.openshift.io,
namespace=<BSL Name>, name=postgres): rpc error: code = Aborted desc = plugin panicked:
runtime error: index out of range with length 1, stack trace: goroutine 94…
----
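
You can confirm that the secret is missing before you apply the workaround. The following check is a sketch; substitute your BSL name and provider for the placeholders:

[source,terminal]
----
$ oc -n openshift-adp get secret oadp-<bsl_name>-<bsl_provider>-registry-secret
----

If the secret was not created, the command returns a `NotFound` error.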

[id="velero-plugin-panic-workaround_{context}"]
== Workaround to avoid the panic error

To avoid the Velero plugin panic error, perform the following steps:

. Label the custom BSL with the relevant label:
+
[source,terminal]
----
$ oc label backupstoragelocations.velero.io <bsl_name> app.kubernetes.io/component=bsl
----
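+
Optionally, verify that the label was applied by listing the BSL with its labels:
+
[source,terminal]
----
$ oc get backupstoragelocations.velero.io <bsl_name> --show-labels
----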

. After the BSL is labeled, wait until the DPA reconciles.
+
[NOTE]
====
You can force the reconciliation by making any minor change to the DPA itself.
====
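+
For example, setting an annotation on the DPA counts as such a minor change. In the following sketch, the annotation key is arbitrary and chosen only for illustration:
+
[source,terminal]
----
$ oc -n openshift-adp annotate dataprotectionapplications.oadp.openshift.io <dpa_name> \
    last-forced-reconcile="$(date +%s)" --overwrite
----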

. When the DPA reconciles, confirm that the relevant `oadp-<bsl_name>-<bsl_provider>-registry-secret` has been created and that the correct registry data has been populated into it:
+
[source,terminal]
----
$ oc -n openshift-adp get secret/oadp-<bsl_name>-<bsl_provider>-registry-secret -o json | jq -r '.data'
----