Commit 2ed5ffa

Merge pull request #32517 from apinnick/bz1956738-velero-error-message
bz1956738-bsl-error-velero-log (MTC 1.5.0)
2 parents 4c79f5f + 2a52e14

modules/migration-error-messages.adoc

Lines changed: 45 additions & 31 deletions
@@ -12,42 +12,53 @@ This section describes common error messages you might encounter with the {mtc-f
[id="ca-certificate-error-in-console_{context}"]
== CA certificate error in the {mtc-short} console

-If a `CA certificate error` message is displayed the first time you try to access the {mtc-short} console, the likely cause is the use of self-signed CA certificates in one of the clusters.
+If the {mtc-short} console displays a `CA certificate error` message the first time you try to access it, the likely cause is that a cluster uses self-signed CA certificates.

-To resolve this issue, navigate to the `oauth-authorization-server` URL displayed in the error message and accept the certificate. To resolve this issue permanently, add the certificate to the trust store of your web browser.
+Navigate to the `oauth-authorization-server` URL in the error message and accept the certificate. To resolve this issue permanently, install the certificate authority so that it is trusted.

-If an `Unauthorized` message is displayed after you have accepted the certificate, navigate to the {mtc-short} console and refresh the web page.
+If the browser displays an `Unauthorized` message after you have accepted the CA certificate, navigate to the {mtc-short} console and then refresh the web page.

[id="oauth-timeout-error-in-console_{context}"]
== OAuth timeout error in the {mtc-short} console

-If a `connection has timed out` message is displayed in the {mtc-short} console after you have accepted a self-signed certificate, the causes are likely to be the following:
+If the {mtc-short} console displays a `connection has timed out` message after you have accepted a self-signed certificate, the cause is likely to be one of the following:

* Interrupted network access to the OAuth server
* Interrupted network access to the {product-title} console
-* Proxy configuration that blocks access to the `oauth-authorization-server` URL. See link:https://access.redhat.com/solutions/5514491[MTC console inaccessible because of OAuth timeout error] for details.
+* Proxy configuration blocking access to the OAuth server. See link:https://access.redhat.com/solutions/5514491[MTC console inaccessible because of OAuth timeout error] for details.

-You can determine the cause of the timeout.
+To determine the cause:

-.Procedure
+* Inspect the {mtc-short} console web page with a browser web inspector.
+* Check the `Migration UI` pod log for errors (see the example below).
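+
+For example, you can view the log with a command similar to the following, where `<MigrationUI_Pod>` is a placeholder for the actual pod name in the `openshift-migration` namespace:
+
+[source,terminal]
+----
+$ oc logs <MigrationUI_Pod> -n openshift-migration
+----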

-. Navigate to the {mtc-short} console and inspect the elements with the browser web inspector.
-. Check the `MigrationUI` pod log:
-+
+[id="backup-storage-location-errors-in-velero-pod-log_{context}"]
+== Backup storage location errors in the Velero pod log
+
+If a `Velero` `Backup` custom resource contains a reference to a backup storage location (BSL) that does not exist, the `Velero` pod log might display the following error messages:
+
+.BSL error messages
[source,terminal]
----
-$ oc logs <MigrationUI_Pod> -n openshift-migration
+Error checking repository for stale locks
+
+Error getting backup storage location: backupstoragelocation.velero.io \"my-bsl\" not found
----

-[id="podvolumebackups-timeout-error-in-velero-log_{context}"]
-== PodVolumeBackups timeout error in Velero pod log
+You can ignore these error messages. A missing BSL cannot cause a migration to fail.
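+
+To check which backup storage locations exist, you can list the `BackupStorageLocation` resources. This is a sketch that assumes the default `openshift-migration` namespace:
+
+[source,terminal]
+----
+$ oc get backupstoragelocations.velero.io -n openshift-migration
+----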
+
+[id="podvolumebackups-timeout-error-in-velero-log_{context}"]
+== Pod volume backup timeout error in the Velero pod log

-If a migration fails because Restic times out, the following error is displayed in the `Velero` pod log.
+If a migration fails because `Restic` times out, the `Velero` pod log displays the following error:

-.Example output
+.Pod volume backup timeout error
[source,terminal]
----
-level=error msg="Error backing up item" backup=velero/monitoring error="timed out waiting for all PodVolumeBackups to complete" error.file="/go/src/github.com/heptio/velero/pkg/restic/backupper.go:165" error.function="github.com/heptio/velero/pkg/restic.(*backupper).BackupPodVolumes" group=v1
+level=error msg="Error backing up item" backup=velero/monitoring error="timed out
+waiting for all PodVolumeBackups to complete" error.file="/go/src/github.com/
+heptio/velero/pkg/restic/backupper.go:165" error.function="github.com/heptio/
+velero/pkg/restic.(*backupper).BackupPodVolumes" group=v1
----
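+
+To check for this error, you can inspect the `Velero` pod log. This example assumes the default `velero` deployment in the `openshift-migration` namespace:
+
+[source,terminal]
+----
+$ oc logs deployment/velero -n openshift-migration | grep PodVolumeBackups
+----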

The default value of `restic_timeout` is one hour. You can increase this parameter for large migrations, keeping in mind that a higher value may delay the return of error messages.
@@ -68,12 +79,12 @@ spec:

. Click *Save*.

-[id="resticverifyerrors-in-the-migmigration-custom-resource_{context}"]
-== ResticVerifyErrors in the MigMigration custom resource
+[id="resticverifyerrors-in-the-migmigration-custom-resource_{context}"]
+== Restic verification errors in the MigMigration custom resource

-If data verification fails when migrating a persistent volume with the file system data copy method, the following error is displayed in the `MigMigration` CR.
+If data verification fails when migrating a persistent volume with the file system data copy method, the `MigMigration` CR displays the following error:

-.Example output
+.MigMigration CR status
[source,yaml]
----
status:
@@ -94,7 +105,7 @@ status:
A data verification error does not cause the migration process to fail.
====

-You can check the `Restore` CR to identify the source of the data verification error.
+You can check the `Restore` CR to troubleshoot the data verification error.
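+
+If you do not know the `Restore` CR name, you can list the `Restore` resources first. This example assumes the default `openshift-migration` namespace:
+
+[source,terminal]
+----
+$ oc get restores.velero.io -n openshift-migration
+----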

.Procedure

@@ -108,7 +119,7 @@ $ oc describe <registry-example-migration-rvwcm> -n openshift-migration
+
The output identifies the persistent volume with `PodVolumeRestore` errors.
+
-.Example output
+.Restore CR with pod volume restore error
[source,yaml]
----
status:
@@ -132,7 +143,7 @@ $ oc describe <migration-example-rvwcm-98t49>
+
The output identifies the `Restic` pod that logged the errors.
+
-.Example output
+.PodVolumeRestore CR with Restic pod error
[source,yaml]
----
completionTimestamp: 2020-05-01T20:49:12Z
@@ -149,23 +160,27 @@ The output identifies the `Restic` pod that logged the errors.
$ oc logs -f <restic-nr2v5>
----

-ifeval::["{mtc-version}" < "1.3"]
[id="restic-permission-denied-error-for-nfs-storage_{context}"]
-== `restic: permission denied` error for NFS storage
+== Restic permission error when migrating from NFS storage with root_squash enabled

-If you are migrating data from NFS storage and `root_squash` is enabled, `Restic` maps to `nfsnobody` and does not have permission to perform the migration. The following error is displayed in the `Restic` pod log.
+If you are migrating data from NFS storage and `root_squash` is enabled, `Restic` maps to `nfsnobody` and does not have permission to perform the migration. The `Restic` pod log displays the following error:

-.Example output
+.Restic permission error
[source,terminal]
----
-backup=openshift-migration/<backup_id> controller=pod-volume-backup error="fork/exec /usr/bin/restic: permission denied" error.file="/go/src/github.com/vmware-tanzu/velero/pkg/controller/pod_volume_backup_controller.go:280" error.function="github.com/vmware-tanzu/velero/pkg/controller.(*podVolumeBackupController).processBackup" logSource="pkg/controller/pod_volume_backup_controller.go:280" name=<backup_id> namespace=openshift-migration
+backup=openshift-migration/<backup_id> controller=pod-volume-backup error="fork/exec
+/usr/bin/restic: permission denied" error.file="/go/src/github.com/vmware-tanzu/
+velero/pkg/controller/pod_volume_backup_controller.go:280" error.function=
+"github.com/vmware-tanzu/velero/pkg/controller.(*podVolumeBackupController).processBackup"
+logSource="pkg/controller/pod_volume_backup_controller.go:280" name=<backup_id>
+namespace=openshift-migration
----

-You can resolve this issue by creating a supplemental group for Restic and adding the group ID to the `MigrationController` CR manifest.
+You can resolve this issue by creating a supplemental group for `Restic` and adding the group ID to the `MigrationController` CR manifest.

.Procedure

-. Create a supplemental group for Restic on the NFS storage.
+. Create a supplemental group for `Restic` on the NFS storage.
. Set the `setgid` bit on the NFS directories so that group ownership is inherited.
. Add the `restic_supplemental_groups` parameter to the `MigrationController` CR manifest on the source and target clusters:
+
@@ -177,4 +192,3 @@ spec:
<1> Specify the supplemental group ID.

. Wait for the `Restic` pods to restart so that the changes are applied (see the example below).
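+
+You can watch the pods restart with a command similar to the following, assuming the `Restic` pods run in the `openshift-migration` namespace:
+
+[source,terminal]
+----
+$ oc get pods -n openshift-migration -w
+----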
-endif::[]
