Commit cd19d6d

BZ1964305: Restic supplemental groups workaround
1 parent: a36c98b

File tree

1 file changed: +13 -6 lines changed


modules/migration-known-issues.adoc

Lines changed: 13 additions & 6 deletions
@@ -23,12 +23,19 @@ These annotations preserve the UID range, ensuring that the containers retain th
 
 * If a migration fails, the migration plan does not retain custom PV settings for quiesced pods. You must manually roll back the migration, delete the migration plan, and create a new migration plan with your PV settings. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1784899[*BZ#1784899*])
 
-* If a large migration fails because Restic times out, you can increase the `restic_timeout` parameter value (default: `1h`) in the `MigrationController` CR.
+* If a large migration fails because Restic times out, you can increase the `restic_timeout` parameter value (default: `1h`) in the `MigrationController` custom resource (CR) manifest.
 
 * If you select the data verification option for PVs that are migrated with the file system copy method, performance is significantly slower.
 
-ifeval::["{mtc-version}" < "1.4"]
-* If you are migrating data from NFS storage and `root_squash` is enabled, `Restic` maps to `nfsnobody`. The migration fails and a permission error is displayed in the `Restic` pod log. You can resolve this issue by creating a supplemental group for Restic. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1873641[*BZ#1873641*])
-
-* If Velero has an invalid `BackupStorageLocation` during start-up, it will crash-loop until the invalid `BackupStorageLocation` is removed. This scenario is triggered by incorrect credentials, a non-existent S3 bucket, and other configuration errors. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1881707[*BZ#1881707*])
-endif::[]
+* If you are migrating data from NFS storage and `root_squash` is enabled, `Restic` maps to `nfsnobody`. The migration fails and a permission error is displayed in the `Restic` pod log. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1873641[*BZ#1873641*])
++
+You can resolve this issue by adding supplemental groups for `Restic` to the `MigrationController` CR manifest:
++
+[source,yaml]
+----
+spec:
+  ...
+  restic_supplemental_groups:
+  - 5555
+  - 6666
+----
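As an aside on the `restic_timeout` bullet above: a minimal sketch of where that parameter sits in the `MigrationController` CR manifest. This is an illustration rather than part of the commit; the `2h` value is an example, and the `apiVersion`, name, and namespace shown are the usual MTC defaults rather than anything stated in this diff.

[source,yaml]
----
apiVersion: migration.openshift.io/v1alpha1
kind: MigrationController
metadata:
  name: migration-controller
  namespace: openshift-migration
spec:
  # Raise from the default 1h if large Restic copies time out.
  # 2h is an illustrative value; tune it to your data volume.
  restic_timeout: 2h
----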
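Similarly, for the `BackupStorageLocation` bullet that this commit removes: a hedged sketch of the Velero resource whose misconfiguration (incorrect credentials, a non-existent S3 bucket) triggers the crash loop described in BZ#1881707. The bucket name and region below are placeholders, not values from the source.

[source,yaml]
----
apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
  name: default
  namespace: openshift-migration
spec:
  provider: aws
  objectStorage:
    bucket: my-migration-bucket # placeholder; the bucket must exist, or Velero crash-loops at startup
  config:
    region: us-east-1 # placeholder region
----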
