* *Cluster name*: May contain lower-case letters (`a-z`) and numbers (`0-9`). Must not contain spaces or international characters.
* *Url*: URL of the cluster's API server, for example, `\https://<master1.example.com>:8443`.
* *Service account token*: String that you obtained from the source cluster.
* *Exposed route to image registry*: Optional. You can specify a route to the image registry of your source cluster to enable direct migration for images, for example, `docker-registry-default.apps.cluster.com`.
+
Direct migration is much faster than migration with a replication repository.
* *Azure cluster*: Optional. Select it if you are using Azure snapshots to copy your data.
* *Azure resource group*: This field appears if *Azure cluster* is checked.
* If you use a custom CA bundle, click *Browse* and browse to the CA bundle file.
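+
For reference, these fields populate a `MigCluster` custom resource on the cluster where the MTC web console runs. The following is a minimal sketch, not a definitive manifest: the resource name, the token secret name, and the Azure resource group are hypothetical placeholders.
+
[source,yaml]
----
apiVersion: migration.openshift.io/v1alpha1
kind: MigCluster
metadata:
  name: source-cluster                  # hypothetical cluster name (a-z, 0-9)
  namespace: openshift-migration
spec:
  isHostCluster: false
  url: https://<master1.example.com>:8443   # URL of the cluster's API server
  serviceAccountSecretRef:              # secret that stores the service account token
    name: source-cluster-token          # hypothetical secret name
    namespace: openshift-config
  exposedRegistryPath: docker-registry-default.apps.cluster.com   # optional: enables direct image migration
  azureResourceGroup: my-resource-group # optional, hypothetical: only for Azure snapshots
  caBundle: <base64_encoded_ca_bundle>  # optional: custom CA bundle
----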
// modules/migration-creating-migration-plan-cam.adoc
The *Plan name* can contain up to 253 lower-case alphanumeric characters (`a-z, 0-9`) and must not contain spaces or underscores.
. Select a *Source cluster*, a *Target cluster*, and a *Repository*, and click *Next*.
. In the *Namespaces* screen, select the projects to be migrated and click *Next*.
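+
Behind the scenes, these selections are stored in a `MigPlan` custom resource. A minimal sketch, with hypothetical plan, cluster, repository, and project names:
+
[source,yaml]
----
apiVersion: migration.openshift.io/v1alpha1
kind: MigPlan
metadata:
  name: migplan-example                 # hypothetical plan name
  namespace: openshift-migration
spec:
  srcMigClusterRef:                     # the selected source cluster
    name: source-cluster
    namespace: openshift-migration
  destMigClusterRef:                    # the selected target cluster
    name: host
    namespace: openshift-migration
  migStorageRef:                        # the selected replication repository
    name: migstorage-example
    namespace: openshift-migration
  namespaces:
  - my-project                          # hypothetical project selected for migration
----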
. In the *Persistent volumes* screen, select *Copy* or *Move* for the PVs:
* *Copy* copies the data from the PV of a source cluster to the replication repository and then restores it on a newly created PV, with similar characteristics, in the target cluster.
+
If you specified a route to an image registry when you added the source cluster to the web console, you can migrate images directly from the source cluster to the target cluster.

* *Move* unmounts a remote volume, for example, NFS, from the source cluster, creates a PV resource on the target cluster pointing to the remote volume, and then mounts the remote volume on the target cluster. Applications running on the target cluster use the same remote volume that the source cluster was using. The remote volume must be accessible to the source and target clusters.
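+
In the underlying `MigPlan` resource, each PV discovered by the plan carries a `selection.action` field. A sketch of the stanza for a hypothetical PV, assuming the field names shown:
+
[source,yaml]
----
spec:
  persistentVolumes:
  - name: pv-example                    # hypothetical PV name
    selection:
      action: copy                      # "copy" or "move", as chosen above
----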
. Click *Next*.
. In the *Copy options* screen, select a *Copy method* for the PVs:
* *Snapshot copy* backs up and restores the disk using the cloud provider's snapshot functionality. It is significantly faster than *Filesystem copy*.
+
[NOTE]
====
The storage and clusters must be in the same region and the storage classes must be compatible.
====

* *Filesystem copy* backs up the files on the source cluster and restores them on the target cluster.

. You can select *Verify copy* to verify data migrated with *Filesystem copy*. Data is verified by generating a checksum for each source file and checking the checksum after restoration. Data verification significantly reduces performance.
. Select a *Target storage class*.
+
If you selected *Filesystem copy*, you can change the storage class during migration, for example, from Red Hat Gluster Storage or NFS storage to Red Hat Ceph Storage.
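+
The copy method, the verification flag, and the target storage class land in the same `selection` stanza of the plan's `persistentVolumes` list. A sketch with a hypothetical PV name and storage class:
+
[source,yaml]
----
spec:
  persistentVolumes:
  - name: pv-example                    # hypothetical PV name
    selection:
      action: copy
      copyMethod: filesystem            # "filesystem" or "snapshot"
      verify: true                      # checksum verification, filesystem copy only
      storageClass: cephfs              # hypothetical target storage class
----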
. Click *Next*.
. In the *Migration options* screen, the *Use direct image migration* and *Use direct PV migration for filesystem copies* options are selected if you specified an image registry route for the source cluster.
+
Direct migration is much faster than migrating files and images with a replication repository.
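+
In the `MigPlan` resource, these console options map to two booleans, where `false` means direct migration is used. A sketch, assuming the `indirect*` field names:
+
[source,yaml]
----
spec:
  indirectImageMigration: false         # false = migrate images directly
  indirectVolumeMigration: false        # false = direct PV migration for filesystem copies
----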
. If you want to add a migration hook, click *Add Hook* and perform the following steps for each migration:
.. Specify the name of the hook.
.. Select an *Ansible playbook* or a *Custom container image* for a hook written in another language.
.. Click *Browse* to upload the playbook.
.. Optional: If you are not using the default Ansible runtime image, specify a custom Ansible image.
.. Specify the cluster on which you want the hook to run, the service account name, and the namespace.
.. Select the migration step at which you want the hook to run:
* PreBackup: Before backup tasks are started on the source cluster
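+
For orientation, a hook created in the web console corresponds to a `MigHook` custom resource that is referenced from the plan's `hooks` list. A sketch with hypothetical names; the image placeholder stands in for the default Ansible runtime image or your custom image:
+
[source,yaml]
----
apiVersion: migration.openshift.io/v1alpha1
kind: MigHook
metadata:
  name: prebackup-hook                  # hypothetical hook name
  namespace: openshift-migration
spec:
  custom: false                         # false = Ansible playbook, true = custom container image
  image: <hook_runner_image>            # default Ansible runtime image or a custom image
  playbook: <base64_encoded_playbook>
  targetCluster: source                 # cluster on which the hook runs
----
+
In the plan itself, the hook is then referenced together with the service account, the namespace, and the migration step, for example, `phase: PreBackup`.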