[id='migration-creating-migration-plan-cam_{context}']
= Creating a migration plan in the {mtc-short} web console

You can create a migration plan in the {mtc-full} ({mtc-short}) web console.

You can use _direct image migration_ and _direct volume migration_ to migrate images or volumes directly from the source cluster to the target cluster. Direct migration improves performance significantly.

.Prerequisites

* You must add the source and target clusters and a replication repository to the {mtc-short} web console.
* The clusters must have network access to each other and to the replication repository.
* The clusters must be able to communicate using OpenShift routes on port 443.
* The clusters must have no `Critical` conditions.
* The clusters must be in a `Ready` state.
* The migration plan name must not exceed 253 lower-case alphanumeric characters (`a-z, 0-9`) and must not contain spaces or underscores (`_`).
* PV `Move` copy method: The clusters must have network access to the remote volume.
* PV `Snapshot` copy method:
** The clusters must have the same cloud provider (AWS, GCP, or Azure).
** The clusters must be located in the same geographic region.
** The storage class must be the same on the source and target clusters.

* Direct image migration:
** The source cluster must have its internal registry exposed to external traffic.
** The exposed registry route of the source cluster must be added to the cluster configuration in the {mtc-short} web console or with the `exposedRegistryPath` parameter in the `MigCluster` CR manifest.

* Direct volume migration:
** The PVs to be migrated must be valid and in a `Bound` state.
** The PV migration method must be `Copy` and the copy method must be `filesystem`.
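The registry route requirement for direct image migration can also be met by editing the source cluster's `MigCluster` CR directly. The following is a minimal, abbreviated sketch, not a complete manifest: the cluster name and registry route host are placeholders, and fields such as the service account secret reference are omitted.

[source,yaml]
----
apiVersion: migration.openshift.io/v1alpha1
kind: MigCluster
metadata:
  name: source-cluster            # placeholder cluster name
  namespace: openshift-migration
spec:
  isHostCluster: false
  # Exposed route to the source cluster's internal registry (placeholder host):
  exposedRegistryPath: docker-registry-default.apps.example.com
----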

.Procedure

. In the {mtc-short} web console, click *Migration plans*.
. Click *Add migration plan*.
. Enter the *Plan name* and click *Next*.
. Select a *Source cluster*.
. Select a *Target cluster*.
. Select a *Replication repository*.
. Select the projects to be migrated and click *Next*.
. In the *Persistent volumes* screen, click a *Migration type* for each PV:

* The *Copy* option copies the data from the PV of a source cluster to the replication repository and then restores it on a newly created PV, with similar characteristics, in the target cluster.
* The *Move* option unmounts a remote volume, for example, NFS, from the source cluster, creates a PV resource on the target cluster pointing to the remote volume, and then mounts the remote volume on the target cluster. Applications running on the target cluster use the same remote volume that the source cluster was using. The remote volume must be accessible to the source and target clusters.

. Click *Next*.
. In the *Copy options* screen, select a *Copy method* for each PV:

* *Snapshot copy* backs up and restores data using the cloud provider's snapshot functionality. It is significantly faster than *Filesystem copy*.
* *Filesystem copy* backs up the files on the source cluster and restores them on the target cluster.

. Optional: Select *Verify copy* to verify data migrated with *Filesystem copy*. Data is verified by generating a checksum for each source file and checking the checksum after restoration. Data verification significantly reduces performance.
. Select a *Target storage class*.
+
You can change the storage class of data migrated with *Filesystem copy*.
. Click *Next*.
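The selections made in the wizard are recorded on the `MigPlan` CR. The following is a hedged sketch of how one PV selection might appear in the manifest; the plan name, namespace to migrate, PV name, and storage class are all placeholders.

[source,yaml]
----
apiVersion: migration.openshift.io/v1alpha1
kind: MigPlan
metadata:
  name: my-migration-plan         # placeholder plan name
  namespace: openshift-migration
spec:
  namespaces:
  - my-app                        # placeholder project to migrate
  persistentVolumes:
  - name: pvc-3df9c98a            # placeholder PV name
    selection:
      action: copy                # Migration type: Copy
      copyMethod: filesystem      # Copy method: Filesystem copy
      verify: true                # Verify copy (reduces performance)
      storageClass: gp2           # placeholder target storage class
----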