
Planning

This section focuses on considerations to review when you plan your migration.

Migration tools

Migration Toolkit for Containers

The Migration Toolkit for Containers (MTC) migrates an application workload, including Kubernetes resources, data, and images, from an OpenShift 3 source cluster to an OpenShift 4 target cluster.

MTC performs the migration in two stages:

  1. The application workload is backed up from the source cluster to object storage.
  2. The application workload is restored to the target cluster from object storage.
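These two stages are driven by MTC custom resources. As a hedged sketch (the plan name and namespace are hypothetical), a migration can be triggered with a MigMigration resource that references a previously created MigPlan:

```yaml
apiVersion: migration.openshift.io/v1alpha1
kind: MigMigration
metadata:
  name: my-app-migration          # hypothetical name
  namespace: openshift-migration
spec:
  migPlanRef:
    name: my-app-plan             # hypothetical MigPlan created beforehand
    namespace: openshift-migration
  stage: false                    # false = final migration (backup + restore)
  quiescePods: true               # scale down source pods before cutover
```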

Migrating Kubernetes resources

MTC migrates all namespaced resources, including Custom Resources. MTC can dynamically discover all the API resources in each referenced namespace.

MTC migrates some cluster-scoped resources. If a namespaced resource references a cluster-scoped resource, it is migrated. Migratable resources include persistent volumes (PVs) bound to a persistent volume claim, cluster role bindings, and security context constraints.

Migrating persistent volume data

MTC has two options for migrating persistent volume data:

  • Move: The PV definition is moved from the source cluster to the target cluster without touching the data. This is the fastest option.

  • Copy: MTC copies either a snapshot or the file system of the PV.

See PV move and PV copy for requirements and details.
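The choice between move and copy is made per PV in the migration plan. As a sketch (PV and storage class names are hypothetical), the persistentVolumes section of a MigPlan might look like:

```yaml
spec:
  persistentVolumes:
  - name: pv-example-1            # hypothetical source PV
    selection:
      action: move                # reuse the PV definition on the target cluster
  - name: pv-example-2
    selection:
      action: copy
      copyMethod: filesystem      # or: snapshot
      storageClass: gp2           # hypothetical target storage class
```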

Migrating internal images

Internal images created by S2I builds are migrated. Each ImageStream reference in a given namespace is copied to the registry of the target cluster.

When to use MTC

Ideally, you could migrate an application from one cluster to another by redeploying the application from a pipeline and perhaps copying the persistent volume data.

However, this might not be possible in the real world. A running application on the cluster might experience unforeseen changes and, over time, drift away from its initial deployment. MTC can handle scenarios where you are not certain what your namespace contains and you want to migrate all of its contents to a new cluster.

If you can redeploy your application from a pipeline, that is the best option. If not, use MTC.

MTC documentation

Upstream migration tools

You can migrate PVs with pvc-migrate or images with imagestream-migrate.

These upstream tools offer advantages for large-scale migrations, for example:

  • ~50+ ImageStreams per namespace
  • Multiple ~100GB+ persistent volumes

The tools are smaller and more focused. They are based on Ansible playbooks, Python code snippets, Rsync, and Skopeo, which simplifies customization and debugging. Their performance is better than that of MTC.

Comparison of MTC and upstream tools

|               | MTC | Upstream tools |
|---------------|-----|----------------|
| Processing    | Serial processing. MTC processes one migration plan at a time, one backup/restore operation at a time. (MTC plans to support parallel execution in the future. See velero-487.) | Parallel processing. imagestream-migrate can perform parallel image migrations. |
| Copying       | Two copy processes: backup and restore. | Single copy process. |
| Debugging     | Challenging. A migration error can span different clusters, namespaces, and controller logs. | Relatively easy. Tools are based on Ansible and Python. |
| Customization | Difficult. Requires updating the Golang code, recompiling, and then updating the MTC Operator to deliver the new version. | Relatively easy. Tools are based on Ansible and Python. |

Combining MTC and upstream tools

You can use a combination of upstream tools and MTC for migration.

Before migration, check your environment for the following requirements:

  • There must be a direct network connection between the source and target clusters. A process running on each node of the source cluster must be able to connect to an exposed route on the target cluster.
  • The host running pvc-migrate requires root access to each node of the source cluster.
  • PVs must be provisioned by OpenShift Container Storage. pvc-migrate does not support other storage providers.

The migration workflow is similar to the following procedure:

  1. Configure MTC to omit PVs and/or images from the migration plan by setting the following parameters in the MigrationController manifest:

         disable_image_migration: true
         disable_pv_migration: true

  2. Migrate the application workload with MTC.

  3. Run pvc-migrate to migrate PVs and/or imagestream-migrate to migrate images.
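The two parameters above live in the spec of the MigrationController custom resource. A minimal sketch, with all other spec fields omitted:

```yaml
apiVersion: migration.openshift.io/v1alpha1
kind: MigrationController
metadata:
  name: migration-controller
  namespace: openshift-migration
spec:
  disable_image_migration: true   # images handled by imagestream-migrate
  disable_pv_migration: true      # PVs handled by pvc-migrate
```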

Migration environment considerations

This section describes environment considerations to review when you plan your migration:

  • Consider how stored data will be migrated if you are migrating stateful applications.
  • Consider how much downtime your application can tolerate during migration.
  • Plan for traffic redirection during migration.

Migration workflows

MTC workflow

MTC migrates applications from OCP 3 to OCP 4 in production and non-production environments.

The following diagram describes the MTC workflow:

[Diagram: MTC-based workflow]

CI/CD workflow

A CI/CD pipeline deploys applications on OCP 4 production and non-production environments.

The following diagram describes a CI/CD workflow:

[Diagram: CI/CD-based workflow]

Network traffic migration strategies

This section describes strategies for migrating network traffic for stateless applications.

Each strategy is based on this scenario:

  • Applications are deployed on the 4.x cluster.
  • If necessary, the 4.x router default certificate includes the 3.x wildcard SAN.
  • Each application adds an additional route with the 3.x host name.
  • Optional: The route with the 3.x host name contains an appropriate certificate.
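For example, the additional route with the 3.x host name could be declared as follows (host name, service name, and certificate contents are hypothetical):

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: app1-ocp3-hostname
spec:
  host: app1.apps.ocp3.example.com   # 3.x FQDN, hypothetical
  to:
    kind: Service
    name: app1
  tls:
    termination: edge
    certificate: |                   # certificate covering the 3.x host name
      -----BEGIN CERTIFICATE-----
      ...
      -----END CERTIFICATE-----
    key: |
      -----BEGIN PRIVATE KEY-----
      ...
      -----END PRIVATE KEY-----
```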

"Big Bang" migration

At migration, the 3.x wildcard DNS record is changed to point to the 4.x router virtual IP address (VIP).

[Diagram: "Big Bang" migration]

Individual applications

At migration, a new record is created for each application with the 3.x FQDN/host name pointing to the 4.x router VIP. This record takes precedence over the 3.x wildcard DNS record.
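In BIND zone-file terms (domain names and IP addresses are hypothetical), the per-application record overrides the wildcard because DNS resolution prefers an exact match:

```
; 3.x wildcard still points at the 3.x router VIP
*.apps.ocp3.example.com.     IN  A  192.0.2.10

; migrated application: exact record takes precedence over the wildcard
app1.apps.ocp3.example.com.  IN  A  192.0.2.20   ; 4.x router VIP
```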

[Diagram: Individual applications migration]

Canary-style migration of individual applications

A VIP/proxy with two backends, the 3.x router VIP and the 4.x router VIP, is created for each application.

At migration, a new record is created for each application with the 3.x FQDN/host name pointing to the VIP/proxy. This record takes precedence over the 3.x wildcard DNS record.

The proxy entry for the application is configured to route X% of the traffic to the 3.x router VIP and (100-X)% of the traffic to the 4.x VIP.

X is gradually moved from 100 to 0.
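With HAProxy as the VIP/proxy, the X / (100-X) split can be expressed with server weights. A sketch with X=90 (backend name and addresses are hypothetical):

```
backend app1
    balance roundrobin
    server ocp3 192.0.2.10:443 weight 90   # 3.x router VIP: X% of traffic
    server ocp4 192.0.2.20:443 weight 10   # 4.x router VIP: (100-X)%
```

Lowering the ocp3 weight and raising the ocp4 weight over successive reloads moves X from 100 toward 0.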

[Diagram: Canary-style migration]

Audience-based migration of individual applications

A VIP/proxy with two backends, the 3.x router VIP and the 4.x router VIP, is created for each application.

At migration, a new record is created for each application with the 3.x FQDN/host name pointing to the VIP/proxy. This record takes precedence over the 3.x wildcard DNS record.

The proxy entry for the application is configured to route traffic matching a given header pattern, for example, test customers, to the 4.x router VIP and the rest of the traffic to the 3.x VIP.

Traffic is moved to the 4.x VIP in waves until all the traffic is on the 4.x VIP.
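With HAProxy, the header-based routing can be sketched with an ACL (the header name and value are hypothetical; any pattern identifying the test audience works):

```
frontend app1
    bind :443
    acl test_customer hdr(X-Customer-Group) -m str test
    use_backend ocp4 if test_customer     # test customers go to the 4.x VIP
    default_backend ocp3                  # everyone else stays on the 3.x VIP
```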

[Diagram: Audience-based migration]