Commit 1502b7c

Merge pull request #41620 from stoobie/MIG-1017
MIG-1017 Document network bridge solution for air gapped customers
2 parents 54f4fc2 + b040dea commit 1502b7c

File tree

3 files changed: +126 −1 lines changed


migrating_from_ocp_3_to_4/advanced-migration-options-3-4.adoc

Lines changed: 7 additions & 0 deletions
@@ -11,6 +11,13 @@ You can automate your migrations and modify the `MigPlan` and `MigrationControll
 
 include::modules/migration-terminology.adoc[leveloffset=+1]
 
+include::modules/migration-migrating-on-prem-to-cloud.adoc[leveloffset=+1]
+
+[role="_additional-resources"]
+.Additional resources
+* For information about creating a MigCluster CR manifest for each remote cluster, see xref:../migrating_from_ocp_3_to_4/advanced-migration-options-3-4.adoc#migration-migrating-applications-api_advanced-migration-options-3-4[Migrating an application by using the {mtc-short} API].
+* For information about adding a cluster using the web console, see xref:../migrating_from_ocp_3_to_4/migrating-applications-3-4.adoc#migrating-applications-mtc-web-console_migrating-applications-3-4[Migrating your applications by using the {mtc-short} web console].
+
 [id="migrating-applications-cli_{context}"]
 == Migrating applications by using the command line
 

modules/migration-adding-cluster-to-cam.adoc

Lines changed: 1 addition & 1 deletion
@@ -15,7 +15,7 @@ You can add a cluster to the {mtc-full} ({mtc-short}) web console.
 ** You must specify the Azure resource group name for the cluster.
 ** The clusters must be in the same Azure resource group.
 ** The clusters must be in the same geographic location.
-* If you are using direct image migration, you must expose a route to
+* If you are using direct image migration, you must expose a route to the image registry of the source cluster.
 
 .Procedure
 
modules/migration-migrating-on-prem-to-cloud.adoc

Lines changed: 118 additions & 0 deletions
@@ -0,0 +1,118 @@
// Module included in the following assemblies:
//
// * migrating_from_ocp_3_to_4/advanced-migration-options-3-4.adoc
// * migration_toolkit_for_containers/advanced-migration-options-mtc.adoc

:_content-type: PROCEDURE
[id="migration-migrating-applications-on-prem-to-cloud_{context}"]
= Migrating an application from on-premises to a cloud-based cluster

You can migrate from a source cluster that is behind a firewall to a cloud-based destination cluster by establishing a network tunnel between the two clusters. The `crane tunnel-api` command establishes such a tunnel by creating a VPN tunnel on the source cluster and then connecting to a VPN server running on the destination cluster. The VPN server is exposed to the client by using a load balancer address on the destination cluster.

A service created on the destination cluster exposes the source cluster's API to {mtc-short}, which is running on the destination cluster.

.Prerequisites

* The system that creates the VPN tunnel must have access to, and must be logged in to, both clusters.
* It must be possible to create a load balancer on the destination cluster. Check with your cloud provider to confirm that this is possible.
* Have names prepared for the namespaces, on both the source cluster and the destination cluster, in which the VPN tunnel runs. Do not create these namespaces in advance. For information about namespace rules, see \https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#dns-subdomain-names.
* When connecting multiple firewall-protected source clusters to the cloud cluster, each source cluster requires its own namespace.
* The OpenVPN server is installed on the destination cluster.
* The OpenVPN client is installed on the source cluster.
* When configuring the source cluster in {mtc-short}, the API URL takes the form `\https://proxied-cluster.<namespace>.svc.cluster.local:8443`.
** If you use the API, see _Create a MigCluster CR manifest for each remote cluster_.
** If you use the {mtc-short} web console, see _Migrating your applications by using the {mtc-short} web console_.
* The {mtc-short} web console and the Migration Controller must be installed on the target cluster.

.Procedure

. Install the `crane` utility:
+
[source,terminal,subs="+quotes"]
----
$ podman cp $(podman create registry.redhat.io/rhmtc/openshift-migration-controller-rhel8:v1.7.0):/crane ./
----

. Log in remotely to a node on the source cluster and a node on the destination cluster.

. Obtain the cluster context for both clusters after logging in:
+
[source,terminal,subs="+quotes"]
----
$ oc config view
----

. Establish a tunnel by entering the following command on the system that creates the VPN tunnel:
+
[source,terminal,subs="+quotes"]
----
$ crane tunnel-api [--namespace <namespace>] \
--destination-context <destination-cluster> \
--source-context <source-cluster>
----
+
If you do not specify a namespace, the command uses the default value `openvpn`.
+
For example:
+
[source,terminal,subs="+quotes"]
----
$ crane tunnel-api --namespace my_tunnel \
--destination-context openshift-migration/c131-e-us-east-containers-cloud-ibm-com/admin \
--source-context default/192-168-122-171-nip-io:8443/admin
----
+
[TIP]
====
See all available parameters for the `crane tunnel-api` command by entering `crane tunnel-api --help`.
====
+
The command generates TLS/SSL certificates. This process might take several minutes. A message appears when the process completes.
+
The OpenVPN server starts on the destination cluster and the OpenVPN client starts on the source cluster.
+
After a few minutes, the load balancer resolves on the source node.
+
[TIP]
====
You can view the log for the OpenVPN pods to check the status of this process by entering the following commands with root privileges:

[source,terminal,subs="+quotes"]
----
# oc get po -n <namespace>
----

.Example output
[source,terminal]
----
NAME         READY   STATUS    RESTARTS   AGE
<pod_name>   2/2     Running   0          44s
----

[source,terminal,subs="+quotes"]
----
# oc logs -f -n <namespace> <pod_name> -c openvpn
----

When the address of the load balancer is resolved, the message `Initialization Sequence Completed` appears at the end of the log.
====

. On the OpenVPN server, which is on a destination control node, verify that the `openvpn` service and the `proxied-cluster` service are running:
+
[source,terminal,subs="+quotes"]
----
$ oc get service -n <namespace>
----

. On the source node, get the service account (SA) token for the migration controller:
+
[source,terminal]
----
# oc sa get-token -n openshift-migration migration-controller
----

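If you add the source cluster by using the {mtc-short} API rather than the web console, the token obtained in this step is typically stored, base64-encoded, in a `Secret` on the destination cluster. The following is a minimal sketch, assuming the `openshift-config` namespace used by the {mtc-short} API flow; the secret name is a placeholder:
+
[source,yaml]
----
apiVersion: v1
kind: Secret
metadata:
  name: <source_secret>       # placeholder name
  namespace: openshift-config
data:
  saToken: <base64_encoded_sa_token>
----
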
. Open the {mtc-short} web console and add the source cluster, using the following values:
+
* *Cluster name*: The source cluster name.
* *URL*: `proxied-cluster.<namespace>.svc.cluster.local:8443`. If you did not define a value for `<namespace>`, use `openvpn`.
* *Service account token*: The token of the migration controller service account.
* *Exposed route host to image registry*: `proxied-cluster.<namespace>.svc.cluster.local:5000`. If you did not define a value for `<namespace>`, use `openvpn`.

After {mtc-short} has successfully validated the connection, you can proceed to create and run a migration plan. The namespace for the source cluster should appear in the list of namespaces.
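When you drive {mtc-short} through the API instead of the web console, the same four values map onto a `MigCluster` CR on the destination cluster, as described in _Migrating an application by using the {mtc-short} API_. The following is a hedged sketch based on the `migration.openshift.io/v1alpha1` API; the CR name and the referenced secret name are placeholders, and the secret is assumed to hold the base64-encoded SA token:

[source,yaml]
----
apiVersion: migration.openshift.io/v1alpha1
kind: MigCluster
metadata:
  name: <source_cluster>            # placeholder name
  namespace: openshift-migration
spec:
  isHostCluster: false
  # API URL of the tunneled source cluster; use "openvpn" if no namespace was set
  url: https://proxied-cluster.<namespace>.svc.cluster.local:8443
  # Exposed route host to the source image registry
  exposedRegistryPath: proxied-cluster.<namespace>.svc.cluster.local:5000
  insecure: false
  serviceAccountSecretRef:
    name: <source_secret>           # secret containing the SA token
    namespace: openshift-config
----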
