modules/nw-kuryr-cleanup.adoc: 33 additions & 26 deletions
@@ -12,9 +12,15 @@ plugin, you must clean up the resources that Kuryr created previously.
 [NOTE]
 ====
 The cleanup process relies on a Python virtual environment to ensure that the package versions that you use support tags for Octavia objects. You do not need a virtual environment if you are certain that your environment uses at minimum:
-* `openstacksdk` version 0.54.0
-* `python-openstackclient` version 5.5.0
-* `python-octaviaclient` version 2.3.0
+
+* The `openstacksdk` Python package version 0.54.0
+
+* The `python-openstackclient` Python package version 5.5.0
+
+* The `python-octaviaclient` Python package version 2.3.0
+
 If you decide to use these particular versions, be sure to pull `python-neutronclient` prior to version 9.0.0, because version 9.0.0 and later prevent you from accessing trunks.

 . To remove Kuryr finalizers from all pods, enter the following command:
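For context, the virtual environment that the note above recommends can be created as follows. This is a sketch only: the directory name is made up, and the `pip` pins are shown as comments, with the version floors taken from the note and the `python-neutronclient` cap reflecting its warning about trunk access.

```shell
# Create a throwaway virtual environment for the cleanup session.
venvdir="$(mktemp -d)/kuryr-cleanup"
python3 -m venv "${venvdir}"

# Inside it you would pin the minimums from the note, for example:
#   "${venvdir}/bin/pip" install 'openstacksdk>=0.54.0' \
#       'python-openstackclient>=5.5.0' 'python-octaviaclient>=2.3.0' \
#       'python-neutronclient<9.0.0'   # 9.0.0 and later lose trunk access

"${venvdir}/bin/python" --version
```

Activating the venv (`. "${venvdir}/bin/activate"`) gives the `(venv) $` prompt shown in the steps below.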
@@ -184,9 +190,10 @@ This command deletes the `KuryrPort` CRs.
+
 [source,terminal]
 ----
-(venv) $ read -ra trunks <<< $(python -c "import openstack; n = openstack.connect().network; print(''.join([x.id for x in n.trunks(any_tags='$CLUSTERTAG')]))") && \
+(venv) $ mapfile trunks < <(python -c "import openstack; n = openstack.connect().network; print('\n'.join([x.id for x in n.trunks(any_tags='$CLUSTERTAG')]))") && \

 for port in $(python -c "import openstack; n = openstack.connect().network; print(' '.join([x.id for x in n.ports(network_id='$netID') if x.device_owner != 'network:router_interface']))"); do
-( openstack port delete $port ) &
+( openstack port delete "${port}" ) &

 # Only allow 20 jobs in parallel.
 if [[ $(jobs -r -p | wc -l) -ge 20 ]]; then
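The port-deletion step above fans out background jobs and caps concurrency by counting running jobs with `jobs -r -p | wc -l`. The throttling pattern can be tried in isolation; in this sketch, `sleep` stands in for `openstack port delete`, the cap is 3 instead of 20, and `wait -n` (bash 4.3+) is an assumption about the loop body, which is truncated in the hunk above.

```shell
# Collect IDs into an array, as mapfile does in the step above.
mapfile -t ids < <(printf 'id-%s\n' 1 2 3 4 5 6)

for id in "${ids[@]}"; do
  # Stand-in for: ( openstack port delete "${id}" ) &
  ( sleep 0.1 ) &

  # Only allow 3 jobs in parallel.
  if [[ $(jobs -r -p | wc -l) -ge 3 ]]; then
    wait -n  # block until any one background job exits
  fi
done
wait  # drain the remaining jobs
echo "processed ${#ids[@]} ids"
```

The subshell around each delete keeps a single failure from aborting the loop, and the final `wait` ensures no deletes are still in flight when the next step runs.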
@@ -226,26 +233,26 @@ for kn in "${kuryrnetworks[@]}"; do
 . To remove the Kuryr security group, enter the following command:
+
 [source,terminal]
 ----
-(venv) $ openstack security group delete ${CLUSTERID}-kuryr-pods-security-group
+(venv) $ openstack security group delete "${CLUSTERID}-kuryr-pods-security-group"
 ----

 . To remove all tagged subnet pools, enter the following command:
+
 [source,terminal]
 ----
-(venv) $ for subnetpool in $(openstack subnet pool list --tags $CLUSTERTAG -f value -c ID); do
-openstack subnet pool delete $subnetpool
+(venv) $ for subnetpool in $(openstack subnet pool list --tags "${CLUSTERTAG}" -f value -c ID); do
+openstack subnet pool delete "${subnetpool}"
 done
 ----
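The only change in the commands above is quoting: `"${subnetpool}"` instead of `$subnetpool`. The failure mode the quotes prevent is word splitting of the unquoted expansion, which is easy to demonstrate with a value containing whitespace (the variable and value here are made up for the demo):

```shell
value='a  b'   # hypothetical value containing whitespace

# Unquoted: the shell splits the expansion into separate arguments.
set -- $value
unquoted_count=$#

# Quoted: the value is passed through as a single argument.
set -- "${value}"
quoted_count=$#

echo "unquoted: ${unquoted_count} args, quoted: ${quoted_count} arg"
```

This prints `unquoted: 2 args, quoted: 1 arg` with the default `IFS`. OpenStack IDs are UUIDs and do not contain spaces, so the quoting here is defensive, but it costs nothing and matches shell best practice.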
@@ -254,7 +261,7 @@ done
 [source,terminal]
 ----
 (venv) $ networks=$(oc get kuryrnetwork -A --no-headers -o custom-columns=":status.netId") && \
-for existingNet in $(openstack network list --tags $CLUSTERTAG -f value -c ID); do
+for existingNet in $(openstack network list --tags "${CLUSTERTAG}" -f value -c ID); do
 if [[ $networks =~ $existingNet ]]; then
 echo "Network still exists: $existingNet"
 fi
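The loop above decides whether a tagged network is still referenced by matching `$existingNet` against the `$networks` list with `=~`. Because `=~` treats its right-hand side as a regular expression, and UUIDs contain only regex-safe characters, this behaves as a substring test. A sketch with made-up UUIDs:

```shell
# Hypothetical output of the oc query: one network UUID per line.
networks='11111111-2222-3333-4444-555555555555
aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee'

existingNet='aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee'
if [[ $networks =~ $existingNet ]]; then
  echo "Network still exists: $existingNet"
  match=yes
else
  match=no
fi
```

Note that the right-hand side of `=~` is deliberately left unquoted; quoting it would force a literal match, which also works for UUIDs but is not what the original command does.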
@@ -268,7 +275,7 @@ If the command returns any existing networks, investigate and remove them before
 [source,terminal]
 ----
 (venv) $ for sgid in $(openstack security group list -f value -c ID -c Description | grep 'Kuryr-Kubernetes Network Policy' | cut -f 1 -d ' '); do
-openstack security group delete $sgid
+openstack security group delete "${sgid}"
 done
 ----
@@ -283,7 +290,7 @@ done
+
 [source,terminal]
 ----
-(venv) $ if $(python3 -c "import sys; import openstack; n = openstack.connect().network; r = n.get_router('$ROUTERID'); sys.exit(0) if r.description != 'Created By OpenShift Installer' else sys.exit(1)"); then
-openstack router delete $ROUTERID
+(venv) $ if python3 -c "import sys; import openstack; n = openstack.connect().network; r = n.get_router('$ROUTERID'); sys.exit(0) if r.description != 'Created By OpenShift Installer' else sys.exit(1)"; then
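The fix in this hunk is worth spelling out: `if $(python3 -c ...)` substitutes the command's stdout and then tries to execute that (usually empty) output, so the branch never reflects the script's exit status. The corrected form, `if python3 -c ...`, branches on the exit code directly. A self-contained reproduction, with a hardcoded description standing in for the router lookup:

```shell
# Hypothetical router description; the real step reads r.description.
DESC='Created By Kuryr'

if python3 -c "import sys; sys.exit(0) if '$DESC' != 'Created By OpenShift Installer' else sys.exit(1)"; then
  verdict=delete   # stand-in for: openstack router delete "$ROUTERID"
else
  verdict=keep
fi
echo "verdict: ${verdict}"
```

With a non-installer description the interpreter exits 0 and the `then` branch runs; flipping `DESC` to `Created By OpenShift Installer` makes it exit 1 and takes the `else` branch, leaving the installer-created router alone.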
** To specify a different cluster network IP address block, enter the following command:

@@ -250,24 +250,6 @@ You cannot change the service network address block during the migration.
 You cannot use any CIDR block that overlaps with the `100.64.0.0/16` CIDR block because the OVN-Kubernetes network provider uses this block internally.
 ====

-. Verify that the Multus daemon set rollout is complete by entering the following command:
-+
-[source,terminal]
-----
-$ oc -n openshift-multus rollout status daemonset/multus
-----
-+
-The name of the Multus pods is in the form of `multus-<xxxxx>`, where `<xxxxx>` is a random sequence of letters. It might take several moments for the pods to restart.
-+
-.Example output
-[source,text]
-----
-Waiting for daemon set "multus" rollout to finish: 1 out of 6 new pods have been updated...
-...
-Waiting for daemon set "multus" rollout to finish: 5 of 6 updated pods are available...
-daemon set "multus" successfully rolled out
-----
-
 . To complete the migration, reboot each node in your cluster. For example, you can use a bash script similar to the following example. The script assumes that you can connect to each host by using `ssh` and that you have configured `sudo` to not prompt for a password:
+
 [source,bash]
@@ -286,7 +268,7 @@ done
 If SSH access is not available, you can use the `openstack` command:
 [source,terminal]
 ----
-$ for name in $(openstack server list --name ${CLUSTERID}\* -f value -c Name); do openstack server reboot $name; done
+$ for name in $(openstack server list --name "${CLUSTERID}*" -f value -c Name); do openstack server reboot "${name}"; done
 ----
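The rewritten reboot loop moves the wildcard inside the quotes, `"${CLUSTERID}*"`, so the pattern reaches the `openstack` CLI verbatim rather than relying on backslash escaping. The difference in what the command actually receives can be shown with `printf` as a stand-in for the CLI (the cluster ID and filename here are made up):

```shell
CLUSTERID='mycluster'          # hypothetical cluster ID
cd "$(mktemp -d)"              # scratch dir so globbing is predictable
touch mycluster-node0          # a file an unquoted glob would match

unquoted=$(printf '%s' ${CLUSTERID}*)     # shell expands against files
quoted=$(printf '%s' "${CLUSTERID}*")     # pattern passed through verbatim

echo "unquoted: ${unquoted}"
echo "quoted:   ${quoted}"
```

Here the unquoted form prints the matching filename `mycluster-node0`, while the quoted form prints the literal pattern `mycluster*`, which is what the server-side `--name` filter needs to see.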
 Alternatively, you might be able to reboot each node through the management portal for
 your infrastructure provider. Otherwise, contact the appropriate authority to
@@ -341,6 +323,6 @@ You might encounter pods that have a `Terminating` state due to finalizers that