Commit 159391b

Merge pull request #63315 from opayne1/OCPBUGS-5756
OCPBUGS#6756: Adds conditional updates in web console and some reorg
2 parents 787d176 + c3a70e7

File tree

5 files changed: +80 -24 lines changed

modules/before-updating-ocp.adoc

Lines changed: 19 additions & 0 deletions

@@ -0,0 +1,19 @@
+// Module included in the following assemblies:
+//
+// * updating/updating_a_cluster/updating-cluster-web-console.adoc
+
+:_mod-docs-content-type: CONCEPT
+[id="before-updating-ocp_{context}"]
+= Before updating the {product-title} cluster
+
+Before updating, consider the following:
+
+* You have recently backed up etcd.
+
+* If `minAvailable` is set to `1` in `PodDisruptionBudget`, the nodes are drained to apply pending machine configs, which might block the eviction process. If several nodes are rebooted, all the pods might run on only one node, and the `PodDisruptionBudget` field can prevent the node drain.
+
+* If your cluster uses manually maintained credentials, you might need to update the cloud provider resources for the new release.
+
+* You must review administrator acknowledgement requests, take any recommended actions, and provide the acknowledgement when you are ready.
+
+* To accommodate the time it takes to update, you can perform a partial update by updating the worker or custom pool nodes. You can pause and resume within the progress bar of each pool.
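The `PodDisruptionBudget` bullet above can be illustrated with a minimal manifest; the name and selector below are hypothetical:

[source,yaml]
----
# Hypothetical sketch: with minAvailable: 1 and only one replica still
# running, evicting that pod during a node drain would violate the
# budget, so the drain can stall until another replica is available.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: example-pdb
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: example
----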

modules/update-changing-update-server-web.adoc

Lines changed: 5 additions & 0 deletions

@@ -12,6 +12,11 @@ ifdef::openshift-origin[]
 Changing the update server is optional.
 endif::openshift-origin[]

+.Prerequisites
+* You have access to the cluster with `cluster-admin` privileges.
+
+* You have access to the {product-title} web console.
+
 .Procedure

 . Navigate to *Administration* -> *Cluster Settings* and click *version*.
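The update server that this procedure changes corresponds to the `upstream` field of the `ClusterVersion` resource; a hedged sketch, with a placeholder channel and update server URL:

[source,yaml]
----
apiVersion: config.openshift.io/v1
kind: ClusterVersion
metadata:
  name: version
spec:
  channel: stable-4.14                                      # placeholder channel
  upstream: https://example.com/api/upgrades_info/v1/graph  # placeholder update server URL
----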
modules/update-conditional-web-console.adoc

Lines changed: 33 additions & 0 deletions

@@ -0,0 +1,33 @@
+// Module included in the following assemblies:
+//
+// * updating/updating_a_cluster/updating-cluster-web-console.adoc
+
+:_mod-docs-content-type: PROCEDURE
+[id="update-conditional-web-console_{context}"]
+= Viewing conditional updates in the web console
+
+With conditional updates, you can view and assess the risks associated with particular updates.
+
+.Prerequisites
+* You have access to the cluster with `cluster-admin` privileges.
+
+* You have access to the {product-title} web console.
+
+* Pause all `MachineHealthCheck` resources.
+
+* Your Operators that were previously installed through Operator Lifecycle Manager (OLM) are updated to their latest version in their latest channel. Updating the Operators ensures they have a valid update path when the default OperatorHub catalogs switch from the current minor version to the next during a cluster update.
+
+* Your machine config pools (MCPs) are running and not paused. Nodes associated with a paused MCP are skipped during the update process. You can pause the MCPs if you are performing an advanced update strategy, such as a canary rollout, an EUS update, or a control-plane update.
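The `MachineHealthCheck` pause prerequisite above is typically satisfied by annotating the resource; a minimal sketch, assuming the `cluster.x-k8s.io/paused` annotation and a hypothetical resource name:

[source,yaml]
----
apiVersion: machine.openshift.io/v1beta1
kind: MachineHealthCheck
metadata:
  name: example-mhc                # hypothetical name
  namespace: openshift-machine-api
  annotations:
    cluster.x-k8s.io/paused: ""    # pauses remediation for the duration of the update
----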
+
+.Procedure
+
+. From the web console, click *Administration* -> *Cluster Settings* and review the contents of the *Details* tab.
+
+. Enable `Include supported but not recommended versions` in the `Select new version` dropdown of the *Update cluster* modal to populate the dropdown list with conditional updates.
++
+[NOTE]
+====
+If you select a `Supported but not recommended` version, more information is provided about potential issues with that version.
+====
+
+. Review the notification detailing the potential risks of updating.
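The conditional updates shown in the web console are also surfaced in the `ClusterVersion` resource status; a hedged sketch of the shape (the version and risk name are hypothetical):

[source,yaml]
----
# Inspect with: oc get clusterversion version -o yaml
status:
  conditionalUpdates:
  - release:
      version: 4.14.1         # hypothetical target version
    risks:
    - name: ExampleRisk       # hypothetical risk identifier
      message: A short description of the potential issue with this version.
----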

modules/update-upgrading-web.adoc

Lines changed: 10 additions & 1 deletion

@@ -18,9 +18,18 @@ link:https://access.redhat.com/downloads/content/290[in the errata section] of t

 .Prerequisites

-* Have access to the web console as a user with `admin` privileges.
+* Have access to the web console as a user with `cluster-admin` privileges.
+
+* You have access to the {product-title} web console.
+
 * Pause all `MachineHealthCheck` resources.

+* Your Operators that were previously installed through Operator Lifecycle Manager (OLM) are updated to their latest version in their latest channel. Updating the Operators ensures they have a valid update path when the default OperatorHub catalogs switch from the current minor version to the next during a cluster update.
+
+* Your machine config pools (MCPs) are running and not paused. Nodes associated with a paused MCP are skipped during the update process. You can pause the MCPs if you are performing a canary rollout update strategy.
+
+* Your {op-system-base}7 workers are replaced with {op-system-base}8 or {op-system} workers before updating to {product-title} {product-version}. Red Hat does not support in-place {op-system-base}7 to {op-system-base}8 updates for {op-system-base} workers; those hosts must be replaced with a clean operating system install.
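For the MCP bullet above, pausing and resuming a pool during a canary rollout is controlled by the `spec.paused` field; a minimal sketch with a hypothetical custom pool name:

[source,yaml]
----
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: workers-canary    # hypothetical custom pool
spec:
  paused: true            # nodes in this pool are skipped until you resume the pool
----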
+
 .Procedure

 . From the web console, click *Administration* -> *Cluster Settings* and review the contents of the *Details* tab.

updating/updating_a_cluster/updating-cluster-web-console.adoc

Lines changed: 13 additions & 23 deletions

@@ -19,49 +19,39 @@ You can update, or upgrade, an {product-title} cluster by using the web console.
 Use the web console or `oc adm upgrade channel _<channel>_` to change the update channel. You can follow the steps in xref:../../updating/updating_a_cluster/updating-cluster-cli.adoc#updating-cluster-cli[Updating a cluster using the CLI] to complete the update after you change to a {product-version} channel.
 ====

-== Prerequisites
-
-* Have access to the cluster as a user with `admin` privileges.
-See xref:../../authentication/using-rbac.adoc#using-rbac[Using RBAC to define and apply permissions].
-* Have a recent xref:../../backup_and_restore/control_plane_backup_and_restore/backing-up-etcd.adoc#backup-etcd[etcd backup] in case your update fails and you must restore your cluster to a previous state.
-* Support for {op-system-base}7 workers is removed in {product-title} {product-version}. You must replace {op-system-base}7 workers with {op-system-base}8 or {op-system} workers before updating to {product-title} {product-version}. Red Hat does not support in-place {op-system-base}7 to {op-system-base}8 updates for {op-system-base} workers; those hosts must be replaced with a clean operating system install.
-* Ensure all Operators previously installed through Operator Lifecycle Manager (OLM) are updated to their latest version in their latest channel. Updating the Operators ensures they have a valid update path when the default OperatorHub catalogs switch from the current minor version to the next during a cluster update. See xref:../../operators/admin/olm-upgrading-operators.adoc#olm-upgrading-operators[Updating installed Operators] for more information.
-* Ensure that all machine config pools (MCPs) are running and not paused. Nodes associated with a paused MCP are skipped during the update process. You can pause the MCPs if you are performing a canary rollout update strategy.
-//remove this???^ or maybe just add another bullet that you can break up the update?
-* To accommodate the time it takes to update, you are able to do a partial update by updating the worker or custom pool nodes. You can pause and resume within the progress bar of each pool.
-* If your cluster uses manually maintained credentials, update the cloud provider resources for the new release. For more information, including how to determine if this is a requirement for your cluster, see xref:../../updating/preparing_for_updates/preparing-manual-creds-update.adoc#preparing-manual-creds-update[Preparing to update a cluster with manually maintained credentials].
-* Review the list of APIs that were removed in Kubernetes 1.27, migrate any affected components to use the new API version, and provide the administrator acknowledgment. For more information, see xref:../../updating/preparing_for_updates/updating-cluster-prepare.adoc#updating-cluster-prepare[Preparing to update to {product-title} 4.14].
-* If you run an Operator or you have configured any application with the pod disruption budget, you might experience an interruption during the update process. If `minAvailable` is set to 1 in `PodDisruptionBudget`, the nodes are drained to apply pending machine configs which might block the eviction process. If several nodes are rebooted, all the pods might run on only one node, and the `PodDisruptionBudget` field can prevent the node drain.
+include::modules/before-updating-ocp.adoc[leveloffset=+1]

 [IMPORTANT]
 ====
 * When an update is failing to complete, the Cluster Version Operator (CVO) reports the status of any blocking components while attempting to reconcile the update. Rolling your cluster back to a previous version is not supported. If your update is failing to complete, contact Red Hat support.
 * Using the `unsupportedConfigOverrides` section to modify the configuration of an Operator is unsupported and might block cluster updates. You must remove this setting before you can update your cluster.
 ====

+include::modules/update-changing-update-server-web.adoc[leveloffset=+1]
+
 [role="_additional-resources"]
 .Additional resources

-* xref:../../architecture/architecture-installation.adoc#unmanaged-operators_architecture-installation[Support policy for unmanaged Operators]
-
-include::modules/update-using-custom-machine-config-pools-canary.adoc[leveloffset=+1]
-
-If you want to use the canary rollout update process, see xref:../../updating/updating_a_cluster/update-using-custom-machine-config-pools.adoc#update-using-custom-machine-config-pools[Performing a canary rollout update].
+* xref:../../updating/understanding_updates/understanding-update-channels-release.adoc#understanding-update-channels-releases[Understanding update channels and releases]

 include::modules/machine-health-checks-pausing-web-console.adoc[leveloffset=+1]

-include::modules/updating-sno.adoc[leveloffset=+1]
+include::modules/update-upgrading-web.adoc[leveloffset=+1]
+
+include::modules/update-conditional-web-console.adoc[leveloffset=+1]

 [role="_additional-resources"]
 .Additional resources

-* For information on which machine configuration changes require a reboot, see the note in xref:../../architecture/control-plane.adoc#about-machine-config-operator_control-plane[About the Machine Config Operator].
+* xref:../../updating/understanding_updates/understanding-update-channels-release.adoc#conditional-updates-overview_understanding-update-channels-releases[Update recommendations and Conditional Updates]

-include::modules/update-upgrading-web.adoc[leveloffset=+1]
+include::modules/update-using-custom-machine-config-pools-canary.adoc[leveloffset=+1]

-include::modules/update-changing-update-server-web.adoc[leveloffset=+1]
+If you want to use the canary rollout update process, see xref:../../updating/updating_a_cluster/update-using-custom-machine-config-pools.adoc#update-using-custom-machine-config-pools[Performing a canary rollout update].
+
+include::modules/updating-sno.adoc[leveloffset=+1]

 [role="_additional-resources"]
 .Additional resources

-* xref:../../updating/understanding_updates/understanding-update-channels-release.adoc#understanding-update-channels-releases[Understanding update channels and releases]
+* xref:../../architecture/control-plane.adoc#about-machine-config-operator_control-plane[About the Machine Config Operator]
