The status of cluster Operators includes their condition type, which informs you of the current state of your Operator's health. The following definitions cover some common ClusterOperator condition types. Operators that have additional condition types and use Operator-specific language have been omitted.
The Cluster Version Operator (CVO) is responsible for collecting the status conditions from cluster Operators so that cluster administrators can better understand the state of the {product-title} cluster.
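
For example, you can review the conditions that the CVO aggregates by using the OpenShift CLI (`oc`). The following commands are an illustrative sketch; the Operator name `kube-apiserver` is only an example:

[source,terminal]
----
# Summarize the Available, Progressing, and Degraded conditions for all cluster Operators
$ oc get clusteroperators

# Show the full condition list, including messages, for a single cluster Operator
$ oc describe clusteroperator kube-apiserver
----
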
//----
* Available:
The condition type `Available` indicates that an Operator is functional and available in the cluster. If the status is `False`, at least one part of the operand is non-functional and the condition requires an administrator to intervene.

* Progressing:
The condition type `Progressing` indicates that an Operator is actively rolling out new code, propagating configuration changes, or otherwise moving from one steady state to another.
+
Operators do not report the condition type `Progressing` as `True` when they are reconciling a previous known state. If the observed cluster state has changed and the Operator is reacting to it, then the status reports back as `True`, since it is moving from one steady state to another.

* Degraded:
The condition type `Degraded` indicates that an Operator has a current state that does not match its required state over a period of time. The period of time can vary by component, but a `Degraded` status represents persistent observation of an Operator's condition. As a result, an Operator does not fluctuate in and out of the `Degraded` state.
+
There might be a different condition type if the transition from one state to another does not persist over a long enough period to report `Degraded`.
An Operator does not report `Degraded` during the course of a normal update. An Operator may report `Degraded` in response to a persistent infrastructure failure that requires eventual administrator intervention.
+
[NOTE]
====
This condition type is only an indication that something may need investigation and adjustment. As long as the Operator is available, the `Degraded` condition does not cause user workload failure or application downtime.
====

* Upgradeable:
The condition type `Upgradeable` indicates whether the Operator is safe to update based on the current cluster state. The message field contains a human-readable description of what the administrator needs to do for the cluster to successfully update. The CVO allows updates when this condition is `True`, `Unknown` or missing.
+
When the `Upgradeable` status is `False`, only minor updates are impacted, and the CVO prevents the cluster from performing impacted updates unless forced.
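
You can query these condition types directly on any `ClusterOperator` resource. The following sketch uses the `etcd` cluster Operator as an example; substitute any Operator name:

[source,terminal]
----
# Print every condition type reported by one cluster Operator, with its status and message
$ oc get clusteroperator etcd -o jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\t"}{.message}{"\n"}{end}'

# Check a single condition, for example Upgradeable
$ oc get clusteroperator etcd -o jsonpath='{.status.conditions[?(@.type=="Upgradeable")].status}'
----
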
modules/understanding-upgrade-channels.adoc (2 additions, 2 deletions)
Choosing the appropriate channel involves two decisions.

First, select the minor version you want for your cluster update. Selecting a channel that matches your current version ensures that you apply only z-stream updates and do not receive feature updates. Selecting an available channel with a version greater than your current version ensures that, after one or more updates, your cluster updates to that version. Your cluster is offered only channels that match its current version, the next version, or the next EUS version.

[NOTE]
====
Due to the complexity involved in planning updates between versions that are many minor versions apart, channels that assist in planning updates beyond a single EUS-to-EUS update are not offered.
====

Second, choose your desired rollout strategy. You can update as soon as Red Hat declares a release GA by selecting from the fast channels, or you can wait for Red Hat to promote releases to the stable channel. Update recommendations offered in the `fast-{product-version}` and `stable-{product-version}` channels are both fully supported and benefit equally from ongoing data analysis. The delay before a release is promoted to the stable channel is the only difference between the two channels. Updates to the latest z-streams are generally promoted to the stable channel within a week or two; however, the delay when initially rolling out updates to the latest minor version is much longer, generally 45 to 90 days. Consider the promotion delay when choosing your desired channel, because waiting for promotion to the stable channel may affect your scheduling plans.
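
A minimal sketch of both decisions with the OpenShift CLI follows; the channel name is a placeholder that you replace with a channel supported for your minor version:

[source,terminal]
----
# Show the channel that the cluster currently follows
$ oc get clusterversion version -o jsonpath='{.spec.channel}{"\n"}'

# Switch to a different channel, for example a fast or newer stable channel
$ oc adm upgrade channel <channel>
----
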
modules/update-service-overview.adoc (3 additions, 3 deletions)
The OpenShift Update Service (OSUS) provides update recommendations to {product-title}, including {op-system-first}. It provides a graph, or diagram, that contains the _vertices_ of component Operators and the _edges_ that connect them. The edges in the graph show which versions you can safely update to. The vertices are update payloads that specify the intended state of the managed cluster components.
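
If your cluster can reach the public update service, you can inspect this graph yourself. The endpoint, channel, and architecture below are illustrative assumptions; the JSON response lists the graph's vertices and edges:

[source,terminal]
----
# Fetch the update graph for one channel and architecture from the hosted update service
$ curl -s -H 'Accept: application/json' \
    'https://api.openshift.com/api/upgrades_info/v1/graph?channel=stable-4.16&arch=amd64'
----
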
The Cluster Version Operator (CVO) in your cluster checks with the OpenShift Update Service to see the valid updates and update paths based on current component versions and information in the graph. When you request an update, the CVO uses the corresponding release image to update your cluster. The release artifacts are hosted in Quay as container images.
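
For example, you can view the update recommendations that the CVO has retrieved, and then request one of them; the target version is a placeholder:

[source,terminal]
----
# List the updates that are currently recommended for this cluster
$ oc adm upgrade

# Request an update to one of the recommended versions
$ oc adm upgrade --to=<version>
----
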
////
By accepting automatic updates, you can automatically
keep your cluster up to date with the most recent compatible components.
[IMPORTANT]
====
Only updating to a newer version is supported. Reverting or rolling back your cluster to a previous version is not supported. If your update fails, contact Red Hat support.
====

During the update process, the Machine Config Operator (MCO) applies the new configuration to your cluster machines. The MCO cordons the number of nodes specified by the `maxUnavailable` field on the machine configuration pool and marks them as unavailable. By default, this value is set to `1`. The MCO updates the affected nodes alphabetically by zone, based on the `topology.kubernetes.io/zone` label. If a zone has more than one node, the oldest nodes are updated first. For nodes that do not use zones, such as in bare metal deployments, the nodes are updated by age, with the oldest nodes updated first. The MCO updates that number of nodes at a time, applying the new configuration and rebooting each machine.
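
A sketch of how you might observe and tune this behavior; the pool name `worker` and the value `2` are examples only:

[source,terminal]
----
# Watch each machine config pool as the MCO rolls out the new configuration
$ oc get machineconfigpools

# List nodes with the zone label that the MCO uses to order updates
$ oc get nodes -L topology.kubernetes.io/zone

# Allow the MCO to update two nodes in the worker pool at a time instead of the default of one
$ oc patch machineconfigpool worker --type merge -p '{"spec":{"maxUnavailable":2}}'
----
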
If you use {op-system-base-full} machines as workers, the MCO does not update the kubelet because you must update the OpenShift API on the machines first.
xref:../updating/understanding-upgrade-channels-release.adoc#understanding-upgrade-channels-releases[Update channels and releases]: With update channels, you can choose an update strategy. Update channels are specific to a minor version of {product-title}. Update channels only control release selection and do not impact the version of the cluster that you install. The `openshift-install` binary file for a specific version of the {product-title} always installs that minor version. For more information, see the following:
* xref:../updating/understanding-upgrade-channels-release.adoc#upgrade-version-paths_understanding-upgrade-channels-releases[Upgrading version paths]
* xref:../updating/understanding-upgrade-channels-release.adoc#fast-stable-channel-strategies_understanding-upgrade-channels-releases[Understanding fast and stable channel use and strategies]
With update channels, you can choose an update strategy. Update channels are specific to a minor version of {product-title}. Update channels only control release selection and do not impact the version of the cluster that you install. The `openshift-install` binary file for a specific version of the {product-title} always installs that minor version.
{product-title} {product-version} offers the following update channel:
xref:../updating/updating_a_cluster/updating-hosted-control-planes.adoc#updating-hosted-control-planes[Updating hosted control planes]: On hosted control planes for {product-title}, updates are decoupled between the control plane and the nodes. Your service cluster provider, which is the user that hosts the cluster control planes, can manage the updates as needed. The hosted cluster handles control plane updates, and node pools handle node updates. For more information, see the following:
* xref:../updating/updating_a_cluster/updating-hosted-control-planes.adoc#updates-for-hosted-control-planes_updating-hosted-control-planes[Updates for hosted control planes]
* xref:../updating/updating_a_cluster/updating-hosted-control-planes.adoc#updating-node-pools-for-hcp_updating-hosted-control-planes[Updating node pools for hosted control planes]
updating/understanding-upgrade-channels-release.adoc (2 additions, 2 deletions)
{product-title} {product-version} offers the following update channels:
* `stable-{product-version}`
* `eus-4.y` (only offered for EUS versions and meant to facilitate updates between EUS versions)
* `fast-{product-version}`
* `candidate-{product-version}`
If you do not want the Cluster Version Operator to fetch available updates from the update recommendation service, you can use the `oc adm upgrade channel` command in the OpenShift CLI to configure an empty channel. This configuration can be helpful if, for example, a cluster has restricted network access and there is no local, reachable update recommendation service.
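
For example, clearing the channel so that the CVO stops requesting recommendations might look like the following sketch:

[source,terminal]
----
# Passing no channel name configures an empty channel
# (an assumption; confirm with `oc adm upgrade channel -h`)
$ oc adm upgrade channel
----
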
[WARNING]
====
Red Hat recommends updating only to versions suggested by the OpenShift Update Service. For a minor version update, versions must be contiguous. Red Hat does not test updates to noncontiguous versions and cannot guarantee compatibility with earlier versions.