// release_notes/ocp-4-18-release-notes.adoc
For any {product-title} release, always properly review the instructions on xref:../updating/updating_a_cluster/updating-cluster-web-console.adoc#updating-cluster-web-console[updating your cluster].
====

// 4.18.22
[id="ocp-4-18-22_{context}"]
=== RHSA-2025:13325 - {product-title} {product-version}.22 bug fix and security update

Issued: 13 August 2025

{product-title} release {product-version}.22 is now available. The list of bug fixes that are included in the update is documented in the link:https://access.redhat.com/errata/RHSA-2025:13325[RHSA-2025:13325] advisory. The RPM packages that are included in the update are provided by the link:https://access.redhat.com/errata/RHBA-2025:13326[RHBA-2025:13326] advisory.

Space precluded documenting all of the container images for this release in the advisory.

You can view the container images in this release by running the following command:

[source,terminal]
----
$ oc adm release info 4.18.22 --pullspecs
----

[id="ocp-4-18-22-bug-fixes_{context}"]
==== Bug fixes

* Before this update, destroying a cluster in the unsupported region `mx-central-1` caused the destroyer to fail to find a partition and not exit. As a consequence, you could not destroy a Red Hat OpenShift Container Platform cluster in the `mx-central-1` region due to constant error reporting. With this release, the destroyer does not report errors for the unsupported region `mx-central-1`, which enables the successful destruction of a cluster. (link:https://issues.redhat.com/browse/OCPBUGS-49749[OCPBUGS-49749])

* Before this update, destroying a cluster in the unsupported region `mx-central-1` caused the destroyer to fail to find a partition and not exit. As a consequence, you could not destroy a Red Hat OpenShift Container Platform cluster in the `mx-central-1` region due to constant error reporting. With this release, the destroyer does not report errors for the unsupported region `mx-central-1`, which enables the successful destruction of a cluster. (link:https://issues.redhat.com/browse/OCPBUGS-56177[OCPBUGS-56177])

* Before this update, combined specification and status update lists triggered unnecessary firmware upgrades, which caused system downtime. With this release, a firmware upgrade optimization skips unnecessary firmware upgrades. (link:https://issues.redhat.com/browse/OCPBUGS-56766[OCPBUGS-56766])

* Before this update, the `console-telemetry` plugin used the wrong API endpoint for tracking usage. As a consequence, usage tracking in the `console-telemetry` plugin failed with a `Forbidden` error. With this release, the `console-telemetry` plugin posts usage data to `/api/metrics/usage` instead of `/metrics/usage`. As a result, the `console-telemetry` plugin does not receive a `Forbidden` error, which ensures accurate usage tracking. (link:https://issues.redhat.com/browse/OCPBUGS-58364[OCPBUGS-58364])

* Before this update, the installer failed when {aws-first} credentials were not found and the survey was attempting to list all {aws-short} regions, which prevented users from creating the install-config. With this release, the installer no longer fails when {aws-short} credentials are not set, allowing users to enter them during the survey. (link:https://issues.redhat.com/browse/OCPBUGS-59155[OCPBUGS-59155])

* Before this update, when a `HostedCluster` was configured with a proxy URL such as `http://user:pass@host`, the authentication header was not forwarded by the konnectivity proxy to the user proxy, so authentication failed. With this release, the proper authentication header is sent when a user name and password are specified in the proxy URL. (link:https://issues.redhat.com/browse/OCPBUGS-59503[OCPBUGS-59503])

* Before this update, the upgrade to 4.18.15 introduced stale Network Address Translation (NAT) handling for egress IPs, which led to duplicate NAT entries and hosts with the same egress IP allocation. As a consequence, egress IP allocation became inconsistent and unreliable, which caused traffic flow disruptions. With this release, the egress IP controller removes stale NAT and Source Network Address Translation (SNAT) rules for deleted nodes and pods. As a result, egress IP allocation is reliable and consistent. (link:https://issues.redhat.com/browse/OCPBUGS-59531[OCPBUGS-59531])

* Before this update, `oc-mirror` did not detect Helm chart images that used an aliased sub-chart. As a consequence, the Helm chart images were missing after mirroring. With this release, `oc-mirror` detects and mirrors Helm chart images that use an aliased sub-chart. (link:https://issues.redhat.com/browse/OCPBUGS-59798[OCPBUGS-59798])

* Before this update, when you mirrored a zero-length `.tar` file, `oc-mirror` ran indefinitely due to the empty archive file. As a consequence, no progress occurred when you mirrored a 0-byte `.tar` file. With this release, 0-byte `.tar` files are detected and reported as errors, which prevents `oc-mirror` from hanging. (link:https://issues.redhat.com/browse/OCPBUGS-59864[OCPBUGS-59864])

* Before this update, in multi-zone clusters with only a single worker per zone, if the Monitoring Operator's Prometheus pods were scheduled to nodes that reboot back-to-back and both reboots took longer than 15 minutes to return to service, the Monitoring Operator might have become degraded. With this release, the timeout has been extended to 20 minutes to prevent the Monitoring Operator from entering a degraded state on common cluster topologies. Clusters where the two nodes with Prometheus pods reboot back-to-back and take more than 20 minutes might still report a degraded state until the second node and Prometheus pod return to a normal state. (link:https://issues.redhat.com/browse/OCPBUGS-59962[OCPBUGS-59962])

[id="ocp-4-18-22-updating_{context}"]
==== Updating

To update an {product-title} 4.18 cluster to this latest release, see xref:../updating/updating_a_cluster/updating-cluster-cli.adoc#updating-cluster-cli[Updating a cluster using the CLI].
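
For example, assuming that the {product-version}.22 release is available in your cluster's current update channel, you can start the update from the CLI by running a command like the following:

[source,terminal]
----
$ oc adm upgrade --to=4.18.22
----

You can first run `oc adm upgrade` with no arguments to list the update versions that are available in the current channel.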

// 4.18.21
[id="ocp-4-18-21_{context}"]
=== RHSA-2025:11677 - {product-title} {product-version}.21 bug fix and security update