* Before this update, {aws-short} compute machine sets could include a null value for the `userDataSecret` parameter.
Using a null value sometimes caused machines to get stuck in the `Provisioning` state. With this release, the `userDataSecret` parameter requires a value.
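+
The following sketch shows where this parameter is set in an {aws-short} compute machine set; the machine set name and secret name are illustrative.
+
[source,yaml]
----
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: example-worker-us-east-1a # illustrative name
  namespace: openshift-machine-api
spec:
  template:
    spec:
      providerSpec:
        value:
          # The parameter must reference an existing secret; a null value is no longer accepted.
          userDataSecret:
            name: worker-user-data
----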
* Before this update, {product-title} clusters on {aws-short} that were created with version 4.13 or earlier could not update to version 4.19.
Clusters that were created with version 4.14 or later have an {aws-short} `cloud-conf` ConfigMap by default, and this ConfigMap is required starting in {product-title} 4.19.
With this release, the Cloud Controller Manager Operator creates a default `cloud-conf` ConfigMap when none is present on the cluster.
This change enables clusters that were created with version 4.13 or earlier to update to version 4.19.
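+
The following minimal sketch shows the kind of `cloud-conf` ConfigMap the Operator creates when none is present; the namespace and data key shown here are assumptions and might differ on your cluster.
+
[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: cloud-conf
  # Assumed namespace for the cloud controller manager configuration.
  namespace: openshift-cloud-controller-manager
data:
  # Assumed data key; the Operator populates suitable default settings.
  cloud.conf: ""
----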
* Before this update, a `failed to find machine for node ...` error appeared in the logs when the `InternalDNS` address for a machine was not set as expected.
As a consequence, the user might interpret this error as the machine not existing.
With this release, the log message reads `failed to find machine with InternalDNS matching ...`.
As a result, the user has a clearer indication of why the match is failing.
* Before this update, a bug fix altered the availability set configuration by changing the fault domain count to use the maximum available value instead of being fixed at 2.
This inadvertently caused scaling issues for compute machine sets that were created prior to the bug fix, because the controller attempted to modify immutable availability sets.
With this release, availability sets are no longer modified after creation, allowing affected compute machine sets to scale properly.
* Before this update, compute machine sets migrating from the Cluster API to the Machine API got stuck in the `Migrating` state.
As a consequence, the compute machine set could not finish transitioning to use a different authoritative API or perform further reconciliation of the `MachineSet` object status.
With this release, the migration controllers watch for changes in Cluster API resources and react to authoritative API transitions.
As a result, compute machine sets successfully transition from the Cluster API to the Machine API.
* Before this update, the `MachineHealthCheck` custom resource definition (CRD) did not document the default value for the `maxUnhealthy` field.
With this release, the CRD documents the default value.
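+
If you prefer an explicit value over the documented default, you can set the field directly, as in the following sketch; the resource names and threshold are placeholders.
+
[source,yaml]
----
apiVersion: machine.openshift.io/v1beta1
kind: MachineHealthCheck
metadata:
  name: example-health-check # placeholder name
  namespace: openshift-machine-api
spec:
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machineset: example-machineset
  # Explicit threshold; omit the field to rely on the default documented in the CRD.
  maxUnhealthy: "40%"
  unhealthyConditions:
  - type: Ready
    status: "False"
    timeout: "300s"
----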
* Before this update, it was possible to specify the use of the `CapacityReservationsOnly` capacity reservation behavior and Spot Instances in the same machine template.
As a consequence, machines with these two incompatible settings were created.
With this release, validation of machine templates ensures that these two incompatible settings do not co-occur.
As a result, machines with these two incompatible settings cannot be created.
* Before this update, on clusters that support migrating Machine API resources to Cluster API resources, deleting a nonauthoritative machine did not delete the corresponding authoritative machine.
As a consequence, orphaned machines that should have been cleaned up remained on the cluster and could cause a resource leak.
With this release, deleting a nonauthoritative machine triggers propagation of the deletion to the corresponding authoritative machine.
As a result, deletion requests on nonauthoritative machines correctly cascade, preventing orphaned authoritative machines and ensuring consistency in machine cleanup.
* Before this update, on clusters that support migrating Machine API resources to Cluster API resources, the {cluster-capi-operator} could create an authoritative Cluster API compute machine set in the `Paused` state.
As a consequence, the newly created Cluster API compute machine set could not reconcile or scale machines even though it was using the authoritative API.
With this release, the Operator now ensures that Cluster API compute machine sets are created in an unpaused state when the Cluster API is authoritative.
As a result, newly created Cluster API compute machine sets are reconciled immediately and scaling and machine lifecycle operations proceed as intended when the Cluster API is authoritative.
* Before this update, scaling large numbers of nodes was slow because scaling requires reconciling each machine several times and machines were reconciled one at a time.
With this release, up to ten machines can be reconciled concurrently.
This change improves the processing speed for machines during scaling.
* Before this update, the {cluster-capi-operator} status controller used an unsorted list of related objects, leading to status updates when there were no functional changes.
As a consequence, users would see significant noise in the {cluster-capi-operator} object and in logs due to continuous and unnecessary status updates.
With this release, the status controller logic sorts the list of related objects before comparing them for changes.
As a result, a status update only occurs when there is a change to the Operator's state.
* Before this update, the Control Plane Machine Set configuration used availability zones from compute machine sets.
This is not a valid configuration.
As a consequence, the Control Plane Machine Set could not be generated when the control plane machines were in a single zone while compute machine sets spanned multiple zones.
With this release, the Control Plane Machine Set derives an availability zone configuration from existing control plane machines.
As a result, the Control Plane Machine Set generates a valid zone configuration that accurately reflects the current control plane machines.
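+
The following sketch shows the kind of single-zone failure domain configuration that is derived from existing control plane machines, using {aws-short} as one example; the zone and subnet values are placeholders.
+
[source,yaml]
----
apiVersion: machine.openshift.io/v1
kind: ControlPlaneMachineSet
metadata:
  name: cluster
  namespace: openshift-machine-api
spec:
  template:
    machineType: machines_v1beta1_machine_openshift_io
    machines_v1beta1_machine_openshift_io:
      failureDomains:
        platform: AWS
        aws:
        # A single zone that matches the existing control plane machines.
        - placement:
            availabilityZone: us-east-1a
          subnet:
            type: Filters
            filters:
            - name: tag:Name
              values:
              - example-cluster-subnet-private-us-east-1a
----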
* Before this update, the controller that annotates a Machine API compute machine set did not check whether the Machine API was authoritative before adding scale-from-zero annotations.
As a consequence, the controller repeatedly added these annotations and caused a loop of continuous changes to the `MachineSet` object.
With this release, the controller checks the value of the `authoritativeAPI` field before adding scale-from-zero annotations.
As a result, the controller avoids the looping behavior by only adding these annotations to a Machine API compute machine set when the Machine API is authoritative.
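+
The following sketch shows the field that the controller now checks before adding scale-from-zero annotations; the resource name is a placeholder.
+
[source,yaml]
----
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: example-machineset # placeholder name
  namespace: openshift-machine-api
spec:
  # The annotations are added only when the Machine API is authoritative.
  authoritativeAPI: MachineAPI
----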
* Before this update, the Machine API Operator attempted to reconcile `Machine` resources on platforms other than {aws-short} where the `.status.authoritativeAPI` field was not populated.
As a consequence, compute machines remained in the `Provisioning` state indefinitely and never became operational.
With this release, the Machine API Operator now populates the empty `.status.authoritativeAPI` field with the corresponding value in the machine specification.
A guard is also added to the controllers to handle cases where this field might still be empty.
As a result, `Machine` and `MachineSet` resources are reconciled properly and compute machines no longer remain in the `Provisioning` state indefinitely.
* Before this update, the Machine API Provider Azure used an outdated version of the Azure SDK that relied on an API version without support for referencing a Capacity Reservation group.
As a consequence, creating a Machine API machine that referenced a Capacity Reservation group in another subscription resulted in an Azure API error.
With this release, the Machine API Provider Azure uses a version of the Azure SDK that supports this configuration.
As a result, creating a Machine API machine that references a Capacity Reservation group in another subscription works as expected.
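+
The following sketch shows how a compute machine set might reference a Capacity Reservation group in another subscription by its full resource ID; the field placement and all identifiers are illustrative placeholders.
+
[source,yaml]
----
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
spec:
  template:
    spec:
      providerSpec:
        value:
          # Full resource ID, which can belong to a different subscription than the cluster.
          capacityReservationGroupID: /subscriptions/<other_subscription_id>/resourceGroups/<resource_group>/providers/Microsoft.Compute/capacityReservationGroups/<group_name>
----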
* Before this update, the two-way synchronization controller on clusters that support migrating Machine API resources to Cluster API resources did not correctly compare the machine specification when converting an authoritative Cluster API machine template to a Machine API machine set.
As a consequence, changes to the Cluster API machine template specification were not synchronized to the Machine API machine set.
With this release, changes to the comparison logic resolve the issue.
As a result, the Machine API machine set synchronizes correctly after the Cluster API machine set references the new Cluster API machine template.
* Before this update, the two-way synchronization controller on clusters that support migrating Machine API resources to Cluster API resources did not delete the machine template when its corresponding Machine API machine set was deleted.
As a consequence, unneeded Cluster API machine templates persisted in the cluster and cluttered the `openshift-cluster-api` namespace.
With this release, the two-way synchronization controller correctly handles deletion synchronization for the machine template.
As a result, deleting a Machine API authoritative machine set deletes the corresponding Cluster API machine template.
* Before this update, the two-way synchronization controller on clusters that support migrating Machine API resources to Cluster API resources prematurely reported a successful migration.
As a consequence, if any errors occurred when updating the status of related objects, the operation was not retried.
With this release, the controller ensures that all related object statuses are written before reporting a successful status.
As a result, the controller handles errors during migration more reliably.

There is no supported workaround for this issue. (link:https://issues.redhat.com/browse/OCPBUGS-57440[OCPBUGS-57440])
* When installing a cluster on {azure-short}, if you set any of the `compute.platform.azure.identity.type`, `controlPlane.platform.azure.identity.type`, or `platform.azure.defaultMachinePlatform.identity.type` field values to `None`, your cluster is unable to pull images from the Azure Container Registry.
You can avoid this issue by providing a user-assigned identity or by leaving the identity field blank.
In both cases, the installation program generates a user-assigned identity.
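+
The following `install-config.yaml` sketch illustrates the workaround; the `UserAssigned` value and the `userAssignedIdentities` sub-fields are assumptions, and all identity values are placeholders.
+
[source,yaml]
----
platform:
  azure:
    defaultMachinePlatform:
      identity:
        # Assumed value for requesting a user-assigned identity explicitly.
        type: UserAssigned
        # Assumed sub-field; supply an existing identity here.
        userAssignedIdentities:
        - name: <identity_name>
          resourceGroup: <resource_group>
          subscription: <subscription_id>
# Alternatively, omit the identity block entirely so that the installation
# program generates a user-assigned identity.
----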