Which jobs are flaking?
capi-e2e-latestk8s-main
capi-e2e-main
capi-e2e-mink8s-main
Which tests are flaking?
- When performing chained upgrades for workload cluster using ClusterClass in a different NS with RuntimeSDK [ClusterClass] Should create, upgrade and delete a workload cluster [ClusterClass]
- When upgrading a workload cluster using ClusterClass in a different NS with RuntimeSDK [ClusterClass] Should create, upgrade and delete a workload cluster [ClusterClass]
https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/periodic-cluster-api-e2e-latestk8s-main/1970292728809918464
https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/periodic-cluster-api-e2e-latestk8s-main/1970428625593307136
https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/periodic-cluster-api-e2e-main/1970373260159750144
https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/periodic-cluster-api-e2e-mink8s-main/1968805162394849280
Since when has it been flaking?
First occurrence observed on 9/13/2025:
https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/periodic-cluster-api-e2e-main/1966700245983170560
Testgrid link
https://testgrid.k8s.io/cluster-api-core-main#capi-e2e-latestk8s-main
Reason for failure (if possible)
[FAILED] Timed out after 10.001s.
Failed to wait for workers to reach version v1.34.1 and Nodes to become healthy
Expected
    <bool>: false
to be true
In [It] at: /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade_runtimesdk.go:1047 @ 09/23/25 11:58:03.657
The tests fail on different Kubernetes versions with the same error.
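For context, the failure output has the shape of a Gomega `Eventually` poll expiring while its condition still returns false. Below is a minimal sketch of that pattern, assuming (not confirmed against the actual code at cluster_upgrade_runtimesdk.go:1047) that the test polls a boolean check with a 10s timeout; `workersAtVersionAndHealthy` is a hypothetical stand-in for the real check:

```go
package main

import (
	"fmt"
	"time"

	"github.com/onsi/gomega"
)

// workersAtVersionAndHealthy is a hypothetical stand-in for the real check:
// it would verify that every worker Node reports the target kubelet version
// and a Ready condition.
func workersAtVersionAndHealthy() bool {
	return false // simulate the flake: the condition never becomes true
}

func main() {
	// Route assertion failures to stdout instead of a Ginkgo test context.
	g := gomega.NewGomega(func(message string, _ ...int) {
		fmt.Println(message)
	})

	// A 10s Eventually poll; when it expires while the function still
	// returns false, Gomega prints the shape seen in the flake:
	//   Timed out after 10.001s.
	//   <description>
	//   Expected
	//       <bool>: false
	//   to be true
	g.Eventually(workersAtVersionAndHealthy, 10*time.Second, 250*time.Millisecond).
		Should(gomega.BeTrue(),
			"Failed to wait for workers to reach version v1.34.1 and Nodes to become healthy")
}
```

Under this reading, the flake means the workers never reached the target version and a healthy state within the polling window, rather than the assertion itself being wrong.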
Anything else we need to know?
No response
Label(s) to be applied
/kind flake
One or more /area label. See https://github.com/kubernetes-sigs/cluster-api/labels?q=area for the list of labels.