Description
Follow-up from PR #1150
Related comment: #1150 (comment)
Requested by: @bartoszmajsak
Three targets stand in the CI lane,
deploy, deploy-dev, deploy-ci by name.
Each waits on one CRD and moves along,
but the rest may lag — a race-prone song.
When multiple CRD sets are applied in full,
a single kubectl wait is not sufficient pull.
The controller may roll out before all CRDs are set,
and flaky failures are a debt we've not paid yet.
So harden the wait, let all CRDs be Established first,
quench the race condition's thirst.
Replace the lone wait with a list complete,
or use --all to make the cadence neat.
Problem
In all three targets (deploy, deploy-dev, deploy-ci), the kubectl wait --for=condition=established call gates on a single CRD even though multiple CRD kustomize overlays are applied beforehand. This creates a potential race condition in CI where other CRDs may not be Established before the controller is rolled out.
Suggested fix
Replace the single-CRD wait with either:
```shell
kubectl wait --for=condition=established --timeout=120s \
  crd/inferenceservices.serving.kserve.io \
  crd/trainedmodels.serving.kserve.io \
  crd/clusterservingruntimes.serving.kserve.io \
  crd/servingruntimes.serving.kserve.io \
  crd/inferencegraphs.serving.kserve.io \
  crd/clusterstoragecontainers.serving.kserve.io \
  crd/llminferenceservices.serving.kserve.io \
  crd/llminferenceserviceconfigs.serving.kserve.io
```

or a broader:

```shell
kubectl wait --for=condition=established --timeout=120s crd --all
```

Scope
- Makefile: deploy target
- Makefile: deploy-dev target
- Makefile.overrides.mk: deploy-ci target
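Applied to one of the scoped targets, the broader form might look like the sketch below. This is illustrative only: the variable names, overlay paths, and surrounding recipe lines are assumptions, not copied from the repository.

```makefile
# Sketch only: KUSTOMIZE, config/crd, and config/default are assumed names.
deploy: manifests kustomize
	$(KUSTOMIZE) build config/crd | kubectl apply -f -
	# Wait for every CRD to reach Established before rolling out the
	# controller, so no single-CRD gate lets the deploy race ahead.
	kubectl wait --for=condition=established --timeout=120s crd --all
	$(KUSTOMIZE) build config/default | kubectl apply -f -
```

Note that `crd --all` waits on every CRD in the cluster, including pre-existing ones; that is harmless, since already-Established CRDs satisfy the condition immediately.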