Is your feature request related to a problem? Please describe
Currently, scheduling constraints for CDI workload pods (importer, cloner, uploader) can only be configured globally via CDI.spec.workload. There is no way to specify different scheduling requirements for individual DataVolumes.
When using local storage provisioners, we need to control which node the importer pod runs on at the DV level.
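For reference, the global knob mentioned above is backed by a NodePlacement type shared with the operator SDK. The excerpt below is illustrative only (field names as I recall them from the controller-lifecycle-operator-sdk API package, not copied from source) and shows that the existing global config is already limited to scheduling fields:

```go
// Illustrative excerpt, not the actual SDK source: the NodePlacement type
// (controller-lifecycle-operator-sdk API package) that backs CDI.spec.workload
// today. All three fields are scheduling-related.
package api

import corev1 "k8s.io/api/core/v1"

// NodePlacement describes node scheduling configuration for workload pods.
type NodePlacement struct {
	// nodeSelector applied to the workload pods.
	// +optional
	NodeSelector map[string]string `json:"nodeSelector,omitempty"`

	// affinity enables pod affinity/anti-affinity placement.
	// +optional
	Affinity *corev1.Affinity `json:"affinity,omitempty"`

	// tolerations applied to the workload pods.
	// +optional
	Tolerations []corev1.Toleration `json:"tolerations,omitempty"`
}
```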
Describe the solution you'd like
Add a workloadPlacement field to DataVolumeSpec, limited to scheduling-related fields only:
```yaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
spec:
  source: ...
  pvc: ...
  # New field - limited to scheduling only
  workloadPlacement:
    nodeSelector: {...}
    affinity: {...}
    tolerations: [...]
```

This approach:
- Only exposes scheduling fields; no env, volumes, or other risky fields
- Reuses the same NodePlacement struct pattern as CDI.spec.workload (see the sketch after this list)
- Optional field; existing DVs continue to use the global config
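A minimal Go sketch of how the proposed field could be added to DataVolumeSpec, assuming the same sdkapi.NodePlacement type is reused. This is not a confirmed design: existing DataVolumeSpec fields are omitted and the package path is illustrative.

```go
// Sketch only, not the actual CDI source: the proposed optional field on
// DataVolumeSpec, reusing the NodePlacement type that already backs
// CDI.spec.workload. Existing DataVolumeSpec fields are omitted for brevity.
package v1beta1

import (
	sdkapi "kubevirt.io/controller-lifecycle-operator-sdk/api"
)

// DataVolumeSpec (excerpt) with the proposed addition.
type DataVolumeSpec struct {
	// WorkloadPlacement restricts scheduling of the transfer pods
	// (importer/cloner/uploader) created for this DataVolume. When nil,
	// the global CDI.spec.workload placement applies.
	// +optional
	WorkloadPlacement *sdkapi.NodePlacement `json:"workloadPlacement,omitempty"`
}
```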
Precedence: DV-level workloadPlacement overrides CDI.spec.workload for that specific DataVolume. If the DV-level field is not specified, CDI falls back to the global config.
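The precedence rule reduces to a small fallback helper in the pod-creation path. The sketch below is hypothetical (resolveWorkloadPlacement is not an existing CDI function); the caller would pass the proposed dv.Spec.WorkloadPlacement pointer:

```go
// Hypothetical helper sketching the proposed precedence: a DV-level
// workloadPlacement wins for that DataVolume; otherwise the global
// CDI.spec.workload placement is used unchanged.
package controller

import (
	sdkapi "kubevirt.io/controller-lifecycle-operator-sdk/api"
)

// resolveWorkloadPlacement picks the placement for one DataVolume's workload
// pods. dvPlacement would come from the proposed dv.Spec.WorkloadPlacement
// field (which does not exist in the current API); global is CDI.spec.workload.
func resolveWorkloadPlacement(dvPlacement *sdkapi.NodePlacement, global sdkapi.NodePlacement) sdkapi.NodePlacement {
	if dvPlacement != nil {
		return *dvPlacement
	}
	return global
}
```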
Describe alternatives you've considered
| Solution | Limitation |
|---|---|
| CDI.spec.workload | Cluster-wide only; cannot vary per DataVolume |
| WaitForFirstConsumer | Waits for consumer pod, but virt-launcher won't start until DV is ready (chicken-and-egg) |
| PV node affinity | Still in alpha; not considered production-ready |
| Third-party solutions (e.g., mutating webhooks) | Adds external dependencies and complexity |
We believe it is better to leverage the native capabilities of the DataVolume and importer pod directly, rather than relying on external workarounds.
Additional context
This issue is split from #3848 to discuss DV-level scheduling constraints separately from scratch space storage class configuration.