Description
When creating a workload cluster using Cluster API with the Harvester provider (CAPHV v0.1.6), VM provisioning fails with a validation error from Harvester's admission webhook. The error states that either memory.guest or resources.limits.memory must be set, but CAPHV only sets resources.requests.memory.
Environment
- CAPHV Version: v0.1.6 (latest release)
- Harvester Version: v1.7.0
- Kubernetes Version: v1.29.8+rke2r1 (management cluster)
- RKE2 Version: v1.29.8+rke2r1 (workload cluster)
Steps to Reproduce
- Install CAPHV v0.1.6 in a Harvester management cluster
- Create a Cluster resource with HarvesterCluster infrastructure
- Create a HarvesterMachineTemplate with a memory specification (e.g., memory: 8Gi)
- Apply the cluster manifest
- Observe the HarvesterMachine status
Expected Behavior
VMs should be created successfully in Harvester with the specified memory configuration.
Actual Behavior
VM creation fails with the following error:

```
Failed to create VM: admission webhook "validator.harvesterhci.io" denied the request: either memory.guest or resources.limits.memory must be set
```
Root Cause
In internal/controller/harvestermachine_controller.go, the buildVMTemplate function sets memory only as a request:

```go
Resources: kubevirtv1.ResourceRequirements{
	Requests: v1.ResourceList{
		"memory": resource.MustParse(hvScope.HarvesterMachine.Spec.Memory),
	},
},
```

However, Harvester's validator (v1.7.0) requires either:

- domain.memory.guest - the guest memory visible to the VM, or
- resources.limits.memory - the memory limit for the VM

Setting only resources.requests.memory is insufficient, so validation fails.
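The webhook's decision logic can be modeled with a small dependency-free sketch (this mirrors the error message above, not the actual Harvester validator source; vmMemory and validate are illustrative names):

```go
package main

import "fmt"

// vmMemory is a simplified stand-in for the memory-related fields of a
// Harvester/KubeVirt VM spec. Empty string means the field is unset.
type vmMemory struct {
	Guest         string // maps to domain.memory.guest
	RequestMemory string // maps to resources.requests.memory
	LimitMemory   string // maps to resources.limits.memory
}

// validate reproduces the check implied by the webhook error: a memory
// request alone is not enough; either a guest size or a limit must be set.
func validate(m vmMemory) error {
	if m.Guest == "" && m.LimitMemory == "" {
		return fmt.Errorf("either memory.guest or resources.limits.memory must be set")
	}
	return nil
}

func main() {
	// What CAPHV v0.1.6 produces today: request only -> rejected.
	fmt.Println(validate(vmMemory{RequestMemory: "8Gi"}))
	// With a limit also set (Option 1 below) -> accepted.
	fmt.Println(validate(vmMemory{RequestMemory: "8Gi", LimitMemory: "8Gi"}))
}
```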
Suggested Fix
Option 1: Set both request and limit (Recommended)
```go
Resources: kubevirtv1.ResourceRequirements{
	Requests: v1.ResourceList{
		"memory": resource.MustParse(hvScope.HarvesterMachine.Spec.Memory),
	},
	Limits: v1.ResourceList{
		"memory": resource.MustParse(hvScope.HarvesterMachine.Spec.Memory),
	},
},
```

Option 2: Set memory.guest in the DomainSpec
```go
// Note: Guest is a *resource.Quantity in the KubeVirt API, so parse into a
// variable first and take its address.
guestMemory := resource.MustParse(hvScope.HarvesterMachine.Spec.Memory)

Domain: kubevirtv1.DomainSpec{
	// ... existing CPU, Devices, etc.
	Memory: &kubevirtv1.Memory{
		Guest: &guestMemory,
	},
	Resources: kubevirtv1.ResourceRequirements{
		Requests: v1.ResourceList{
			"memory": resource.MustParse(hvScope.HarvesterMachine.Spec.Memory),
		},
	},
	// ... rest of DomainSpec
},
```

Recommendation
I recommend Option 1 (setting both request and limit) because:
- It's the simplest change
- It aligns with Kubernetes best practices for memory management
- It ensures the VM gets exactly the memory specified without overcommitment
- It's backward compatible with existing HarvesterMachine specs
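The requests == limits pattern from Option 1 could be factored into a small helper so both values always stay in sync with Spec.Memory (a hypothetical sketch; memoryResources is not part of CAPHV, and plain string maps stand in for v1.ResourceList to keep the example dependency-free):

```go
package main

import "fmt"

// memoryResources derives both the memory request and the memory limit from
// the single HarvesterMachine Spec.Memory string. Because request == limit,
// the pod backing the VM receives Guaranteed QoS for memory, which is what
// makes Option 1 avoid overcommitment.
func memoryResources(memory string) (requests, limits map[string]string) {
	requests = map[string]string{"memory": memory}
	limits = map[string]string{"memory": memory}
	return requests, limits
}

func main() {
	req, lim := memoryResources("8Gi")
	// The two values are always identical by construction.
	fmt.Println(req["memory"] == lim["memory"])
}
```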
Workaround
Currently, there is no workaround without modifying the CAPHV controller code or disabling Harvester's admission webhook (not recommended for production).
Additional Context
This issue affects all cluster creation scenarios using CAPHV v0.1.6 with Harvester v1.7.0. The validation was likely added in recent Harvester versions to enforce proper memory specification, but CAPHV hasn't been updated to comply with this requirement.
Impact
- Severity: High - Blocks all cluster creation
- Scope: All users of CAPHV v0.1.6 with Harvester v1.7.0+
Related Files
internal/controller/harvestermachine_controller.go - buildVMTemplate() function