[BUG] VM creation fails: "either memory.guest or resources.limits.memory must be set" #139

@irony

Description

When creating a workload cluster using Cluster API with the Harvester provider (CAPHV v0.1.6), VM provisioning fails with a validation error from Harvester's admission webhook. The error states that either memory.guest or resources.limits.memory must be set, but CAPHV only sets resources.requests.memory.

Environment

  • CAPHV Version: v0.1.6 (latest release)
  • Harvester Version: v1.7.0
  • Kubernetes Version: v1.29.8+rke2r1 (management cluster)
  • RKE2 Version: v1.29.8+rke2r1 (workload cluster)

Steps to Reproduce

  1. Install CAPHV v0.1.6 in a Harvester management cluster
  2. Create a Cluster resource with HarvesterCluster infrastructure
  3. Create HarvesterMachineTemplate with memory specification (e.g., memory: 8Gi)
  4. Apply the cluster manifest
  5. Observe the HarvesterMachine status

Expected Behavior

VMs should be created successfully in Harvester with the specified memory configuration.

Actual Behavior

VM creation fails with the following error:

Failed to create VM: admission webhook "validator.harvesterhci.io" denied the request: either memory.guest or resources.limits.memory must be set

Root Cause

In internal/controller/harvestermachine_controller.go, the buildVMTemplate function sets memory as a request:

Resources: kubevirtv1.ResourceRequirements{
    Requests: v1.ResourceList{
        "memory": resource.MustParse(hvScope.HarvesterMachine.Spec.Memory),
    },
},

However, Harvester's validator (v1.7.0) requires either:

  1. domain.memory.guest - Guest memory visible to the VM
  2. resources.limits.memory - Memory limit for the VM

Setting only resources.requests.memory is insufficient and causes validation to fail.
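The rejection can be reproduced in isolation. Below is a minimal sketch of the check the webhook appears to apply, inferred from the error message; the struct and function names here are illustrative, not Harvester's actual code:

```go
package main

import "fmt"

// vmMemory models only the three memory fields relevant to this bug;
// empty string means "not set". This is a sketch, not Harvester's types.
type vmMemory struct {
	guest          string // domain.memory.guest
	limitsMemory   string // resources.limits.memory
	requestsMemory string // resources.requests.memory
}

// validateMemory mirrors the rule implied by the webhook error:
// requests.memory alone is not enough.
func validateMemory(m vmMemory) error {
	if m.guest == "" && m.limitsMemory == "" {
		return fmt.Errorf("either memory.guest or resources.limits.memory must be set")
	}
	return nil
}

func main() {
	// What CAPHV v0.1.6 produces: only resources.requests.memory is set.
	fmt.Println(validateMemory(vmMemory{requestsMemory: "8Gi"}))
	// With resources.limits.memory also set, the check passes (prints <nil>).
	fmt.Println(validateMemory(vmMemory{requestsMemory: "8Gi", limitsMemory: "8Gi"}))
}
```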

Suggested Fix

Option 1: Set both request and limit (Recommended)

Resources: kubevirtv1.ResourceRequirements{
    Requests: v1.ResourceList{
        "memory": resource.MustParse(hvScope.HarvesterMachine.Spec.Memory),
    },
    Limits: v1.ResourceList{
        "memory": resource.MustParse(hvScope.HarvesterMachine.Spec.Memory),
    },
},

Option 2: Set memory.guest in DomainSpec

Domain: kubevirtv1.DomainSpec{
    // ... existing CPU, Devices, etc.
    Memory: &kubevirtv1.Memory{
        Guest: resource.MustParse(hvScope.HarvesterMachine.Spec.Memory),
    },
    Resources: kubevirtv1.ResourceRequirements{
        Requests: v1.ResourceList{
            "memory": resource.MustParse(hvScope.HarvesterMachine.Spec.Memory),
        },
    },
    // ... rest of DomainSpec
},

Recommendation

I recommend Option 1 (setting both request and limit) because:

  1. It's the simplest change
  2. It aligns with Kubernetes best practices for memory management
  3. It ensures the VM gets exactly the memory specified without overcommitment
  4. It's backward compatible with existing HarvesterMachine specs
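On point 3: in Kubernetes, a pod whose resource requests equal its limits is eligible for the Guaranteed QoS class, so the virt-launcher pod backing the VM is not overcommitted. A rough sketch of the memory side of that classification (simplified; the real rules also consider CPU and every container in the pod):

```go
package main

import "fmt"

// qosForMemory sketches the memory side of Kubernetes QoS classification.
// Simplified for illustration: actual QoS also depends on CPU settings
// and on all containers in the pod.
func qosForMemory(request, limit string) string {
	switch {
	case request == "" && limit == "":
		return "BestEffort"
	case limit != "" && request == limit:
		return "Guaranteed"
	default:
		return "Burstable"
	}
}

func main() {
	fmt.Println(qosForMemory("8Gi", ""))    // CAPHV v0.1.6 today: Burstable
	fmt.Println(qosForMemory("8Gi", "8Gi")) // Option 1: Guaranteed
}
```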

Workaround

Currently, there is no workaround short of modifying the CAPHV controller code or disabling Harvester's admission webhook (not recommended in production).

Additional Context

This issue affects all cluster creation scenarios using CAPHV v0.1.6 with Harvester v1.7.0. The validation was likely added in recent Harvester versions to enforce proper memory specification, but CAPHV hasn't been updated to comply with this requirement.

Impact

  • Severity: High - Blocks all cluster creation
  • Scope: All users of CAPHV v0.1.6 with Harvester v1.7.0+

Related Files

  • internal/controller/harvestermachine_controller.go - buildVMTemplate() function
