Add brownfield support and production-ready module enhancements#25
Conversation
bootstrap:
- Add network_type (bridge/masquerade), network_interface_name, network_name, network_mac_address for bridge-network VMs
- Add create_ssh_key / ssh_key_ids to attach existing keys instead of generating new ones
- Add create_cloudinit_secret / existing_cloudinit_secret_name for brownfield VMs whose cloud-init already ran
- Add vm_cpu, vm_disk_name, vm_disk_size, vm_disk_auto_delete for configurable hardware matching imported VMs
- Add enable_usb_tablet for console cursor behaviour
- Add create_lb / static_rancher_ip to skip LB when VM is directly reachable via bridge IP
- Remove rancher2_bootstrap resource (caller manages password setup)
- Remove rancher2 provider from required_providers

management/harvester-integration:
- Replace rancher2_setting with rancher2_feature for feature flags
- Add manage_feature_flags, create_cloud_credential, apply_registration, manage_app skip flags for brownfield environments
- Remove kubernetes provider and CoreDNS ConfigMap patch resource
- Replace rancher_hostname/rancher_lb_ip with cloud_credential_name and cluster_labels variables
- Rename catalog from "harvester-extensions" to "harvester"

management/cluster-roles:
- Add vm-manager role template: full VM lifecycle (create, power ops, console/VNC, delete) scoped to tenant project
- Update README with vm-manager docs and usage examples

identity/providers/asgardeo:
- Add requested_claims variable with defaults for email, username, and groups (minimum for Rancher RBAC)

identity/rancher-oidc:
- Add groups_field variable to configure JWT group claim name

workloads/vm:
- New module for provisioning tenant VMs within a Harvester namespace

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Walkthrough

Refactors bootstrap for conditional SSH/cloud-init/LB provisioning, adds a reusable Harvester VM workload module, exposes OIDC claims/groups configuration, introduces vm/network manager role templates, and makes Harvester-integration resources and app/credential/registration optional via new toggles.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 3 passed
Actionable comments posted: 11
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
modules/management/cluster-roles/README.md (1)
63-89: ⚠️ Potential issue | 🟠 Major

Use one `tenant-space` module per tenant in the example.

`tenant-space` already supports multiple `group_role_bindings`. Showing a second `tenant-space` module just to add the observer role implies that the same tenant/project should be managed twice, which is likely to collide on project/namespace creation.

Example rewrite
```diff
-# Bind a team group to the vm-manager role in their tenant space
 module "tenant_space_iam" {
   source = "github.com/wso2-enterprise/open-cloud-datacenter//modules/management/tenant-space?ref=v0.1.x"
   ...
   group_role_bindings = [
     {
       group_principal_id = var.iam_team_group_id
       role_template_id   = "project-member"
     },
     {
       group_principal_id = var.iam_team_group_id
       role_template_id   = module.cluster_roles.vm_manager_role_id
     },
+    {
+      group_principal_id = var.sre_group_id
+      role_template_id   = module.cluster_roles.vm_metrics_observer_role_id
+    },
   ]
 }
-
-# Bind a read-only observer (e.g. SRE) to metrics-only access
-module "tenant_space_iam_observer" {
-  source = "github.com/wso2-enterprise/open-cloud-datacenter//modules/management/tenant-space?ref=v0.1.x"
-  ...
-  group_role_bindings = [
-    {
-      group_principal_id = var.sre_group_id
-      role_template_id   = module.cluster_roles.vm_metrics_observer_role_id
-    },
-  ]
-}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@modules/management/cluster-roles/README.md` around lines 63 - 89, The README shows two separate tenant-space module blocks (module "tenant_space_iam" and module "tenant_space_iam_observer") which is misleading because tenant-space accepts multiple group_role_bindings; change the example to use a single tenant-space module and combine both role entries into one group_role_bindings list (include entries for var.iam_team_group_id with module.cluster_roles.vm_manager_role_id and var.sre_group_id with module.cluster_roles.vm_metrics_observer_role_id) so the tenant/project is created once and all bindings applied atomically.
🧹 Nitpick comments (4)
modules/identity/rancher-oidc/variables.tf (1)
53-57: Add input validation for `groups_field` to prevent silent auth misconfiguration.

At line 53, `groups_field` accepts any string, including empty/whitespace values that can break group-to-RBAC mapping at runtime.

♻️ Suggested hardening

```diff
 variable "groups_field" {
   type        = string
   description = "JWT claim name that contains the user's group memberships (e.g. \"groups\")."
   default     = "groups"
+
+  validation {
+    condition     = length(trimspace(var.groups_field)) > 0 && length(regexall("\\s", var.groups_field)) == 0
+    error_message = "groups_field must be a non-empty JWT claim name without whitespace."
+  }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@modules/identity/rancher-oidc/variables.tf` around lines 53 - 57, The variable "groups_field" currently allows empty or whitespace-only strings which can silently break group-to-RBAC mapping; add a Terraform validation block on variable "groups_field" that enforces non-empty, non-whitespace values (e.g., trimspace(var.groups_field) != "" or length(regexall("\\S", var.groups_field)) > 0) and provide a clear error_message describing that groups_field must be a non-empty JWT claim name; update the variable "groups_field" declaration to include this validation so misconfiguration fails fast.

modules/bootstrap/variables.tf (2)
156-191: Add validation for IP pool variables when `create_lb = true`.

The IP pool variables (`ippool_subnet`, `ippool_gateway`, `ippool_start`, `ippool_end`) default to `""` but are required when `create_lb = true`. Without validation, users will encounter cryptic errors from the Harvester provider when these are empty.

🛡️ Proposed validation example for ippool_subnet

```hcl
variable "ippool_subnet" {
  type        = string
  description = "Subnet CIDR for the IP pool (e.g. '192.168.10.0/24'). Required when create_lb = true."
  default     = ""

  validation {
    condition     = !var.create_lb || var.ippool_subnet != ""
    error_message = "ippool_subnet is required when create_lb = true."
  }
}
```

Apply similar validations to `ippool_gateway`, `ippool_start`, and `ippool_end`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@modules/bootstrap/variables.tf` around lines 156 - 191, Add validation blocks to require ippool_subnet, ippool_gateway, ippool_start, and ippool_end when create_lb is true: for each variable (ippool_subnet, ippool_gateway, ippool_start, ippool_end) add a validation { condition = !var.create_lb || var.<name> != "" ; error_message = "<name> is required when create_lb = true." } so Terraform will fail with a clear message instead of letting the Harvester provider error on empty values; update the variable blocks for ippool_subnet, ippool_gateway, ippool_start and ippool_end accordingly.
62-90: Add validation for `network_name` when `network_type = "bridge"`.

When using bridge networking, `network_name` is required to specify the NetworkAttachmentDefinition, but there's no validation to enforce this. Users will get a confusing error at plan/apply time if they forget to set it.

🛡️ Proposed validation

```diff
 variable "network_name" {
   type        = string
   description = "NetworkAttachmentDefinition name for bridge networks (e.g. 'iaas-net/vm-subnet-001'). Only used when network_type = 'bridge'."
   default     = ""
+
+  validation {
+    condition     = var.network_type != "bridge" || var.network_name != ""
+    error_message = "network_name is required when network_type = 'bridge'."
+  }
 }
```

Note: Cross-variable validation in the `validation` block requires Terraform 1.9+. For earlier versions, use a `check` block or `locals`-based precondition.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@modules/bootstrap/variables.tf` around lines 62 - 90, Add a cross-variable validation that enforces network_name is non-empty when network_type == "bridge": update the variable "network_name" (or add a root-level validation/check) to fail with a clear error if var.network_type == "bridge" and var.network_name == "" (or trimspace(var.network_name) == ""), or alternatively add a precondition/check using locals to validate the same for Terraform versions <1.9; reference the variable blocks "network_type" and "network_name" when implementing the check so the error message explicitly instructs the user to set network_name for bridge networks.

modules/management/harvester-integration/main.tf (1)
49-64: Consider making `chart_version` configurable.

The Harvester UI extension chart version is hardcoded to `"1.7.1"` (line 56). This may cause drift when Harvester releases updates. Consider exposing this as a variable with a sensible default.

♻️ Suggested change

Add to `variables.tf`:

```hcl
variable "harvester_chart_version" {
  type        = string
  description = "Version of the Harvester UI extension Helm chart."
  default     = "1.7.1"
}
```

Update `main.tf`:

```diff
-  chart_version = "1.7.1"
+  chart_version = var.harvester_chart_version
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@modules/management/harvester-integration/main.tf` around lines 49 - 64, Replace the hardcoded chart_version in resource rancher2_app_v2.harvester with a configurable variable (e.g., var.harvester_chart_version) and add a corresponding variable "harvester_chart_version" to variables.tf with type string, a descriptive message, and a sensible default of "1.7.1"; update any references to chart_version in the module to use this variable so the chart can be updated without changing the code.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@modules/bootstrap/main.tf`:
- Around line 30-46: The cloud-init resource references
tls_private_key.bootstrap_key[0].public_key_openssh while create_ssh_key can be
false, causing an invalid index when create_cloudinit_secret is true; add a
validation so create_cloudinit_secret cannot be true unless create_ssh_key is
true: update the variable/validation logic for create_cloudinit_secret (or add a
locals precondition or a Terraform 1.5+ check block named e.g.
ssh_key_required_for_cloudinit) to assert !var.create_cloudinit_secret ||
var.create_ssh_key and provide a clear error_message indicating "create_ssh_key
must be true when create_cloudinit_secret is true", ensuring the module never
tries to read tls_private_key.bootstrap_key when it may be absent.
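A minimal sketch of such a guard, assuming a Terraform 1.5+ check block (the block name is illustrative):

```hcl
# Fails validation when a cloud-init secret is requested without the
# generated SSH key it needs to embed.
check "ssh_key_required_for_cloudinit" {
  assert {
    condition     = !var.create_cloudinit_secret || var.create_ssh_key
    error_message = "create_ssh_key must be true when create_cloudinit_secret is true."
  }
}
```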
- Around line 48-54: The local cloudinit_secret_name can resolve to an empty
string when var.create_cloudinit_secret is false because
existing_cloudinit_secret_name defaults to "", so add validation to the variable
existing_cloudinit_secret_name (or a check block) to require a non-empty value
whenever var.create_cloudinit_secret is false; update variables.tf to include a
validation or top-level check that enforces var.create_cloudinit_secret ||
var.existing_cloudinit_secret_name != "" and provide a clear error message
stating that existing_cloudinit_secret_name is required when
create_cloudinit_secret = false so locals.cloudinit_secret_name never becomes an
empty string.
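A sketch of the variable-level validation, assuming Terraform 1.9+ for the cross-variable reference (earlier versions would need a check block instead):

```hcl
variable "existing_cloudinit_secret_name" {
  type        = string
  description = "Name of a pre-existing cloud-init secret. Required when create_cloudinit_secret = false."
  default     = ""

  # Guarantees locals.cloudinit_secret_name never resolves to "".
  validation {
    condition     = var.create_cloudinit_secret || var.existing_cloudinit_secret_name != ""
    error_message = "existing_cloudinit_secret_name is required when create_cloudinit_secret = false."
  }
}
```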
In `@modules/bootstrap/variables.tf`:
- Around line 137-154: The example at modules/bootstrap/examples/basic/main.tf
still references the removed variable rancher_admin_password; update both
occurrences to use the new variable name bootstrap_password (matching the
module's variable bootstrap_password) and ensure any references (e.g., in module
"bootstrap" block arguments or locals) are replaced consistently so the example
compiles against the module's variables (also verify any documentation/comments
in that example that mention rancher_admin_password are updated to
bootstrap_password).
In `@modules/management/cluster-roles/main.tf`:
- Around line 41-46: The RBAC rule for the resources ["virtualmachineimages",
"keypairs"] only grants ["get","list","watch"], which prevents vm-manager from
creating/deleting harvester_ssh_key.this; update the rules block that contains
api_groups = ["harvesterhci.io"] and resources =
["virtualmachineimages","keypairs"] to include the "create" and "delete" verbs
(either by adding them to the existing verbs list or by adding a separate rule
entry scoped to "keypairs") so tenants can inject and remove SSH keypairs
without RBAC denials.
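One possible shape for the split, sketched against the rancher2_role_template rules schema (the exact verb list may need tuning to the module's needs):

```hcl
# Images remain read-only for tenants.
rules {
  api_groups = ["harvesterhci.io"]
  resources  = ["virtualmachineimages"]
  verbs      = ["get", "list", "watch"]
}

# Keypairs additionally allow create/delete so workloads/vm can
# inject and remove SSH keys without RBAC denials.
rules {
  api_groups = ["harvesterhci.io"]
  resources  = ["keypairs"]
  verbs      = ["get", "list", "watch", "create", "delete"]
}
```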
In `@modules/management/cluster-roles/README.md`:
- Around line 19-20: Update the README table of project-scoped roles to include
the new network-manager entry and describe its permissions, and add a note
documenting the corresponding output network_manager_role_id introduced in
outputs.tf so users can discover the cluster-level binding flow; specifically
add a table row for `network-manager` with scope `project` and a short
description of its responsibilities, and mention the `network_manager_role_id`
output (and where it originates) alongside existing outputs such as
`vm_manager`/`vm-metrics-observer` to keep docs in sync with main.tf and
outputs.tf.
In `@modules/management/harvester-integration/variables.tf`:
- Around line 12-23: The example still passes removed arguments rancher_hostname
and rancher_lb_ip which must be deleted from
modules/management/harvester-integration/examples/basic/main.tf; update that
example's module call to remove rancher_hostname and rancher_lb_ip and instead
include the new inputs variable "cloud_credential_name" (e.g.,
"harvester-local-creds") and "cluster_labels" (a map, e.g., {} or sample labels)
so the module invocation matches the declared variables cloud_credential_name
and cluster_labels; ensure the module input names exactly match those variable
identifiers.
In `@modules/workloads/vm/main.tf`:
- Around line 2-3: The resource count is derived from a sensitive variable
(var.ssh_public_key), which Terraform forbids; add a non-sensitive boolean
switch (e.g., var.create_ssh_key) and use that for the meta-argument instead of
var.ssh_public_key, keeping the sensitive value only as the resource attribute.
Concretely: add a new var.create_ssh_key (default false), change resource
"harvester_ssh_key" "this" { count = var.create_ssh_key ? 1 : 0 } and ensure the
resource still assigns ssh_public_key = var.ssh_public_key; apply the same
change to the second occurrence at the other block referenced in the diff.
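A sketch of the decoupled flag; the name/namespace arguments are illustrative and the key attribute name should be checked against the Harvester provider schema:

```hcl
variable "create_ssh_key" {
  type        = bool
  description = "Whether to create a harvester_ssh_key from ssh_public_key."
  default     = false
}

resource "harvester_ssh_key" "this" {
  # count now derives from a non-sensitive boolean, not the sensitive key value.
  count      = var.create_ssh_key ? 1 : 0
  name       = var.vm_name
  namespace  = var.namespace
  public_key = var.ssh_public_key
}
```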
- Around line 23-27: Add a configurable boolean variable named wait_for_lease
(default true) in variables.tf and replace the hard-coded wait_for_lease = true
inside the network_interface block in main.tf with wait_for_lease =
var.wait_for_lease so consumers can disable lease waiting when using static
cloud-init network_data without a qemu-guest-agent; update the variable
description to note it should be false for static IPs configured in cloud-init.
In `@modules/workloads/vm/README.md`:
- Around line 6-10: The README incorrectly states that a standalone
harvester_cloudinit_secret resource is created; update the README to reflect
that main.tf only injects a nested cloudinit block into the
harvester_virtualmachine resource (and that a separate
harvester_cloudinit_secret is NOT managed), clarifying when ssh_public_key and
user_data cause nested blocks versus separate resources and adjusting
lifecycle/import guidance and examples accordingly; reference the
harvester_virtualmachine resource and the cloudinit nested block in
modules/workloads/vm/main.tf and remove or reword any mentions of a standalone
harvester_cloudinit_secret to match the implementation.
- Line 60: The README examples use file("~/.ssh/id_rsa.pub") which fails because
Terraform's file() does not expand '~'; update the SSH key examples (the
ssh_public_key assignment instances) to wrap the path with pathexpand so they
read file(pathexpand("~/.ssh/id_rsa.pub")) — replace each occurrence of
file("~/.ssh/id_rsa.pub") accordingly and ensure pathexpand is used wherever the
README shows home-directory SSH path examples.
In `@modules/workloads/vm/versions.tf`:
- Around line 4-6: The harvester provider constraint in the shown versions.tf
(`harvester = { source = "harvester/harvester" version = "~> 1.7" }`) is
incompatible with other examples that pin `~> 0.6.0`; update the provider
version so all modules use a single compatible range — either change this
`version` to "~> 0.6.0" to match the existing examples or update the example
modules to `~> 1.7`; make the change to the `harvester = { ... }` block (or the
example files) so Terraform can merge constraints without conflict.
---
Outside diff comments:
In `@modules/management/cluster-roles/README.md`:
- Around line 63-89: The README shows two separate tenant-space module blocks
(module "tenant_space_iam" and module "tenant_space_iam_observer") which is
misleading because tenant-space accepts multiple group_role_bindings; change the
example to use a single tenant-space module and combine both role entries into
one group_role_bindings list (include entries for var.iam_team_group_id with
module.cluster_roles.vm_manager_role_id and var.sre_group_id with
module.cluster_roles.vm_metrics_observer_role_id) so the tenant/project is
created once and all bindings applied atomically.
---
Nitpick comments:
In `@modules/bootstrap/variables.tf`:
- Around line 156-191: Add validation blocks to require ippool_subnet,
ippool_gateway, ippool_start, and ippool_end when create_lb is true: for each
variable (ippool_subnet, ippool_gateway, ippool_start, ippool_end) add a
validation { condition = !var.create_lb || var.<name> != "" ; error_message =
"<name> is required when create_lb = true." } so Terraform will fail with a
clear message instead of letting the Harvester provider error on empty values;
update the variable blocks for ippool_subnet, ippool_gateway, ippool_start and
ippool_end accordingly.
- Around line 62-90: Add a cross-variable validation that enforces network_name
is non-empty when network_type == "bridge": update the variable "network_name"
(or add a root-level validation/check) to fail with a clear error if
var.network_type == "bridge" and var.network_name == "" (or
trimspace(var.network_name) == ""), or alternatively add a precondition/check
using locals to validate the same for Terraform versions <1.9; reference the
variable blocks "network_type" and "network_name" when implementing the check so
the error message explicitly instructs the user to set network_name for bridge
networks.
In `@modules/identity/rancher-oidc/variables.tf`:
- Around line 53-57: The variable "groups_field" currently allows empty or
whitespace-only strings which can silently break group-to-RBAC mapping; add a
Terraform validation block on variable "groups_field" that enforces non-empty,
non-whitespace values (e.g., trim(var.groups_field) != "" or
length(regexall("\\S", var.groups_field)) > 0) and provide a clear error_message
describing that groups_field must be a non-empty JWT claim name; update the
variable "groups_field" declaration to include this validation so
misconfiguration fails fast.
In `@modules/management/harvester-integration/main.tf`:
- Around line 49-64: Replace the hardcoded chart_version in resource
rancher2_app_v2.harvester with a configurable variable (e.g.,
var.harvester_chart_version) and add a corresponding variable
"harvester_chart_version" to variables.tf with type string, a descriptive
message, and a sensible default of "1.7.1"; update any references to
chart_version in the module to use this variable so the chart can be updated
without changing the code.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 3529b180-5787-40c9-a5e0-fa31f266799f
📒 Files selected for processing (17)
- modules/bootstrap/main.tf
- modules/bootstrap/outputs.tf
- modules/bootstrap/variables.tf
- modules/identity/providers/asgardeo/main.tf
- modules/identity/providers/asgardeo/variables.tf
- modules/identity/rancher-oidc/main.tf
- modules/identity/rancher-oidc/variables.tf
- modules/management/cluster-roles/README.md
- modules/management/cluster-roles/main.tf
- modules/management/cluster-roles/outputs.tf
- modules/management/harvester-integration/main.tf
- modules/management/harvester-integration/variables.tf
- modules/workloads/vm/README.md
- modules/workloads/vm/main.tf
- modules/workloads/vm/outputs.tf
- modules/workloads/vm/variables.tf
- modules/workloads/vm/versions.tf
```hcl
network_interface {
  name           = "nic-1"
  wait_for_lease = true
  network_name   = var.network_name
}
```
🧩 Analysis chain
🌐 Web query:
In the Harvester Terraform provider, does `wait_for_lease = true` require a DHCP/lease allocation, or will it succeed when the guest gets a static IP via cloud-init `network_data`?
💡 Result:
In the Harvester Terraform provider, wait_for_lease = true in the network_interface block of harvester_virtualmachine requires the guest VM to have qemu-guest-agent installed and running for non-management networks. It waits for the network interface to obtain and report an IP address via the guest agent. For static IPs configured via cloud-init network_data, it will succeed if the qemu-guest-agent is installed, running, and reports the static IP address to the host. The guest agent queries the guest OS's network interfaces directly, detecting static IPs regardless of DHCP. Without the agent, it times out even if the VM has a static IP. Official documentation states: "wait for this network interface to obtain an IP address. If a non-management network is used, this feature requires qemu-guest-agent installed and started in the VM, otherwise, VM creation will stuck until timeout."
Citations:
- 1: https://registry.terraform.io/providers/harvester/harvester/0.6.6/docs/resources/virtualmachine
- 2: [BUG] Harvester Terraform Provider 0.6.5 errors when provisioning VM harvester/harvester#6647
🏁 Script executed:
```shell
# Check if the file exists and examine the actual code at the specified lines
fd -p "modules/workloads/vm/main.tf" -x cat -n {} | head -60
```

Repository: wso2/open-cloud-datacenter
Length of output: 1463
🏁 Script executed:
```shell
# Check if there are variable definitions and understand the full context
# Look for variables file in the same directory
fd -p "modules/workloads/vm/" -type f -name "*.tf" | xargs wc -l | sort -n
```

Repository: wso2/open-cloud-datacenter
Length of output: 259
🏁 Script executed:
```shell
# Find all .tf files in the modules/workloads/vm directory
fd "\.tf$" modules/workloads/vm/
```

Repository: wso2/open-cloud-datacenter
Length of output: 197
🏁 Script executed:
```shell
# Check the variables.tf file to see existing variable definitions
cat -n modules/workloads/vm/variables.tf
```

Repository: wso2/open-cloud-datacenter
Length of output: 2304
🏁 Script executed:
```shell
# Check the full main.tf to see if there's another network_interface block
cat -n modules/workloads/vm/main.tf
```

Repository: wso2/open-cloud-datacenter
Length of output: 1463
🏁 Script executed:
```shell
# Search for any other occurrences of wait_for_lease in the module
rg "wait_for_lease" modules/workloads/vm/
```

Repository: wso2/open-cloud-datacenter
Length of output: 229
🏁 Script executed:
```shell
# Check the README for context
cat -n modules/workloads/vm/README.md | head -100
```

Repository: wso2/open-cloud-datacenter
Length of output: 3620
🏁 Script executed:
```shell
# Continue reading the README to find the reference to wait_for_lease
cat -n modules/workloads/vm/README.md | tail -n +100
```

Repository: wso2/open-cloud-datacenter
Length of output: 2487
Make lease waiting configurable for static-IP networks.
The module accepts network_data for static IP configuration via cloud-init, but wait_for_lease = true requires qemu-guest-agent to be installed and reporting the network interface. Without the agent, Terraform will timeout waiting for an IP that the guest has already obtained statically. Making this configurable allows disabling the wait when static addressing is used without guest agent reporting.
Suggested fix
```diff
 network_interface {
   name           = "nic-1"
-  wait_for_lease = true
+  wait_for_lease = var.wait_for_lease
   network_name   = var.network_name
 }
```

Add to `variables.tf`:
```hcl
variable "wait_for_lease" {
  type        = bool
  description = "Whether Terraform should wait for an IP lease on the primary NIC. Set to false when using static IPs via cloud-init network_data without guest agent."
  default     = true
}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@modules/workloads/vm/main.tf` around lines 23 - 27, Add a configurable
boolean variable named wait_for_lease (default true) in variables.tf and replace
the hard-coded wait_for_lease = true inside the network_interface block in
main.tf with wait_for_lease = var.wait_for_lease so consumers can disable lease
waiting when using static cloud-init network_data without a qemu-guest-agent;
update the variable description to note it should be false for static IPs
configured in cloud-init.
```hcl
harvester = {
  source  = "harvester/harvester"
  version = "~> 1.7"
```
🧩 Analysis chain
🌐 Web query:
How does Terraform resolve conflicting `required_providers` version constraints across modules? Will `~> 0.6.0` and `~> 1.7` for the same provider make `terraform init` fail?
💡 Result:
Terraform resolves conflicting required_providers version constraints by collecting all constraints from the root module and every child module for each provider. It then computes the intersection of these constraints to find a single version that satisfies all of them simultaneously. If no such version exists among the available provider releases, terraform init fails with an error like "no available releases match the given constraints", listing the combined constraints. For ~> 0.6.0 (equivalent to >= 0.6.0, < 0.7.0) and ~> 1.7 (equivalent to >= 1.7.0, < 1.8.0), there is no overlapping version since 0.6.x and 1.7.x are in different major versions with no common satisfaction. Thus, terraform init will fail. Use terraform providers (before init, after modules are fetched) to diagnose which modules impose which constraints.
Citations:
- 1: https://developer.hashicorp.com/terraform/language/providers/requirements
- 2: https://www.terraform.io/language/modules/develop/providers
- 3: https://developer.hashicorp.com/terraform/language/expressions/version-constraints
- 4: https://support.hashicorp.com/hc/en-us/articles/42581341921043-Terraform-fails-with-no-available-releases-match-the-given-constraints
- 5: https://stackoverflow.com/questions/76272116/terraform-resolve-no-available-releases-match-the-given-constraints-error
🏁 Script executed:
```shell
# Search for the version constraints in the mentioned files
echo "=== Checking modules/workloads/vm/versions.tf ==="
if [ -f "modules/workloads/vm/versions.tf" ]; then
  cat -n "modules/workloads/vm/versions.tf" | grep -A 3 -B 1 harvester
else
  echo "File not found"
fi

echo -e "\n=== Checking modules/management/networking/examples/basic/main.tf ==="
if [ -f "modules/management/networking/examples/basic/main.tf" ]; then
  cat -n "modules/management/networking/examples/basic/main.tf" | grep -A 3 -B 1 harvester
else
  echo "File not found"
fi

echo -e "\n=== Checking modules/management/storage/examples/basic/main.tf ==="
if [ -f "modules/management/storage/examples/basic/main.tf" ]; then
  cat -n "modules/management/storage/examples/basic/main.tf" | grep -A 3 -B 1 harvester
else
  echo "File not found"
fi
```

Repository: wso2/open-cloud-datacenter
Length of output: 1950
Align the Harvester provider constraint with the remaining examples.
This module pins harvester/harvester to `~> 1.7`, but the existing examples in modules/management/networking/examples/basic/main.tf and modules/management/storage/examples/basic/main.tf still pin `~> 0.6.0`. These ranges (`>= 0.6.0, < 0.7.0` and `>= 1.7.0, < 1.8.0`) are mutually exclusive. Terraform merges version constraints across the module tree, so any root that reuses both will fail `terraform init` with a "no available releases match the given constraints" error.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@modules/workloads/vm/versions.tf` around lines 4 - 6, The harvester provider
constraint in the shown versions.tf (`harvester = { source =
"harvester/harvester" version = "~> 1.7" }`) is incompatible with other examples
that pin `~> 0.6.0`; update the provider version so all modules use a single
compatible range — either change this `version` to "~> 0.6.0" to match the
existing examples or update the example modules to `~> 1.7`; make the change to
the `harvester = { ... }` block (or the example files) so Terraform can merge
constraints without conflict.
bootstrap/main.tf:
- Add check block: create_ssh_key must be true when create_cloudinit_secret
is true (cloud-init template embeds the generated SSH public key)
- Add check block: existing_cloudinit_secret_name must be non-empty when
create_cloudinit_secret = false
bootstrap/examples/basic/main.tf:
- Replace removed rancher_admin_password with bootstrap_password
management/cluster-roles/main.tf:
- Split harvesterhci.io rule: virtualmachineimages stays read-only;
keypairs gets create+delete so vm-manager can inject/remove SSH keys
via workloads/vm without RBAC denials
management/cluster-roles/README.md:
- Add network-manager to roles table and network_manager_role_id to
outputs table to document the cluster-scoped role
management/harvester-integration/examples/basic/main.tf:
- Remove kubernetes provider (no longer required by module)
- Replace removed rancher_hostname/rancher_lb_ip with cloud_credential_name
and cluster_labels; update module source to wso2 org
management/networking/examples/basic/main.tf:
management/storage/examples/basic/main.tf:
- Bump harvester provider from ~> 0.6.0 to ~> 1.7 to match all other
modules and prevent init failures from conflicting constraints
workloads/vm/main.tf:
- Replace sensitive-derived count with non-sensitive create_ssh_key bool
- Make wait_for_lease configurable via variable (default true)
workloads/vm/variables.tf:
- Add create_ssh_key and wait_for_lease variables
workloads/vm/README.md:
- Fix cloud-init resource description (no standalone secret is created)
- Use file(pathexpand("~/.ssh/...")) in examples (file() does not expand ~)
- Add create_ssh_key and wait_for_lease to inputs table
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
modules/bootstrap/examples/basic/main.tf (1)
37-50: ⚠️ Potential issue | 🟠 Major

Point this example at the local module or use a matching release tag.

This example pulls `modules/bootstrap` from a remote release (v0.1.0) at line 38, which means it doesn't validate the module code in the current branch. The example should either reference the local module using a relative path or be updated when publishing a new release.

🛠️ Proposed fix

```diff
 module "bootstrap" {
-  source = "github.com/wso2-enterprise/open-cloud-datacenter//modules/bootstrap?ref=v0.1.0"
+  source = "../.."
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@modules/bootstrap/examples/basic/main.tf` around lines 37 - 50, The module example currently references the remote release in module "bootstrap" which prevents validating current-branch changes; update the module block's source to point to the local module (e.g., a relative path to the modules/bootstrap directory) or change the ref to a matching release tag that corresponds to this branch; modify the module "bootstrap" source value accordingly so examples load the local code during testing and CI, and keep vm_name, node_count, ubuntu_image_id, vm_password and bootstrap_password unchanged.
🧹 Nitpick comments (1)
modules/management/cluster-roles/README.md (1)
65-92: Clarify whether the example binds groups to the same project or creates separate tenant spaces.
The usage example shows two separate `tenant-space` module calls, which would typically create two distinct projects/namespaces. However, the more common pattern for binding multiple groups to the same project is a single module call with multiple `group_role_bindings` entries. If the intent is to show binding both the team and observers to the same tenant space:
📝 Alternative pattern for same-project bindings

```diff
-# Bind a team group to the vm-manager role in their tenant space
-module "tenant_space_iam" {
+# Bind both team and observer groups to the same tenant space
+module "tenant_space" {
   source = "github.com/wso2-enterprise/open-cloud-datacenter//modules/management/tenant-space?ref=v0.1.x"
   ...
   group_role_bindings = [
     { group_principal_id = var.iam_team_group_id, role_template_id = "project-member" },
     { group_principal_id = var.iam_team_group_id, role_template_id = module.cluster_roles.vm_manager_role_id },
-  ]
-}
-
-# Bind a read-only observer (e.g. SRE) to metrics-only access
-module "tenant_space_iam_observer" {
-  source = "github.com/wso2-enterprise/open-cloud-datacenter//modules/management/tenant-space?ref=v0.1.x"
-  ...
-  group_role_bindings = [
     { group_principal_id = var.sre_group_id, role_template_id = module.cluster_roles.vm_metrics_observer_role_id },
   ]
 }
```

If the intent is to show creating separate tenant spaces for different teams, add a comment clarifying that each module call provisions a distinct project.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@modules/management/cluster-roles/README.md` around lines 65 - 92, The README example is ambiguous about whether the two tenant-space module invocations create separate projects or bind groups to the same project; update the text to clarify intent by either (a) consolidating into a single module call (module "tenant_space_iam") with multiple group_role_bindings entries to show binding team and observers to the same tenant space, or (b) explicitly stating that each module block (module "tenant_space_iam" and module "tenant_space_iam_observer") provisions a distinct tenant space/project when that is the intended example; mention the specific symbols group_role_bindings, module "tenant_space_iam", and module "tenant_space_iam_observer" so readers know which pattern to follow.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@modules/bootstrap/main.tf`:
- Around line 48-54: The current local cloudinit_secret_name collapses the
generated harvester_cloudinit_secret.cloudinit list to element 0 causing every
harvester_virtualmachine.rancher_server[*] to use the same cloud-init; change to
expose the full list (e.g., cloudinit_secret_names = var.create_cloudinit_secret
? harvester_cloudinit_secret.cloudinit[*].name :
var.existing_cloudinit_secret_name) and update
harvester_virtualmachine.rancher_server to reference the per-VM secret by index
(e.g., use count.index or each.key to pick cloudinit_secret_names[count.index]
when creating each VM) so each VM uses its own rendered cloud-init secret.
- Around line 56-73: The two check blocks ssh_key_required_for_cloudinit and
existing_cloudinit_secret_name_required use Terraform 1.5+ syntax; update the
module's required_version constraint to ">= 1.5" (wherever required_version is
declared in the module) so Terraform 1.5+ is enforced and the file can be
parsed; ensure any documentation or module consumers are updated accordingly.
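As a hedged sketch of the per-VM secret selection described in the first comment (the resource and variable names are taken from that comment; the `cloudinit` block shape and the other arguments are assumptions, not code from this PR):

```hcl
locals {
  # One rendered secret name per VM when the module creates cloud-init itself;
  # otherwise repeat the caller-supplied existing secret name for every node.
  cloudinit_secret_names = var.create_cloudinit_secret ? harvester_cloudinit_secret.cloudinit[*].name : [for i in range(var.node_count) : var.existing_cloudinit_secret_name]
}

resource "harvester_virtualmachine" "rancher_server" {
  count = var.node_count
  # ... other arguments elided ...

  cloudinit {
    # Each node picks its own secret instead of sharing element 0.
    user_data_secret_name = local.cloudinit_secret_names[count.index]
  }
}
```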
---
Outside diff comments:
In `@modules/bootstrap/examples/basic/main.tf`:
- Around line 37-50: The module example currently references the remote release
in module "bootstrap" which prevents validating current-branch changes; update
the module block's source to point to the local module (e.g., a relative path to
the modules/bootstrap directory) or change the ref to a matching release tag
that corresponds to this branch; modify the module "bootstrap" source value
accordingly so examples load the local code during testing and CI, and keep
vm_name, node_count, ubuntu_image_id, vm_password and bootstrap_password
unchanged.
---
Nitpick comments:
In `@modules/management/cluster-roles/README.md`:
- Around line 65-92: The README example is ambiguous about whether the two
tenant-space module invocations create separate projects or bind groups to the
same project; update the text to clarify intent by either (a) consolidating into
a single module call (module "tenant_space_iam") with multiple
group_role_bindings entries to show binding team and observers to the same
tenant space, or (b) explicitly stating that each module block (module
"tenant_space_iam" and module "tenant_space_iam_observer") provisions a distinct
tenant space/project when that is the intended example; mention the specific
symbols group_role_bindings, module "tenant_space_iam", and module
"tenant_space_iam_observer" so readers know which pattern to follow.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 5ca7e73e-5ca1-4e41-869e-5516db551255
📒 Files selected for processing (10)
- modules/bootstrap/examples/basic/main.tf
- modules/bootstrap/main.tf
- modules/management/cluster-roles/README.md
- modules/management/cluster-roles/main.tf
- modules/management/harvester-integration/examples/basic/main.tf
- modules/management/networking/examples/basic/main.tf
- modules/management/storage/examples/basic/main.tf
- modules/workloads/vm/README.md
- modules/workloads/vm/main.tf
- modules/workloads/vm/variables.tf
✅ Files skipped from review due to trivial changes (5)
- modules/management/networking/examples/basic/main.tf
- modules/management/storage/examples/basic/main.tf
- modules/workloads/vm/README.md
- modules/management/cluster-roles/main.tf
- modules/workloads/vm/variables.tf
🚧 Files skipped from review as they are similar to previous changes (1)
- modules/workloads/vm/main.tf
- bootstrap/main.tf: use count.index for per-VM cloud-init secret so
each node in a multi-node cluster boots its own rendered cloud-init
instead of sharing node 0's secret; bump required_version to >= 1.5
to support the check{} blocks added in the previous commit
- bootstrap/examples/basic/main.tf: use relative source path (../..),
avoiding stale remote release tag in branch examples
- bootstrap/variables.tf: add validation for network_name when
network_type=bridge, and for ippool_* when create_lb=true
- identity/rancher-oidc/variables.tf: validate groups_field is
non-empty and whitespace-free to prevent silent auth misconfiguration
- management/cluster-roles/README.md: consolidate two-module usage
example into one tenant-space call with multiple group_role_bindings
- management/harvester-integration: make Harvester UI extension chart
version configurable via harvester_chart_version variable
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
modules/management/harvester-integration/main.tf (1)
97-101: ⚠️ Potential issue | 🟠 Major
Do not store raw kubeconfig content in `null_resource.triggers`.
Lines 98-101 persist `var.harvester_kubeconfig` in state via `triggers`, which exposes sensitive credentials in plaintext state data. Use a non-secret trigger (e.g., checksum) and pass kubeconfig by path for destroy-time operations, or move this step outside Terraform provisioners.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@modules/management/harvester-integration/main.tf` around lines 97 - 101, The null_resource currently stores sensitive kubeconfig content in its triggers via var.harvester_kubeconfig (see the triggers block on the resource that also uses rancher2_cluster.harvester_hci.cluster_registration_token[0].command); replace storing raw kubeconfig with a non-secret trigger (e.g., compute and store a checksum/sha256 of the kubeconfig file or a timestamp) and modify any provisioner/destroy-time logic to read the kubeconfig from a file path or external data source at runtime instead of from state, or move the destroy-time registration step entirely outside Terraform so credentials are not persisted in null_resource.triggers.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@modules/bootstrap/variables.tf`:
- Around line 161-170: The no-LB branch can emit an empty endpoint because
variable "static_rancher_ip" defaults to "" and outputs.rancher_lb_ip falls back
to it when "create_lb" is false; add a variable validation on
"static_rancher_ip" that requires a non-empty string when "create_lb" is false
(e.g. a validation rule referencing var.create_lb and var.static_rancher_ip),
and update the error message to clearly state that create_lb = false requires a
valid static_rancher_ip (and, if applicable, require that the chosen network
mode is bridge when create_lb is false).
- Around line 129-157: Add cross-variable validations so the module rejects
incomplete configs: when create_cloudinit_secret is true, require vm_password
and bootstrap_password to be non-empty; when create_cloudinit_secret is false,
require existing_cloudinit_secret_name to be non-empty. Implement this by adding
validation blocks to the variables existing_cloudinit_secret_name, vm_password,
and bootstrap_password that reference var.create_cloudinit_secret and assert
length(var.existing_cloudinit_secret_name) > 0 when var.create_cloudinit_secret
== false, and length(var.vm_password) > 0 and length(var.bootstrap_password) > 0
when var.create_cloudinit_secret == true, supplying clear error messages
referencing the variable names.
- Around line 84-87: The module uses cross-variable validations inside variable
blocks (e.g., the validation referencing var.network_type, var.network_name,
var.create_lb) which requires Terraform >=1.9, but the module currently declares
required_version = ">= 1.5"; either raise the Terraform floor by updating
required_version to ">= 1.9" in the module's main.tf (symbol: required_version)
or remove the cross-variable checks from the variable blocks in variables.tf and
implement them as top-level validation/check blocks (e.g., a terraform
validate-style check or a root-level locals + null_resource/validation pattern)
so that variable blocks (network_name, create_lb) no longer reference other vars
directly.
In `@modules/management/harvester-integration/main.tf`:
- Around line 86-89: The labels merge currently allows var.cluster_labels to
override the required Harvester provider label; change the merge call so the
fixed label wins by placing the fixed map last (e.g., use
merge(var.cluster_labels, { "provider.cattle.io" = "harvester" })) so the
"provider.cattle.io" = "harvester" entry in the labels attribute cannot be
overridden; update the labels assignment where merge(...) is used to ensure the
required label is always applied.
In `@modules/management/harvester-integration/variables.tf`:
- Around line 13-17: The variable block for cloud_credential_name currently
allows empty/whitespace values; update the variable "cloud_credential_name" to
include a validation block that rejects empty or all-whitespace names by
checking length(trim(var.cloud_credential_name)) > 0 and providing a clear
error_message; this enforces failure at plan time and prevents downstream
provider/API errors when using cloud_credential_name.
---
Outside diff comments:
In `@modules/management/harvester-integration/main.tf`:
- Around line 97-101: The null_resource currently stores sensitive kubeconfig
content in its triggers via var.harvester_kubeconfig (see the triggers block on
the resource that also uses
rancher2_cluster.harvester_hci.cluster_registration_token[0].command); replace
storing raw kubeconfig with a non-secret trigger (e.g., compute and store a
checksum/sha256 of the kubeconfig file or a timestamp) and modify any
provisioner/destroy-time logic to read the kubeconfig from a file path or
external data source at runtime instead of from state, or move the destroy-time
registration step entirely outside Terraform so credentials are not persisted in
null_resource.triggers.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: c45fa320-07d0-4e94-bc10-4110256cf949
📒 Files selected for processing (7)
- modules/bootstrap/examples/basic/main.tf
- modules/bootstrap/main.tf
- modules/bootstrap/variables.tf
- modules/identity/rancher-oidc/variables.tf
- modules/management/cluster-roles/README.md
- modules/management/harvester-integration/main.tf
- modules/management/harvester-integration/variables.tf
🚧 Files skipped from review as they are similar to previous changes (3)
- modules/identity/rancher-oidc/variables.tf
- modules/bootstrap/examples/basic/main.tf
- modules/bootstrap/main.tf
```hcl
validation {
  condition     = var.network_type != "bridge" || var.network_name != ""
  error_message = "network_name is required when network_type = 'bridge'."
}
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
set -euo pipefail
printf 'Declared Terraform version(s) under modules/bootstrap:\n'
rg -n --type=tf 'required_version' modules/bootstrap
printf '\nCross-variable validation rules in modules/bootstrap/variables.tf:\n'
rg -n --type=tf 'condition\s*=.*var\.(network_type|create_lb)' modules/bootstrap/variables.tf
```

Repository: wso2/open-cloud-datacenter
Length of output: 739
🌐 Web query:
In Terraform, when was cross-variable input validation added? Specifically, do Terraform 1.5, 1.6, 1.7, or 1.8 allow a variable block's validation condition to reference another variable like var.create_lb or var.network_type?
💡 Result:
Cross-variable input validation in Terraform variable blocks was added in Terraform 1.9. Versions 1.5, 1.6, 1.7, and 1.8 do not support referencing other variables like var.create_lb or var.network_type in a variable's validation condition. Prior to 1.9, validation conditions could only reference the variable being validated itself.
Citations:
- 1: Allow variable validation conditions to refer to other variables hashicorp/terraform#25609
- 2: Cross-reference Input Variable Validation error hashicorp/terraform#35455
- 3: https://mattias.engineer/blog/2024/terraform-variable-cross-validation/
- 4: https://www.hashicorp.com/en/blog/terraform-1-9-enhances-input-variable-validations
- 5: https://nedinthecloud.com/2024/07/08/variable-validation-improvements-in-terraform-1.9/
- 6: https://spacelift.io/blog/terraform-variable-validation
- 7: https://mattias.engineer/blog/2024/terraform-variable-cross-validation
- 8: terraform: Stabilize the variable_validation_crossref experiment hashicorp/terraform#34955
Cross-variable validations require Terraform 1.9+ but module declares support for 1.5.
The validation conditions at lines 85 and 178–208 reference other input variables (var.network_type, var.create_lb) from inside variable blocks. This feature was added in Terraform 1.9. However, modules/bootstrap/main.tf declares required_version = ">= 1.5", which does not support cross-variable input validation. Raise the Terraform floor to >= 1.9 or move these guards into top-level check blocks.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@modules/bootstrap/variables.tf` around lines 84 - 87, The module uses
cross-variable validations inside variable blocks (e.g., the validation
referencing var.network_type, var.network_name, var.create_lb) which requires
Terraform >=1.9, but the module currently declares required_version = ">= 1.5";
either raise the Terraform floor by updating required_version to ">= 1.9" in the
module's main.tf (symbol: required_version) or remove the cross-variable checks
from the variable blocks in variables.tf and implement them as top-level
validation/check blocks (e.g., a terraform validate-style check or a root-level
locals + null_resource/validation pattern) so that variable blocks
(network_name, create_lb) no longer reference other vars directly.
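For reference, a top-level check block (available since Terraform 1.5) can host the same guard without raising the version floor. Note, however, that check assertions surface as warnings rather than hard failures, so a blocking error still requires either a lifecycle precondition or the 1.9+ cross-variable validation. A sketch, reusing the condition from the flagged validation:

```hcl
check "bridge_network_name_required" {
  assert {
    # Same condition as the variable-level validation, but legal on Terraform 1.5+.
    condition     = var.network_type != "bridge" || var.network_name != ""
    error_message = "network_name is required when network_type = 'bridge'."
  }
}
```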
```diff
 variable "create_cloudinit_secret" {
   type        = bool
   description = "If true, render and create a cloud-init secret from the built-in template. Set false to reference an existing secret."
   default     = true
 }

 variable "existing_cloudinit_secret_name" {
   type        = string
-  description = "Harvester ID of the Ubuntu Cloud Image"
+  description = "Name of an existing cloud-init secret to attach when create_cloudinit_secret = false."
   default     = ""
 }

 variable "vm_password" {
   type        = string
-  description = "Default password for the ubuntu user"
+  description = "Default password for the ubuntu user (used in cloud-init template). Required when create_cloudinit_secret = true."
   sensitive   = true
   default     = ""
 }

 variable "rancher_hostname" {
   type        = string
-  description = "FQDN for the Rancher UI"
+  description = "FQDN for the Rancher UI (e.g. 'rancher-lk-prod.wso2.com')"
 }

 variable "bootstrap_password" {
   type        = string
-  description = "Temporary password set by the Rancher Helm chart during cloud-init install"
+  description = "Temporary Rancher admin password set by the Helm chart during cloud-init install. Required when create_cloudinit_secret = true."
   sensitive   = true
   default     = ""
```
The cloud-init switch still allows blank “required” inputs.
The descriptions on Lines 131, 143, and 155 say these values are required in opposite branches, but all three defaults are empty strings. That lets the default greenfield path (create_cloudinit_secret = true) and the brownfield reuse path (false) accept configurations the module itself considers incomplete. Please add a guard so one side or the other must be populated.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@modules/bootstrap/variables.tf` around lines 129 - 157, Add cross-variable
validations so the module rejects incomplete configs: when
create_cloudinit_secret is true, require vm_password and bootstrap_password to
be non-empty; when create_cloudinit_secret is false, require
existing_cloudinit_secret_name to be non-empty. Implement this by adding
validation blocks to the variables existing_cloudinit_secret_name, vm_password,
and bootstrap_password that reference var.create_cloudinit_secret and assert
length(var.existing_cloudinit_secret_name) > 0 when var.create_cloudinit_secret
== false, and length(var.vm_password) > 0 and length(var.bootstrap_password) > 0
when var.create_cloudinit_secret == true, supplying clear error messages
referencing the variable names.
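A sketch of one such guard, assuming the module's Terraform floor is raised to 1.9 (cross-variable validation); `existing_cloudinit_secret_name` and `bootstrap_password` would follow the same pattern with the condition inverted as appropriate:

```hcl
variable "vm_password" {
  type        = string
  description = "Default password for the ubuntu user. Required when create_cloudinit_secret = true."
  sensitive   = true
  default     = ""

  validation {
    # Cross-variable reference: requires Terraform >= 1.9.
    condition     = !var.create_cloudinit_secret || length(var.vm_password) > 0
    error_message = "vm_password must be non-empty when create_cloudinit_secret = true."
  }
}
```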
```diff
 variable "create_lb" {
   type        = bool
   description = "If true, create a Harvester LoadBalancer and IP pool to expose Rancher. Set false when the VM is directly reachable via a bridge IP."
   default     = true
 }

 variable "static_rancher_ip" {
   type        = string
-  description = "Permanent admin password to configure on the Rancher instance"
-  sensitive   = true
+  description = "IP of the Rancher VM on the internal bridge network when create_lb = false. Passed through as rancher_lb_ip output for CoreDNS."
   default     = ""
```
The no-LB branch can still return an empty Rancher endpoint.
In modules/bootstrap/outputs.tf Lines 1-13, rancher_lb_ip falls back to var.static_rancher_ip when create_lb = false. Because Lines 167-170 still default that value to "" with no guard, the direct-bridge path can emit an empty endpoint. Please require a non-empty static_rancher_ip in the no-LB branch; if create_lb = false is only supported with bridge networking, guard that combination here too.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@modules/bootstrap/variables.tf` around lines 161 - 170, The no-LB branch can
emit an empty endpoint because variable "static_rancher_ip" defaults to "" and
outputs.rancher_lb_ip falls back to it when "create_lb" is false; add a variable
validation on "static_rancher_ip" that requires a non-empty string when
"create_lb" is false (e.g. a validation rule referencing var.create_lb and
var.static_rancher_ip), and update the error message to clearly state that
create_lb = false requires a valid static_rancher_ip (and, if applicable,
require that the chosen network mode is bridge when create_lb is false).
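A possible shape for that guard (a cross-variable validation, so Terraform >= 1.9; the bridge-mode check is included as the optional extra the comment mentions):

```hcl
variable "static_rancher_ip" {
  type        = string
  description = "IP of the Rancher VM on the internal bridge network when create_lb = false."
  default     = ""

  validation {
    # Requires Terraform >= 1.9 (condition references other variables).
    condition     = var.create_lb || (trimspace(var.static_rancher_ip) != "" && var.network_type == "bridge")
    error_message = "create_lb = false requires a non-empty static_rancher_ip and network_type = 'bridge'."
  }
}
```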
```hcl
labels = merge(
  { "provider.cattle.io" = "harvester" },
  var.cluster_labels,
)
```
Prevent cluster_labels from overriding the required Harvester provider label.
Line 86 currently lets callers override "provider.cattle.io" = "harvester" by passing the same key in var.cluster_labels. That can break cluster classification/behavior.
Suggested patch

```diff
 labels = merge(
-  { "provider.cattle.io" = "harvester" },
-  var.cluster_labels,
+  var.cluster_labels,
+  { "provider.cattle.io" = "harvester" },
 )
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```hcl
labels = merge(
  var.cluster_labels,
  { "provider.cattle.io" = "harvester" },
)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@modules/management/harvester-integration/main.tf` around lines 86 - 89, The
labels merge currently allows var.cluster_labels to override the required
Harvester provider label; change the merge call so the fixed label wins by
placing the fixed map last (e.g., use merge(var.cluster_labels, {
"provider.cattle.io" = "harvester" })) so the "provider.cattle.io" = "harvester"
entry in the labels attribute cannot be overridden; update the labels assignment
where merge(...) is used to ensure the required label is always applied.
```diff
 variable "cloud_credential_name" {
   type        = string
-  description = "The FQDN of the Rancher server"
+  description = "Display name for the Harvester cloud credential in Rancher"
+  default     = "harvester-local-creds"
 }
```
Add validation to reject empty cloud_credential_name.
Line 13 currently accepts empty/whitespace values, which can fail later at provider/API time. Fail fast at plan time.
Suggested patch

```diff
 variable "cloud_credential_name" {
   type        = string
   description = "Display name for the Harvester cloud credential in Rancher"
   default     = "harvester-local-creds"
+  validation {
+    condition     = trimspace(var.cloud_credential_name) != ""
+    error_message = "cloud_credential_name must not be empty."
+  }
 }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```hcl
variable "cloud_credential_name" {
  type        = string
  description = "Display name for the Harvester cloud credential in Rancher"
  default     = "harvester-local-creds"

  validation {
    condition     = trimspace(var.cloud_credential_name) != ""
    error_message = "cloud_credential_name must not be empty."
  }
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@modules/management/harvester-integration/variables.tf` around lines 13 - 17,
The variable block for cloud_credential_name currently allows empty/whitespace
values; update the variable "cloud_credential_name" to include a validation
block that rejects empty or all-whitespace names by
checking trimspace(var.cloud_credential_name) != "" (Terraform's trim() requires a second cutset argument; trimspace is the right function here) and providing a clear
this enforces failure at plan time and prevents downstream provider/API errors
when using cloud_credential_name.
bootstrap/main.tf:
- Add check block: create_ssh_key must be true when create_cloudinit_secret
is true (cloud-init template embeds the generated SSH public key)
- Add check block: existing_cloudinit_secret_name must be non-empty when
create_cloudinit_secret = false
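Based on the check names and conditions described in those bullets, the added blocks plausibly look like this (the exact error messages are assumptions):

```hcl
check "ssh_key_required_for_cloudinit" {
  assert {
    condition     = !var.create_cloudinit_secret || var.create_ssh_key
    error_message = "create_ssh_key must be true when create_cloudinit_secret = true (the cloud-init template embeds the generated SSH public key)."
  }
}

check "existing_cloudinit_secret_name_required" {
  assert {
    condition     = var.create_cloudinit_secret || var.existing_cloudinit_secret_name != ""
    error_message = "existing_cloudinit_secret_name must be set when create_cloudinit_secret = false."
  }
}
```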
bootstrap/examples/basic/main.tf:
- Replace removed rancher_admin_password with bootstrap_password
management/cluster-roles/main.tf:
- Split harvesterhci.io rule: virtualmachineimages stays read-only;
keypairs gets create+delete so vm-manager can inject/remove SSH keys
via workloads/vm without RBAC denials
management/cluster-roles/README.md:
- Add network-manager to roles table and network_manager_role_id to
outputs table to document the cluster-scoped role
management/harvester-integration/examples/basic/main.tf:
- Remove kubernetes provider (no longer required by module)
- Replace removed rancher_hostname/rancher_lb_ip with cloud_credential_name
and cluster_labels; update module source to wso2 org
management/networking/examples/basic/main.tf:
management/storage/examples/basic/main.tf:
- Bump harvester provider from ~> 0.6.0 to ~> 1.7 to match all other
modules and prevent init failures from conflicting constraints
workloads/vm/main.tf:
- Replace sensitive-derived count with non-sensitive create_ssh_key bool
- Make wait_for_lease configurable via variable (default true)
workloads/vm/variables.tf:
- Add create_ssh_key and wait_for_lease variables
workloads/vm/README.md:
- Fix cloud-init resource description (no standalone secret is created)
- Use file(pathexpand("~/.ssh/...")) in examples (file() does not expand ~)
- Add create_ssh_key and wait_for_lease to inputs table
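The file()/pathexpand() fix from the README bullet above can be illustrated as follows (the `ssh_public_key` attribute name and key path are illustrative, not from this PR):

```hcl
# file() does not expand "~"; pathexpand() resolves it to the user's home directory.
ssh_public_key = file(pathexpand("~/.ssh/id_rsa.pub"))
```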
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Summary
This PR brings a collection of production-readiness and brownfield-import improvements across five modules, plus a new
workloads/vmmodule.modules/bootstrap — brownfield support
network_typevariable (masquerade|bridge) with associatednetwork_interface_name,network_name,network_mac_address. Enables attaching existing production VMs that use direct VLAN bridge networking.create_ssh_key = false+ssh_key_idslets callers attach existing Harvester SSH keys instead of generating new ones.create_cloudinit_secret = false+existing_cloudinit_secret_namefor brownfield VMs whose cloud-init already ran at initial provision time.vm_cpu,vm_disk_name,vm_disk_size,vm_disk_auto_deleteto match the exact spec of an imported VM.enable_usb_tabletflag for console cursor behaviour.create_lb = false+static_rancher_ipfor VMs directly reachable via bridge IP (no dedicated LoadBalancer/IPPool required).rancher2_bootstrapresource andrancher/rancher2fromrequired_providers— callers handle Rancher password setup in a separate phase.modules/management/harvester-integration — brownfield skip flags
rancher2_settingwithrancher2_featurefor Harvester feature flags.manage_feature_flags,create_cloud_credential,apply_registration,manage_appskip flags — set any tofalsefor resources that already exist.hashicorp/kubernetesprovider dependency and thekubernetes_config_map_v1_dataCoreDNS patch (CoreDNS is now patched via a separate mechanism outside this module).rancher_hostname/rancher_lb_ipwithcloud_credential_nameandcluster_labels.harvester-extensions→harvester(matches Rancher's built-in name).modules/management/cluster-roles — vm-manager role
rancher2_role_template.vm_manager: full VM lifecycle within a tenant project (create, update, delete, power ops, console/VNC, data volumes). Intended for product team members who own their VMs end-to-end.modules/identity/providers/asgardeo — configurable claims
requested_claimsvariable (list of claim URI objects) with sensible defaults (email, username, groups) for Rancher RBAC.modules/identity/rancher-oidc — configurable groups field
groups_fieldvariable to set the JWT claim name for group memberships (default:"groups").modules/workloads/vm — new module
Test plan
terraform init && terraform validatepasses for bootstrap, harvester-integration, cluster-roles, identity/*, and workloads/vm modulesterraform planon 00-bootstrap withcreate_ssh_key=false,create_cloudinit_secret=false,create_lb=false,network_type=bridgeshows no spurious changes against a brownfield-imported stateterraform planon 02-management withmanage_feature_flags=false,create_cloud_credential=false,apply_registration=false,manage_app=falseshows no changes against an already-integrated Harvester clustervm-managerrole template appears in Rancher UI and can be bound to a group in a tenant project🤖 Generated with Claude Code
Summary by CodeRabbit
New Features
Enhancements
Documentation