Terraform module to deploy a Talos Kubernetes cluster on Turing Pi 2.5 nodes using the native Talos Terraform Provider.
```hcl
module "cluster" {
  source  = "jfreed-dev/modules/turingpi//modules/talos-cluster"
  version = ">= 1.3.0"

  cluster_name     = "my-cluster"
  cluster_endpoint = "https://192.168.1.101:6443"

  # Pin versions to match your Talos image
  talos_version      = "v1.9.2"
  kubernetes_version = "v1.32.1"

  control_plane = [
    { host = "192.168.1.101", hostname = "cp1" }
  ]

  workers = [
    { host = "192.168.1.102", hostname = "worker1" },
    { host = "192.168.1.103", hostname = "worker2" },
    { host = "192.168.1.104", hostname = "worker3" }
  ]

  kubeconfig_path = "./kubeconfig"
}
```

The same cluster with NVMe storage enabled for Longhorn:

```hcl
module "cluster" {
  source  = "jfreed-dev/modules/turingpi//modules/talos-cluster"
  version = ">= 1.3.0"

  cluster_name     = "my-cluster"
  cluster_endpoint = "https://192.168.1.101:6443"

  talos_version      = "v1.9.2"
  kubernetes_version = "v1.32.1"

  control_plane = [{ host = "192.168.1.101" }]

  workers = [
    { host = "192.168.1.102" },
    { host = "192.168.1.103" },
    { host = "192.168.1.104" }
  ]

  # Enable NVMe storage for Longhorn
  nvme_storage_enabled = true
  nvme_device          = "/dev/nvme0n1"
  nvme_mountpoint      = "/var/mnt/longhorn"
  nvme_control_plane   = true # Also configure NVMe on control plane

  kubeconfig_path  = "./kubeconfig"
  talosconfig_path = "./talosconfig"
}
```

## Requirements

| Name | Version |
|---|---|
| terraform | >= 1.0 |
| local | >= 2.0 |
| talos | >= 0.7 |
## Providers

| Name | Version |
|---|---|
| local | >= 2.0 |
| talos | >= 0.7 |
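These constraints correspond to a `required_providers` block along the following lines; a sketch only, with the standard registry addresses (`siderolabs/talos`, `hashicorp/local`) assumed as the provider sources:

```hcl
terraform {
  required_version = ">= 1.0"

  required_providers {
    # Native Talos provider from Sidero Labs
    talos = {
      source  = "siderolabs/talos"
      version = ">= 0.7"
    }
    # Used for writing kubeconfig/talosconfig files to disk
    local = {
      source  = "hashicorp/local"
      version = ">= 2.0"
    }
  }
}
```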
## Inputs

| Name | Description | Type | Default | Required |
|---|---|---|---|---|
| cluster_endpoint | Kubernetes API endpoint (https://IP:6443) | string | n/a | yes |
| cluster_name | Name of the Kubernetes cluster | string | n/a | yes |
| control_plane | Control plane node configurations | list(object({...})) | n/a | yes |
| controlplane_patches | Config patches for control plane nodes (YAML strings) | list(string) | [] | no |
| kubeconfig_path | Path to write kubeconfig file (optional) | string | null | no |
| kubernetes_version | Kubernetes version (e.g., 'v1.32.1'). Must be compatible with the Talos version. | string | null | no |
| nvme_control_plane | Configure NVMe on control plane nodes (in addition to workers) | bool | true | no |
| nvme_device | NVMe device path | string | "/dev/nvme0n1" | no |
| nvme_mountpoint | Mount point for NVMe storage | string | "/var/mnt/longhorn" | no |
| nvme_storage_enabled | Enable NVMe storage configuration for Longhorn | bool | false | no |
| talos_version | Talos version for config generation (e.g., 'v1.11.6'). Must match the Talos image on nodes. | string | null | no |
| talosconfig_path | Path to write talosconfig file (optional) | string | null | no |
| worker_patches | Config patches for worker nodes (YAML strings) | list(string) | [] | no |
| workers | Worker node configurations | list(object({...})) | [] | no |
## Outputs

| Name | Description |
|---|---|
| client_configuration | Talos client configuration for talosctl |
| cluster_endpoint | Kubernetes API endpoint |
| cluster_name | Cluster name |
| kubeconfig | Kubeconfig for cluster access |
| kubeconfig_path | Path to kubeconfig file (if written) |
| machine_secrets | Talos machine secrets (for backup) |
| nvme_enabled | Whether NVMe storage is configured |
| nvme_mountpoint | NVMe mount point (if enabled) |
When `nvme_storage_enabled = true`, the module automatically generates Talos machine configuration patches to:

- Partition the NVMe device
- Mount it at the specified mountpoint
- Make it available for Longhorn distributed storage

This is equivalent to applying the following Talos config patch:

```yaml
machine:
  disks:
    - device: /dev/nvme0n1
      partitions:
        - mountpoint: /var/mnt/longhorn
```

After enabling NVMe storage, configure Longhorn to use it:
```hcl
module "longhorn" {
  source  = "jfreed-dev/modules/turingpi//modules/addons/longhorn"
  version = ">= 1.3.0"

  depends_on = [module.cluster]

  default_data_path         = "/var/mnt/longhorn"
  create_nvme_storage_class = true
  nvme_replica_count        = 2
}
```

Some addon modules require Talos system extensions. Without these extensions, certain features won't work:
| Addon Module | Required Extension | Purpose |
|---|---|---|
| longhorn | siderolabs/iscsi-tools | iSCSI support for distributed storage |
| longhorn (NFS) | siderolabs/nfs-utils | NFSv3 file locking support |
| VMs | siderolabs/qemu-guest-agent | QEMU guest agent service |
Use the Talos Image Factory to create custom images with extensions:

```shell
# Create a schematic with required extensions
curl -X POST https://factory.talos.dev/schematics \
  -H "Content-Type: application/yaml" \
  --data-binary @- << 'EOF'
customization:
  systemExtensions:
    officialExtensions:
      - siderolabs/iscsi-tools
      - siderolabs/util-linux-tools
EOF
# Response: {"id":"613e1592b2da41ae5e265e8789429f22e121aab91cb4deb6bc3c0b6262961245"}

# Download the image for ARM64 (Turing RK1)
curl -LO "https://factory.talos.dev/image/613e1592b2da41ae5e265e8789429f22e121aab91cb4deb6bc3c0b6262961245/v1.12.1/metal-arm64.raw.xz"
```

| Extensions | Schematic ID |
|---|---|
| iscsi-tools + util-linux-tools | 613e1592b2da41ae5e265e8789429f22e121aab91cb4deb6bc3c0b6262961245 |
Use these IDs to download images directly:
```
https://factory.talos.dev/image/{SCHEMATIC_ID}/{TALOS_VERSION}/metal-arm64.raw.xz
```
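The download URL above can be assembled from a schematic ID and Talos version, e.g. with a small helper (the function name is illustrative, not part of the module):

```shell
# Build an Image Factory download URL for a metal-arm64 image
# from a schematic ID and a Talos version.
factory_image_url() {
  local schematic_id="$1" talos_version="$2"
  printf 'https://factory.talos.dev/image/%s/%s/metal-arm64.raw.xz\n' \
    "$schematic_id" "$talos_version"
}

# Using the schematic ID from the table above:
factory_image_url 613e1592b2da41ae5e265e8789429f22e121aab91cb4deb6bc3c0b6262961245 v1.12.1
```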
Verify which extensions are installed on a node:

```shell
talosctl get extensions --nodes <NODE_IP>
```

Use the included wipe script to cleanly wipe all drives, shut down nodes, and verify power off via the TuringPi BMC:
```shell
# Dry run (shows commands without executing)
./scripts/talos-wipe.sh \
  --talosconfig ./talosconfig \
  --nodes 10.10.88.74,10.10.88.75,10.10.88.76 \
  --bmc 10.10.88.70 \
  --dry-run

# Execute wipe workflow
./scripts/talos-wipe.sh \
  --talosconfig ./talosconfig \
  --nodes 10.10.88.74,10.10.88.75,10.10.88.76 \
  --bmc 10.10.88.70

# Skip NVMe wipe (only wipe system partitions)
./scripts/talos-wipe.sh \
  --talosconfig ./talosconfig \
  --nodes 10.10.88.74,10.10.88.75,10.10.88.76 \
  --bmc 10.10.88.70 \
  --no-nvme
```

The script will:
- Wipe STATE and EPHEMERAL partitions
- Optionally wipe user disks (NVMe)
- Shut down all nodes
- Verify power off via BMC API
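The final power-off check talks to the BMC's HTTP API. A minimal sketch of building that request URL; the endpoint path below is an assumption about the TuringPi BMC firmware's API and should be confirmed against your BMC version:

```shell
# Sketch: build the TuringPi BMC power-status request URL.
# The /api/bmc?opt=get&type=power path is an assumption, not verified
# against every BMC firmware release.
bmc_power_status_url() {
  printf 'http://%s/api/bmc?opt=get&type=power\n' "$1"
}

bmc_power_status_url 10.10.88.70
# Query it with: curl -s "$(bmc_power_status_url 10.10.88.70)"
```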
Alternatively, reset nodes manually with talosctl:

```shell
# Reset and return to maintenance mode (wipes cluster state)
talosctl reset --nodes <NODE_IP> --graceful=false --reboot \
  --system-labels-to-wipe STATE \
  --system-labels-to-wipe EPHEMERAL

# Also wipe user data disks (NVMe, etc.)
talosctl reset --nodes <NODE_IP> --graceful=false --reboot \
  --system-labels-to-wipe STATE \
  --system-labels-to-wipe EPHEMERAL \
  --user-disks-to-wipe /dev/nvme0n1

# Reset multiple nodes at once
talosctl reset --nodes 10.10.88.74,10.10.88.75,10.10.88.76 \
  --graceful=false --reboot \
  --system-labels-to-wipe STATE \
  --system-labels-to-wipe EPHEMERAL
```

Note: After reset, nodes enter maintenance mode. To install a different OS, re-flash via the TuringPi BMC.
Nodes must be pre-flashed with Talos Linux. Use the `flash-nodes` module or the `turingpi_flash` resource.
Apache 2.0 - See LICENSE for details.