Changes from 26 of 32 commits
43c9cb4 Update variables.tf (garry-t, Jun 20, 2025)
7c15422 Update agents.tf (garry-t, Jun 20, 2025)
3336faf Update locals.tf (garry-t, Jun 20, 2025)
574df9e Update agents.tf (garry-t, Jun 20, 2025)
4f77bb5 Update locals.tf (garry-t, Jun 20, 2025)
d68210e Update agents.tf (garry-t, Jun 20, 2025)
785d476 Update variables.tf (garry-t, Jun 20, 2025)
fffc5bd Update agents.tf (garry-t, Jul 27, 2025)
e4c33b3 Update variables.tf (garry-t, Jul 27, 2025)
8ad2141 Update variables.tf (garry-t, Aug 7, 2025)
94b77e6 Create customize-mount-path-longhorn.md (garry-t, Aug 7, 2025)
0ae30f0 Update kube.tf.example (garry-t, Aug 13, 2025)
54af473 - fix typo (garry-t, Oct 10, 2025)
b7ef617 - gemini suggest fix doc (garry-t, Oct 11, 2025)
968772b - gemini suggest fix comment (garry-t, Oct 11, 2025)
e720ea9 - gemini fix validation (garry-t, Oct 11, 2025)
cb6f68c - gemini yet another fixes (garry-t, Oct 11, 2025)
66d9676 - gemini mountpoint checks (garry-t, Oct 11, 2025)
012c59d Merge branch 'master' into mount_path (garry-t, Oct 23, 2025)
7bfada4 Merge branch 'master' into mount_path (garry-t, Oct 24, 2025)
aa01c5d - gemini added "set -e" (garry-t, Oct 24, 2025)
3b517e2 - gemini fix review (garry-t, Oct 27, 2025)
889ae77 - provide idempotent mount command (garry-t, Oct 28, 2025)
42a32cd - fix one more gemini review comment (garry-t, Oct 28, 2025)
436224a - improved regex one more time (garry-t, Oct 28, 2025)
b57b819 Update docs/customize-mount-path-longhorn.md (garry-t, Oct 28, 2025)
8fdce26 - improve validation variable (garry-t, Oct 28, 2025)
a9dcac1 - satisfy gemini (garry-t, Oct 28, 2025)
fe22dbd Update docs/customize-mount-path-longhorn.md (garry-t, Oct 28, 2025)
0d58ced - gemini yet another fix (garry-t, Oct 28, 2025)
1159886 - gemini yet another validation fix (garry-t, Oct 29, 2025)
2befe18 - gemini fix doc.. (garry-t, Oct 29, 2025)
9 changes: 6 additions & 3 deletions agents.tf
@@ -192,10 +192,13 @@ resource "null_resource" "configure_longhorn_volume" {
  # Start the k3s agent and wait for it to have started
  provisioner "remote-exec" {
    inline = [
-     "mkdir /var/longhorn >/dev/null 2>&1",
-     "mount -o discard,defaults ${hcloud_volume.longhorn_volume[each.key].linux_device} /var/longhorn",
+     "set -e",
+     "mkdir -p '${each.value.longhorn_mount_path}' >/dev/null",
+     "mountpoint -q '${each.value.longhorn_mount_path}' || mount -o discard,defaults ${hcloud_volume.longhorn_volume[each.key].linux_device} '${each.value.longhorn_mount_path}'",
      "${var.longhorn_fstype == "ext4" ? "resize2fs" : "xfs_growfs"} ${hcloud_volume.longhorn_volume[each.key].linux_device}",
+     # Match any non-comment line (^[^#]) with any first field, followed by a space and the mount path in the second column.
+     # This prevents false positives like /data matching /data1.
+     "awk -v path='${each.value.longhorn_mount_path}' '$0 !~ /^#/ && $2 == path { found=1; exit } END { exit !found }' /etc/fstab || echo '${hcloud_volume.longhorn_volume[each.key].linux_device} ${each.value.longhorn_mount_path} ${var.longhorn_fstype} discard,nofail,defaults 0 0' | tee -a /etc/fstab >/dev/null"
    ]
  }
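The idempotent fstab append above hinges on the awk guard. It can be exercised standalone against a scratch file (a sketch; `/tmp/fstab.test` stands in for `/etc/fstab`):

```shell
# Scratch fstab: a comment line plus a near-miss path (/var/longhorn1)
cat > /tmp/fstab.test <<'EOF'
# /var/longhorn mentioned in a comment should be ignored
/dev/sdb /var/longhorn1 ext4 defaults 0 0
EOF

# Same guard as in the provisioner: exit 0 only if the exact path is in column 2
fstab_has_path() {
  awk -v path="$1" '$0 !~ /^#/ && $2 == path { found=1; exit } END { exit !found }' /tmp/fstab.test
}

fstab_has_path /var/longhorn && echo present || echo absent   # absent: /var/longhorn1 is not an exact match
echo '/dev/sdb /var/longhorn ext4 defaults 0 0' >> /tmp/fstab.test
fstab_has_path /var/longhorn && echo present || echo absent   # present
```

Because the guard keys on the exact second column, re-running the provisioner appends the fstab line at most once.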

108 changes: 108 additions & 0 deletions docs/customize-mount-path-longhorn.md
@@ -0,0 +1,108 @@
## How to use a custom mount path for Longhorn
<hr>

To use NVMe and other external disks with Longhorn, you may need to mount an external disk somewhere other than the default location, while staying under the `/var/` folder. This can provide more storage capacity across the cluster, especially if you haven't disabled the default Longhorn disks.

> ⚠️ Note: You can set any mount path, but it must be within the `/var/` folder.

### How to set a custom mount path for your external disk?

1. You must enable Longhorn in your module.
```terraform
enable_longhorn = true
```
2. Set the Helm values for Longhorn. The `defaultDataPath` setting matters: Longhorn creates this path automatically, and the default storage class will point at the disks mounted there (e.g., NVMe).
```yaml
longhorn_values = <<EOT
defaultSettings:
nodeDrainPolicy: allow-if-replica-is-stopped
defaultDataPath: /var/longhorn
persistence:
defaultFsType: ext4
defaultClassReplicaCount: 3
defaultClass: true
EOT
```
3. In the `agent_nodepools` where you want to have a customized mount path, set the `longhorn_mount_path` variable.
```terraform
agent_nodepools = [
{
# ... other nodepool configuration
labels = ["role=monitoring", "storage=ssd"], # Label we use to filter nodes
longhorn_volume_size = 50,
longhorn_mount_path = "/var/lib/longhorn" # This is the custom path
}
]
```
4. Apply the changes. As a result, your external disks will be mounted to `/var/lib/longhorn`.
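The nodepool-level path can also be overridden per node. A sketch, assuming the usual string-keyed `nodes` map; names, sizes, and the override path are illustrative:

```terraform
agent_nodepools = [
  {
    # ... other nodepool configuration
    longhorn_volume_size = 50,
    longhorn_mount_path  = "/var/lib/longhorn", # pool-wide default
    nodes = {
      # Per-node override: this node mounts its volume elsewhere under /var/
      "1" : {
        longhorn_mount_path = "/var/lib/longhorn-nvme"
      }
    }
  }
]
```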

### How to configure Longhorn to use the new path?

After setting the custom mount path, you need to configure Longhorn to recognize and use it. This typically involves:

1. Patching the Longhorn nodes to add the new disk.
2. Creating a new StorageClass that uses the new disk.

Here is an example of how you can achieve this with Terraform:
```terraform
# Find the nodes with the 'ssd' storage label
data "kubernetes_nodes" "ssd_nodes" {
  metadata {
    labels = {
      "storage" = "ssd"
    }
  }
}

# Patch the selected Longhorn nodes to add the new disk
resource "null_resource" "longhorn_patch_external_disk" {
  for_each = {
    for node in data.kubernetes_nodes.ssd_nodes.nodes : node.metadata[0].name => node.metadata[0].name
  }
  triggers = {
    always_run = timestamp()
  }
  provisioner "local-exec" {
    command = <<-EOT
      # The "path" below must match the longhorn_mount_path set in the nodepool.
      # (JSON does not allow inline comments, so the note lives here instead.)
      KUBECONFIG=${var.kubeconfig_path} kubectl -n longhorn-system patch nodes.longhorn.io ${each.key} --type merge -p '{
        "spec": {
          "disks": {
            "external-ssd": {
              "path": "/var/lib/longhorn",
              "allowScheduling": true,
              "tags": ["ssd"]
            }
          }
        }
      }'
    EOT
  }
}
# Create a new StorageClass for the SSD-backed Longhorn storage
resource "kubernetes_manifest" "longhorn_ssd_storageclass" {
  manifest = {
    apiVersion = "storage.k8s.io/v1"
    kind       = "StorageClass"
    metadata = {
      name = "longhorn-ssd"
    }
    provisioner = "driver.longhorn.io"
    parameters = {
      numberOfReplicas    = "3"
      staleReplicaTimeout = "30"
      diskSelector        = "ssd"
      fromBackup          = ""
    }
    reclaimPolicy        = "Delete"
    allowVolumeExpansion = true
    volumeBindingMode    = "Immediate"
  }
  depends_on = [null_resource.longhorn_patch_external_disk]
}
```
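Once applied, workloads can request SSD-backed volumes through the new class. A minimal sketch (the claim name and size are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-ssd
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn-ssd
  resources:
    requests:
      storage: 10Gi
```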
4 changes: 3 additions & 1 deletion kube.tf.example
@@ -233,7 +233,9 @@ module "kube-hetzner" {
# Something worth noting is that Volume storage is slower than node storage, which is achieved by not mentioning longhorn_volume_size or setting it to 0.
# So for something like DBs, you definitely want node storage, for other things like backups, volume storage is fine, and cheaper.
# longhorn_volume_size = 20

# Set any path inside the /var/ folder to enable an additional storage class alongside the default one,
# which must be set in the Helm values to /var/longhorn.
# longhorn_mount_path = "/var/lib/longhorn"
# Enable automatic backups via Hetzner (default: false)
# backups = true
},
2 changes: 2 additions & 0 deletions locals.tf
@@ -259,6 +259,7 @@ locals {
nodepool_name : nodepool_obj.name,
server_type : nodepool_obj.server_type,
longhorn_volume_size : coalesce(nodepool_obj.longhorn_volume_size, 0),
longhorn_mount_path : nodepool_obj.longhorn_mount_path,
floating_ip : lookup(nodepool_obj, "floating_ip", false),
floating_ip_rdns : lookup(nodepool_obj, "floating_ip_rdns", false),
location : nodepool_obj.location,
@@ -288,6 +289,7 @@
nodepool_name : nodepool_obj.name,
server_type : nodepool_obj.server_type,
longhorn_volume_size : coalesce(nodepool_obj.longhorn_volume_size, 0),
longhorn_mount_path : nodepool_obj.longhorn_mount_path,
floating_ip : lookup(nodepool_obj, "floating_ip", false),
floating_ip_rdns : lookup(nodepool_obj, "floating_ip_rdns", false),
location : nodepool_obj.location,
18 changes: 18 additions & 0 deletions variables.tf
@@ -242,6 +242,7 @@ variable "agent_nodepools" {
labels = list(string)
taints = list(string)
longhorn_volume_size = optional(number)
longhorn_mount_path = optional(string, "/var/longhorn")
swap_size = optional(string, "")
zram_size = optional(string, "")
kubelet_args = optional(list(string), ["kube-reserved=cpu=50m,memory=300Mi,ephemeral-storage=1Gi", "system-reserved=cpu=250m,memory=300Mi"])
@@ -261,6 +262,7 @@
labels = optional(list(string))
taints = optional(list(string))
longhorn_volume_size = optional(number)
longhorn_mount_path = optional(string, null)
swap_size = optional(string, "")
zram_size = optional(string, "")
kubelet_args = optional(list(string), ["kube-reserved=cpu=50m,memory=300Mi,ephemeral-storage=1Gi", "system-reserved=cpu=250m,memory=300Mi"])
@@ -303,6 +305,22 @@
error_message = "Hetzner does not support networks with more than 100 servers."
}

validation {
  condition = alltrue(flatten([
    for np in var.agent_nodepools : concat(
      [
        can(regex("^/var/$|^/var/([a-zA-Z0-9._-]+(/[a-zA-Z0-9._-]+)*)$", np.longhorn_mount_path))
      ],
      [
        for node in values(coalesce(np.nodes, {})) : (
          node.longhorn_mount_path == null || can(regex("^/var/$|^/var/([a-zA-Z0-9._-]+(/[a-zA-Z0-9._-]+)*)$", node.longhorn_mount_path))
        )
      ]
    )
  ]))
  error_message = "Each longhorn_mount_path must start with '/var/', be a valid absolute path, and not end with a slash (except for '/var/'). This applies to both nodepool-level and node-level settings."
}
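The shapes this validation accepts can be spot-checked with `grep -E`, whose POSIX ERE behaves like Terraform's RE2 engine for this pattern (a sketch; the sample paths are illustrative):

```shell
pattern='^/var/$|^/var/([a-zA-Z0-9._-]+(/[a-zA-Z0-9._-]+)*)$'

# /var/ itself and nested paths under it pass; bare /var, trailing slashes,
# and paths outside /var/ fail
for p in /var/ /var/longhorn /var/lib/longhorn /var /var/longhorn/ /mnt/ssd; do
  if printf '%s\n' "$p" | grep -Eq "$pattern"; then
    echo "$p: valid"
  else
    echo "$p: invalid"
  fi
done
```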

}

variable "cluster_autoscaler_image" {
Expand Down