node storage without longhorn #465
Is it possible to store a PVC on node storage (for a database) without using Longhorn?
Sure, you can use hostPath volumes, and you can even use something like a host path provisioner to ease the pain of creating such mounts. You'll generally need to pair hostPath volumes with a nodeSelector, though, because any data stored in a hostPath won't be accessible from other nodes if you have more than one instance. I'm not sure whether that provisioner handles the node selection/taints automatically, but it's something to be aware of (a minimal sketch of a hostPath-backed PV/PVC is below).

Alternatively, you can provision Hetzner volumes for this task and use the Hetzner CSI driver to provision your requests automatically and attach the volume to the server where the pod intends to mount the data. These volumes give you a bit more flexibility in that they can be attached to other nodes if necessary, so your data becomes a bit less locked to a specific server, though their performance is significantly worse than the local NVMe storage. They can only be attached to a single server at a time and can only be mounted RWO. If you use one with a Deployment, you'll have to use the "Recreate" strategy rather than a rolling strategy; otherwise your rollouts will just be stuck waiting to bind to a volume that will never be released. Expect roughly a minute of downtime while the volume/pod moves between servers (see the second sketch below).

If you want an HA solution that lets you attach the pod to whatever server, whenever, wherever, use rolling strategies, and roll out new Deployments or swap StatefulSets between nodes with little or no downtime, then you need a backend capable of being mounted RWX. Longhorn is the "easiest" of these, and it only gets more complicated and convoluted from there (Rook Ceph, OpenEBS, etc.).
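Not from the thread, but here's a minimal sketch of the hostPath route. The names, label, path and size are made up for illustration; the point is the static PV with nodeAffinity, so the PVC (and any pod that claims it) stays pinned to the node that actually holds the data:

```yaml
# Hypothetical example: a hostPath-backed PV/PVC pinned to one node.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: db-data-pv
spec:
  capacity:
    storage: 10Gi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Retain
  storageClassName: manual
  hostPath:
    path: /var/lib/db-data            # directory on the node's local disk
  nodeAffinity:                       # keep the PV bound to the node that holds the data
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values: ["worker-1"]    # illustrative node name
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: manual
  resources:
    requests:
      storage: 10Gi
```

Because the PV carries the nodeAffinity, the scheduler will place any pod using the claim on that node; adding an explicit nodeSelector on the pod as well is mostly belt-and-suspenders.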
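And a sketch of the Hetzner volume route: a PVC served by the Hetzner CSI driver plus a Deployment using the Recreate strategy so the RWO volume is released before the replacement pod starts. The storage class name here assumes the default installed by the driver (check `kubectl get storageclass` in your cluster); the image and env are placeholders:

```yaml
# Hypothetical example: PVC backed by a Hetzner Cloud volume + Recreate rollout.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  accessModes: ["ReadWriteOnce"]     # Hetzner volumes attach to one server at a time
  storageClassName: hcloud-volumes   # name may differ in your cluster
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: db
spec:
  replicas: 1
  strategy:
    type: Recreate                   # old pod stops first, so the volume can re-attach
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: postgres
          image: postgres:16
          env:
            - name: POSTGRES_PASSWORD
              value: example-only    # use a Secret in practice
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: db-data
```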