Replies: 5 comments 1 reply
-
Hi, sorry, but this sounds like an issue with your MySQL deployment.
-
I don't think this is a MySQL issue; the question about volumes is platform-agnostic. However, I can understand if this is not supported or recommended. If that is the case, I would like to hear it :)
-
Hey, what I mean is that Hetzner Volumes (hetzner-csi) do not support multi-attach.
-
Folks, just use the hcloud CLI to detach the volume, then reattach it to the correct node.
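As a rough sketch, the manual workaround could look like this with the hcloud CLI (the volume and server names here are placeholders, not from the thread):

```shell
# Detach the volume from the node that went offline
# ("my-data-volume" is a placeholder name).
hcloud volume detach my-data-volume

# Reattach it to the node where the replacement pod was scheduled
# ("healthy-node-1" is a placeholder server name).
hcloud volume attach my-data-volume --server healthy-node-1
```

After the volume is attached to the right node, the pending pod should be able to mount it on its next retry.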
-
I will close this discussion because I don't think it is related to this project, but I wanted to clear something up. My question was merely asking whether the maintainers had any idea why a crashed/shut-down pod did not release its PVC. Based on this line
I think it is clear that it is unrelated to this project and therefore doesn't fit here! I will try to figure it out myself (and of course share my findings if I do!). :)
-
Description
I am currently debugging an issue where one node in my four-node cluster goes offline. I've narrowed the problem down to Neo4J, which causes a memory spike. When this happens, the node grinds to a complete halt: SSH and Netdata stop responding, and Kubernetes taints it with NoSchedule and NoExecute. While the node should ideally not become unavailable in the first place, this part of the system is working as expected.
The next step is that K3s attempts to terminate the previous pod and redeploy it on a different node. This is exactly the desired 'self-healing' behavior. However, although I can see that the new pod is created, it never starts on top of Hetzner Volumes due to a multi-attach error:
The old pod is marked as Terminating
As far as I know, a Hetzner Volume cannot be attached to multiple nodes. However, is there a way to free the volume from the previous, terminating pod? If not, is there a reason this is not done? Additionally, is there a way to still allow the cluster to reschedule pods and 'heal' itself when a node goes offline?
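For reference, one common way to free a volume pinned to a dead node is to force-delete the pod stuck in Terminating and, if the CSI controller does not clean up on its own, remove the stale VolumeAttachment object. The pod and attachment names below are placeholders, not taken from this cluster:

```shell
# List VolumeAttachments to see which one pins the PV to the dead node.
kubectl get volumeattachments

# Force-delete the pod stuck in Terminating
# ("neo4j-0" is a placeholder pod name).
kubectl delete pod neo4j-0 --grace-period=0 --force

# If the attachment lingers, delete it so the attach/detach controller
# can attach the volume on the node running the new pod
# ("csi-..." is a placeholder attachment name from the list above).
kubectl delete volumeattachment csi-...
```

Note that force-deleting a pod only removes it from the API server; if the node is truly hung, make sure it is actually powered off before reattaching the volume elsewhere, or you risk data corruption on the filesystem.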