c9s starts VMs outside of Kubernetes #230
Replies: 3 comments 3 replies
-
no
AFAIK the cgroup settings on the pod should still let you set CPU/memory limits that apply to everything in the pod, including the Docker daemon and container, and/or a container with a VM inside it. AFAIK KubeVirt would be no different, in that it's just QEMU in a pod anyway.
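As a sketch of the point above (the pod name, image, and numbers here are illustrative, not from the project): limits set on a DinD-style pod are enforced through the pod's cgroup, so they cover every process spawned inside it, nested containers and QEMU included:

```yaml
# Hypothetical DinD-style pod; image and resource values are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: dind-example
spec:
  containers:
    - name: dind
      image: docker:dind        # illustrative image; DinD typically needs privileged
      securityContext:
        privileged: true
      resources:
        requests:               # honored by the scheduler when placing the pod
          cpu: "2"
          memory: 4Gi
        limits:                 # enforced via the pod's cgroup on everything
          cpu: "4"              # running inside the container, including
          memory: 8Gi           # nested containers and VMs
```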
-
Hi @groundsada, also, if you can show how the resource issue manifests itself, it would be interesting to collect more feedback around it. From the scheduling perspective it "works", as the requests will be honored when we schedule the pod that runs DinD.
-
Hi @carlmontanari and @hellt, likewise: I can run DinD, but I can't let users run DinD safely in a multi-tenant environment with thousands of users. This is where KubeVirt, used basically as a wrapper, solves it for us.
-
Hi all,
We’re trying to deploy c9s in our dual-stack, multi-tenant Kubernetes (k8s) cluster. The concern we’re running into is that it starts VMs outside of the Kubernetes control plane by mounting /dev/kvm and /dev/tun, which means those VMs run on the host outside Kubernetes’ own scheduling and resource tracking. This effectively bypasses Kubernetes’ resource accounting (CPU/Memory) and can conflict with tenant workloads on those nodes.
Two points we’d appreciate clarification on:
Does the project have plans to support KubeVirt?
Without such support, the nodes running c9s in our environment will see CPU/memory consumed by out-of-band VMs that Kubernetes cannot account for properly. Has anyone run into similar issues, or found patterns to address this?
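One pattern for making host-device usage visible to the scheduler (a sketch only, assuming a device plugin such as KubeVirt's is deployed to advertise /dev/kvm as an extended resource; the pod name and image below are illustrative) is to request the device as a countable resource instead of mounting it directly:

```yaml
# Hypothetical pod requesting the KVM device through a device plugin,
# so the scheduler accounts for it rather than a raw hostPath mount.
apiVersion: v1
kind: Pod
metadata:
  name: kvm-consumer-example
spec:
  containers:
    - name: vm-runner               # illustrative name
      image: example/vm-runner      # illustrative image
      resources:
        limits:
          devices.kubevirt.io/kvm: "1"  # extended resource advertised by
          cpu: "4"                      # KubeVirt's KVM device plugin
          memory: 8Gi
```

This keeps VM placement inside normal Kubernetes scheduling, which is part of what KubeVirt support would provide out of the box.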
Thanks!