Is your feature request related to a problem?/Why is this needed
We use this driver on our EKS cluster. The cluster has a wide range of instance types, from r6a.large up to r6a.32xlarge. We have noticed that the daemonset pods on the larger instances run out of memory when a large number of pods on the node use EFS. However, we currently can't increase the resources allocated to the daemonset on these instance types without also increasing the resources requested on the smaller nodes, which would result in a significant part of those nodes being dedicated to these daemonset pods.
As an alternative, we would like to be able to configure different resource requests for the daemonset across the cluster. In our case we would do it by instance type, though I guess others might want to slice it differently.
Describe the solution you'd like in detail
The simplest approach I could think of for achieving this is to make it possible to configure the Helm chart so that multiple instances of it can be installed into the same cluster at the same time.
Something like the following:
- The first Helm chart install would deploy the controller, plus the necessary roles, bindings, etc.
- Subsequent chart installs would install just the daemonset, with the daemonset configured to target a specific set of instances, e.g. through a node selector (a rough sketch of the values follows below).
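For illustration, roughly the kind of values we had in mind. The keys `controller.create`, `node.create`, `node.nodeSelector`, and `node.resources` are assumptions about what the chart could expose for this, not necessarily what it exposes today:

```yaml
# values-controller.yaml -- first release: controller plus RBAC / service account
# (controller.create and node.create are hypothetical flags, shown for illustration)
controller:
  create: true
node:
  create: false            # skip the daemonset in this release

---
# values-node-large.yaml -- later release: daemonset only, pinned to the big nodes
controller:
  create: false
node:
  create: true
  nodeSelector:
    node.kubernetes.io/instance-type: r6a.32xlarge
  resources:
    requests:
      cpu: 100m
      memory: 512Mi
    limits:
      memory: 1Gi
```

Each values file would then back its own `helm install` release in the same namespace, e.g. one release for the controller and one per node group that needs different daemonset resources.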
We tried this and it almost worked. However, we ran into the following issues:
- The name of the daemonset in the Helm chart is hard-coded, so multiple chart installs (into the same namespace) result in a name clash.
- While it is possible to configure the Helm chart so as not to create the service account, some other resources are always created, in particular the role bindings and cluster roles.
These changes seem fairly simple and would allow people to install multiple copies of the Helm chart (and thus achieve the aim of this issue).
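For what it's worth, a minimal sketch of what those changes might look like inside the chart templates. The file names, the `rbac.create` flag, and the exact naming scheme are assumptions; the real chart may structure this differently:

```yaml
# templates/node-daemonset.yaml (fragment) -- name derived from the release
# instead of being hard-coded, so two releases in one namespace don't clash
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: {{ .Release.Name }}-efs-csi-node

---
# templates/clusterrolebinding.yaml (fragment) -- gated behind a hypothetical
# rbac.create value so daemonset-only releases can skip cluster-scoped RBAC
{{- if .Values.rbac.create }}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: {{ .Release.Name }}-efs-csi-provisioner-binding
{{- end }}
```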
Describe alternatives you've considered
I've not considered alternative options at present.