Description
What problem are you trying to solve?
We've recently begun the migration from ASGs (Auto Scaling Groups) and the Cluster Autoscaler (CAS) to Karpenter. With ASGs, as part of cost-saving measures, our EKS clusters in lower environments are scaled down during off hours and weekends and scaled back up during office hours. This was done by running a Lambda at a scheduled time to set the ASG min/max/desired values to 0. Before the update to 0, the current min/max/desired values are captured and stored in an SSM parameter. For the scale-up, the Lambda reads this SSM parameter and restores the ASG min/max/desired values. With Karpenter, this is not possible.
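For context, here is a minimal sketch of the kind of scheduled Lambda described above. The ASG name and SSM parameter path are illustrative, not our actual setup:

```python
import json
import boto3

asg = boto3.client("autoscaling")
ssm = boto3.client("ssm")

ASG_NAME = "eks-worker-asg"          # illustrative ASG name
PARAM = "/scaling/eks-worker-asg"    # illustrative SSM parameter path


def scale_down():
    # Capture the current sizes before zeroing them out.
    group = asg.describe_auto_scaling_groups(
        AutoScalingGroupNames=[ASG_NAME]
    )["AutoScalingGroups"][0]
    saved = {
        "MinSize": group["MinSize"],
        "MaxSize": group["MaxSize"],
        "DesiredCapacity": group["DesiredCapacity"],
    }
    ssm.put_parameter(Name=PARAM, Value=json.dumps(saved),
                      Type="String", Overwrite=True)
    asg.update_auto_scaling_group(
        AutoScalingGroupName=ASG_NAME,
        MinSize=0, MaxSize=0, DesiredCapacity=0,
    )


def scale_up():
    # Restore the sizes captured at scale-down time.
    saved = json.loads(ssm.get_parameter(Name=PARAM)["Parameter"]["Value"])
    asg.update_auto_scaling_group(AutoScalingGroupName=ASG_NAME, **saved)
```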
As a workaround, we have a Lambda that patches the CPU limit of the NodePool to 0 so that no new Karpenter nodes are provisioned, and then deletes the previously provisioned Karpenter nodes (see the sketch below). We have a mix of workloads running in the cluster, some using HPA and some not, so scaling down all of the Deployments to remove the Karpenter-provisioned nodes will not work. It has also been suggested to delete the NodePool and reapply it via a CronJob, but that will not work either since some of our clusters run in a controlled environment.
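For reference, a rough sketch of that workaround, assuming the Karpenter v1 NodePool API and the `karpenter.sh/nodepool` node label (the NodePool name is illustrative; adjust the API version to match your Karpenter release):

```python
from kubernetes import client, config

config.load_incluster_config()  # or config.load_kube_config() when run locally

NODEPOOL = "default"  # illustrative NodePool name


def scale_down():
    # Set the NodePool CPU limit to 0 so Karpenter provisions no new nodes.
    client.CustomObjectsApi().patch_cluster_custom_object(
        group="karpenter.sh", version="v1", plural="nodepools",
        name=NODEPOOL, body={"spec": {"limits": {"cpu": "0"}}},
    )
    # Delete the existing Karpenter-provisioned nodes; Karpenter's finalizer
    # drains them and terminates the underlying EC2 instances.
    core = client.CoreV1Api()
    for node in core.list_node(
        label_selector=f"karpenter.sh/nodepool={NODEPOOL}"
    ).items:
        core.delete_node(node.metadata.name)
```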
The ask here is to introduce a feature in Karpenter that handles scaling all Karpenter-provisioned nodes down/up on demand, either via a flag or by reacting to an update of the CPU limit: when scaled down, Karpenter would not provision any new nodes and would also clean up previously provisioned nodes, without requiring additional CronJobs, Lambdas, or NodePool deletions.
How important is this feature to you?
This feature is important as it will help with AWS cost savings by not having EC2 instances running during off hours and by not having to add extra components (Lambdas, CronJobs, etc.) to manage scaling of Karpenter-provisioned instances.
- Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
- Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
- If you are interested in working on this issue or have submitted a pull request, please leave a comment