Replies: 2 comments 3 replies
-
@jonas-tm You could try draining the autoscaled node(s) in question; that will force all workloads to move to the agent nodes.
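A drain like the one suggested here could look as follows. The node name is a placeholder, and the two flags are typically needed because DaemonSet pods cannot be evicted and emptyDir data is deleted on eviction:

```shell
# Placeholder name - substitute the autoscaled node shown by `kubectl get nodes`.
NODE="autoscaled-node-1"

# Cordon first so no new pods land on the node while it drains.
kubectl cordon "$NODE"

# Evict all evictable pods. DaemonSet pods are skipped (they would be
# recreated anyway), and pods using emptyDir volumes lose that data.
kubectl drain "$NODE" --ignore-daemonsets --delete-emptydir-data
```

Once the node is empty, the cluster autoscaler should remove it after its configured scale-down delay.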
-
Hi @ifeulner - thanks for the implementation work on the autoscaler, it's pretty incredible. I'm hoping you can help me understand "It's important to have proper resource settings on the pods." a bit more. The autoscaler scaled up when I increased the replicas in my deployment, but when I reduced them, it didn't scale down; by that point a number of other pods had also been scheduled on the autoscaled node. Am I right that the autoscaler won't automatically drain the node to push those other pods back to the "general" agent nodes? Is there a strategy you use in this scenario to ensure that any additionally scheduled pods on the autoscaled node are drained and moved back to the general agent nodes, or do you recommend tainting the autoscaled node so that only the pods you want scheduled there tolerate it? Thanks for helping clarify!
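The taint approach asked about here could be sketched like this; the taint key, value, and node name are assumptions, not kube-hetzner defaults:

```shell
# Assumed taint on the autoscaled node: ordinary pods will no longer be
# scheduled there, because they lack a matching toleration.
kubectl taint nodes autoscaled-node-1 autoscaled=true:NoSchedule

# Workloads that SHOULD run on the autoscaled pool then need a matching
# toleration in their pod spec, e.g.:
#
#   tolerations:
#     - key: "autoscaled"
#       operator: "Equal"
#       value: "true"
#       effect: "NoSchedule"
#
# Everything else stays on the general agent nodes.
```

Note that a taint only affects future scheduling decisions; pods already running on the node keep running until they are evicted or drained.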
-
In my test cluster I have 2 agent nodes and an autoscale pool with a minimum of 0 nodes.
Since my agent nodes got deleted and recreated, most pods were moved to a newly started autoscale node.
Now all agent nodes are running fine again, but the pods are not moved back to them, which keeps the autoscale node alive.
Is there a config I can apply so that agent nodes are always preferred, allowing the autoscale node to be scaled down?
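One common approach can be sketched as follows; the `node-pool=agent` label and the `my-app` deployment name are assumptions, not kube-hetzner defaults. A preferred node affinity steers pods toward the agent nodes, but it only influences scheduling of new pods - already-running pods still need to be drained (or moved by a descheduler) before the autoscale node can go away.

```shell
# Assumed label "node-pool=agent" on the static agent nodes; check what your
# nodes actually carry with `kubectl get nodes --show-labels`.
cat <<'EOF' > affinity-patch.yaml
spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              preference:
                matchExpressions:
                  - key: node-pool
                    operator: In
                    values: ["agent"]
EOF

# Apply to a (hypothetical) deployment; its pods will prefer agent nodes
# the next time they are scheduled.
kubectl patch deployment my-app --patch-file affinity-patch.yaml
```

After draining the autoscale node once, the affinity keeps new pods on the agent nodes, so the autoscaler can remove the now-empty node.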