Autoscaling doesn't work #1266
-
Description

Hello, I've been struggling all day and I just can't get autoscaling to start. The cluster-autoscaler logs:

Set node group draining-node-pool size from 0 to 0, expected delta 0
Kube.tf file

locals {
  hcloud_token = ""
}

module "kube-hetzner" {
  providers = {
    hcloud = hcloud
  }
  hcloud_token = var.hcloud_token != "" ? var.hcloud_token : local.hcloud_token
  source       = "kube-hetzner/kube-hetzner/hcloud"
  cluster_name = "prod"

  automatically_upgrade_os  = false
  automatically_upgrade_k3s = false

  ssh_public_key  = file("")
  ssh_private_key = file("")

  network_region = "eu-central"

  enable_rancher             = true
  rancher_hostname           = ""
  rancher_install_channel    = "stable"
  rancher_bootstrap_password = ""

  initial_k3s_channel = "v1.25"

  control_plane_nodepools = [
    {
      name        = "control-plane-hel1",
      server_type = "cpx31",
      location    = "hel1",
      labels      = [],
      taints      = [],
      count       = 1
      backups     = true
    }
  ]

  agent_nodepools = [
    {
      name        = "agent-heavy",
      server_type = "cpx31",
      location    = "hel1",
      labels      = [],
      taints      = [],
      count       = 1
      backups     = true
    }
  ]

  autoscaler_nodepools = [
    {
      name        = "autoscaled-peak-load"
      server_type = "cpx21" // Choose a server type with sufficient resources
      location    = "hel1"  // Specify the desired location
      min_nodes   = 0       // Set a minimum number of nodes
      max_nodes   = 5       // Set a maximum number to scale up to during peak load
      #labels = {
      #  "node.kubernetes.io/role": "peak-workloads"
      #}
      #taints = [{
      #  key: "node.kubernetes.io/role"
      #  value: "peak-workloads"
      #  effect: "NoExecute"
      #}]
    }
  ]

  #load_balancer_type     = "lb11"
  #load_balancer_location = "hel1"

  cluster_autoscaler_image            = "registry.k8s.io/autoscaling/cluster-autoscaler"
  cluster_autoscaler_version          = "v1.29.0"
  cluster_autoscaler_log_level        = 5
  cluster_autoscaler_log_to_stderr    = true
  cluster_autoscaler_stderr_threshold = "INFO"

  #cluster_autoscaler_extra_args = [
  #  "--ignore-daemonsets-utilization=true",
  #  "--enforce-node-group-min-size=true",
  #]

  cni_plugin                    = "cilium"
  cilium_version                = "v1.14.0"
  cilium_routing_mode           = "native"
  cilium_egress_gateway_enabled = true
}

provider "hcloud" {
  token = var.hcloud_token != "" ? var.hcloud_token : local.hcloud_token
}

terraform {
  required_version = ">= 1.5.0"
  required_providers {
    hcloud = {
      source  = "hetznercloud/hcloud"
      version = ">= 1.43.0"
    }
  }
}

output "kubeconfig" {
  value     = module.kube-hetzner.kubeconfig
  sensitive = true
}

variable "hcloud_token" {
  sensitive = true
  default   = ""
}

Screenshots

No response

Platform

Linux
-
@homergleason Remove those; we maintain a fork until the Hetzner-related fixes are shipped in a future upstream version. So just delete these two lines to use our own fork:

cluster_autoscaler_image   = "registry.k8s.io/autoscaling/cluster-autoscaler"
cluster_autoscaler_version = "v1.29.0"

Then just make sure to give it enough load. With such powerful nodes, you may need to set much bigger resource requests than in the example below:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: force-scale-up
spec:
  replicas: 1
  selector:
    matchLabels:
      app: force-scale-up
  template:
    metadata:
      labels:
        app: force-scale-up
    spec:
      containers:
        - name: busybox
          image: busybox
          args:
            - /bin/sh
            - -c
            - "while true; do echo 'Forcing scale up...'; sleep 60; done"
          resources:
            requests:
              cpu: 2000m   # Requesting a high amount of CPU to force scale up
              memory: 4Gi  # Requesting a high amount of memory to force scale up
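One caveat: the requests must still fit on a single node of the autoscaled pool, otherwise the cluster-autoscaler concludes that no node group can accommodate the pod and never scales up. With the cpx21 pool above (3 vCPU, 4 GB RAM), the 4Gi memory request may already exceed a node's allocatable memory once the kubelet's reservations are subtracted, so size the requests somewhat below the node's capacity. And if you uncomment the taints on the autoscaled-peak-load pool, the test Deployment also needs a matching toleration. A minimal sketch, assuming the commented-out key/value/effect from the kube.tf above:

# Hypothetical fragment of the force-scale-up Deployment above; only needed
# when the NoExecute taint on the autoscaler nodepool is enabled. The key,
# value, and effect must match the taint defined in autoscaler_nodepools.
spec:
  template:
    spec:
      tolerations:
        - key: "node.kubernetes.io/role"
          operator: "Equal"
          value: "peak-workloads"
          effect: "NoExecute"

After you delete the Deployment again, the autoscaler removes the extra nodes once the scale-down delay has passed (10 minutes by default).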
-
Hi there,
This may be related to this part of the template?
Should be fixed in v2.13.3.
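To pick up that release, you can pin the module version in kube.tf and re-initialize. A minimal sketch, assuming the module block from the question above (only the added version line is new):

module "kube-hetzner" {
  source  = "kube-hetzner/kube-hetzner/hcloud"
  version = ">= 2.13.3" # release that should contain the autoscaler fix
  # ... the rest of the settings from the kube.tf above ...
}

Then run terraform init -upgrade followed by terraform apply.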