
Worker node scale down problem in Kubernetes #10233

@ahmadamirahmadi007

Description


Problem

I cannot scale down my worker node in the Kubernetes cluster. When I check the CloudStack log, I find the following error:

2025-01-22 07:53:20,644 ERROR [c.c.u.s.SshHelper] (API-Job-Executor-125:ctx-6dc068a9 job-18918 ctx-e219b42e) (logid:e7efb240) SSH execution of command sudo /opt/bin/kubectl drain test2-node-1948cfb170b --ignore-daemonsets --delete-local-data has an error status code in return. Result output: error: unknown flag: --delete-local-data
2025-01-22 07:53:20,645 WARN [c.c.k.c.a.KubernetesClusterActionWorker] (API-Job-Executor-125:ctx-6dc068a9 job-18918 ctx-e219b42e) (logid:e7efb240) Draining node: test2-node-1948cfb170b on VM : test2-node-1948cfb170b in Kubernetes cluster : test2 unsuccessful
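
This points at the root cause: kubectl deprecated --delete-local-data in v1.20, renamed it to --delete-emptydir-data, and removed the old flag in later releases, so the drain command CloudStack issues fails against kubectl v1.31.1. As a manual workaround (a sketch, assuming SSH access to the node and that test2-node-1948cfb170b is the worker being removed), the drain succeeds with the renamed flag:

# Drain the worker using the renamed flag (kubectl v1.20+)
sudo /opt/bin/kubectl drain test2-node-1948cfb170b --ignore-daemonsets --delete-emptydir-data
# Then remove the drained worker from the cluster
sudo /opt/bin/kubectl delete node test2-node-1948cfb170b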

Versions

CloudStack 4.19.1.3
Kubernetes version 1.31.1
kubectl version v1.31.1

The steps to reproduce the bug

From the Scale Kubernetes Cluster option in the cluster menu, I scale the worker nodes down from 2 to 1. The same failure can be reproduced through the API, as sketched below.
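
A minimal sketch using CloudMonkey (cmk) and CloudStack's scaleKubernetesCluster API call; the cluster UUID here is a placeholder:

# Placeholder UUID; substitute the real ID from listKubernetesClusters
cmk scale kubernetescluster id=0a1b2c3d-... size=1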

What to do about it?

No response
