I have an issue with my Kubernetes cluster on DigitalOcean Managed Kubernetes (DOKS) with the Cluster Autoscaler enabled and HA mode enabled for Linkerd: the cluster is not scaled back down even after resource usage returns to the normal/low levels it had before Linkerd was installed.
Here are some statuses from my cluster:
$ linkerd version
Client version: stable-2.11.1
Server version: stable-2.11.1
$ linkerd check
Linkerd core checks
===================
kubernetes-api
--------------
√ can initialize the client
√ can query the Kubernetes API
kubernetes-version
------------------
√ is running the minimum Kubernetes API version
√ is running the minimum kubectl version
linkerd-existence
-----------------
√ 'linkerd-config' config map exists
√ heartbeat ServiceAccount exist
√ control plane replica sets are ready
√ no unschedulable pods
√ control plane pods are ready
linkerd-config
--------------
√ control plane Namespace exists
√ control plane ClusterRoles exist
√ control plane ClusterRoleBindings exist
√ control plane ServiceAccounts exist
√ control plane CustomResourceDefinitions exist
√ control plane MutatingWebhookConfigurations exist
√ control plane ValidatingWebhookConfigurations exist
linkerd-identity
----------------
√ certificate config is valid
√ trust anchors are using supported crypto algorithm
√ trust anchors are within their validity period
√ trust anchors are valid for at least 60 days
√ issuer cert is using supported crypto algorithm
√ issuer cert is within its validity period
√ issuer cert is valid for at least 60 days
√ issuer cert is issued by the trust anchor
linkerd-webhooks-and-apisvc-tls
-------------------------------
√ proxy-injector webhook has valid cert
√ proxy-injector cert is valid for at least 60 days
√ sp-validator webhook has valid cert
√ sp-validator cert is valid for at least 60 days
√ policy-validator webhook has valid cert
√ policy-validator cert is valid for at least 60 days
linkerd-version
---------------
√ can determine the latest version
√ cli is up-to-date
control-plane-version
---------------------
√ can retrieve the control plane version
√ control plane is up-to-date
√ control plane and cli versions match
linkerd-control-plane-proxy
---------------------------
√ control plane proxies are healthy
√ control plane proxies are up-to-date
√ control plane proxies and cli versions match
linkerd-ha-checks
-----------------
√ pod injection disabled on kube-system
√ multiple replicas of control plane pods
Status check results are √
Linkerd extensions checks
=========================
linkerd-viz
-----------
√ linkerd-viz Namespace exists
√ linkerd-viz ClusterRoles exist
√ linkerd-viz ClusterRoleBindings exist
√ tap API server has valid cert
√ tap API server cert is valid for at least 60 days
√ tap API service is running
√ linkerd-viz pods are injected
√ viz extension pods are running
√ viz extension proxies are healthy
√ viz extension proxies are up-to-date
√ viz extension proxies and cli versions match
√ prometheus is installed and configured correctly
√ can initialize the client
√ viz extension self-check
Status check results are √
$ kubectl get configmap cluster-autoscaler-status -o yaml -n kube-system
apiVersion: v1
data:
  status: |+
    Cluster-autoscaler status at 2022-03-18 01:31:23.706010744 +0000 UTC:
    Cluster-wide:
      Health:      Healthy (ready=5 unready=0 notStarted=0 longNotStarted=0 registered=5 longUnregistered=0)
                   LastProbeTime:      2022-03-18 01:31:23.257276343 +0000 UTC m=+3458255.760704157
                   LastTransitionTime: 2022-02-06 00:54:49.293218172 +0000 UTC m=+61.796645863
      ScaleUp:     NoActivity (ready=5 registered=5)
                   LastProbeTime:      2022-03-18 01:31:23.257276343 +0000 UTC m=+3458255.760704157
                   LastTransitionTime: 2022-03-17 14:29:28.436065092 +0000 UTC m=+3418540.939492833
      ScaleDown:   NoCandidates (candidates=0)
                   LastProbeTime:      2022-03-18 01:31:23.257276343 +0000 UTC m=+3458255.760704157
                   LastTransitionTime: 2022-03-16 04:27:32.351983709 +0000 UTC m=+3296024.855411720

    NodeGroups:
      Name:        f8504275-4a95-4391-8c83-3fb5ad1994b0
      Health:      Healthy (ready=5 unready=0 notStarted=0 longNotStarted=0 registered=5 longUnregistered=0 cloudProviderTarget=5 (minSize=3, maxSize=5))
                   LastProbeTime:      2022-03-18 01:31:23.257276343 +0000 UTC m=+3458255.760704157
                   LastTransitionTime: 2022-02-06 00:54:49.293218172 +0000 UTC m=+61.796645863
      ScaleUp:     NoActivity (ready=5 cloudProviderTarget=5)
                   LastProbeTime:      2022-03-18 01:31:23.257276343 +0000 UTC m=+3458255.760704157
                   LastTransitionTime: 2022-03-17 14:29:28.436065092 +0000 UTC m=+3418540.939492833
      ScaleDown:   NoCandidates (candidates=0)
                   LastProbeTime:      2022-03-18 01:31:23.257276343 +0000 UTC m=+3458255.760704157
                   LastTransitionTime: 2022-03-16 04:27:32.351983709 +0000 UTC m=+3296024.855411720
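A note on the ScaleDown: NoCandidates status above: the autoscaler only removes a node when every pod on it can be evicted, and with the default --skip-nodes-with-local-storage=true, any pod mounting an emptyDir volume pins its node. A minimal sketch for spotting such pods from the JSON that `kubectl get pods --all-namespaces -o json` prints (the inline sample and pod names here are invented for illustration):

```python
# Hedged sketch: find pods that mount emptyDir volumes, since they can keep
# a node from ever becoming a scale-down candidate when the autoscaler runs
# with its default --skip-nodes-with-local-storage=true.
import json

# Stand-in for real `kubectl get pods --all-namespaces -o json` output.
pods_json = """
{"items": [
  {"metadata": {"namespace": "linkerd", "name": "linkerd-destination-abc"},
   "spec": {"volumes": [{"name": "cache", "emptyDir": {}}]}},
  {"metadata": {"namespace": "default", "name": "web-xyz"},
   "spec": {"volumes": [{"name": "cfg", "configMap": {"name": "web"}}]}}
]}
"""

def pods_with_empty_dir(doc: dict) -> list[str]:
    """Return namespace/name for every pod that has an emptyDir volume."""
    out = []
    for pod in doc["items"]:
        volumes = pod["spec"].get("volumes") or []
        if any("emptyDir" in v for v in volumes):
            out.append(f'{pod["metadata"]["namespace"]}/{pod["metadata"]["name"]}')
    return out

print(pods_with_empty_dir(json.loads(pods_json)))
# prints ['linkerd/linkerd-destination-abc']
```

On a real cluster, feed the function the parsed output of `kubectl get pods --all-namespaces -o json` instead of the sample above.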
I have followed the HA instructions too, including the pod annotation:

cluster-autoscaler.kubernetes.io/safe-to-evict: "true"

However, --skip-nodes-with-local-storage=false cannot be applied, because the Cluster Autoscaler is fully managed by DigitalOcean and we have no access to pass any options to it.

Any idea what is going wrong here? Thanks.
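For reference, the safe-to-evict annotation is read by the autoscaler on the pod itself, so it must sit in the pod template of each Linkerd control-plane workload, not on the Deployment's own metadata. A minimal placement sketch (the rest of the Deployment spec is elided):

```yaml
# Sketch of annotation placement only — everything else in the Deployment
# manifest is omitted.
spec:
  template:
    metadata:
      annotations:
        cluster-autoscaler.kubernetes.io/safe-to-evict: "true"
```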