
Commit b81d587

Only keep the kubectl delete command rather than options
1 parent 5912569 commit b81d587


1 file changed: +6 -16 lines changed


modules/concepts/pages/operations/pod_disruptions.adoc

Lines changed: 6 additions & 16 deletions
@@ -136,26 +136,16 @@ If you encounter this only manually deleting those pods can help out of this sit
 A Pod deletion (other than evictions) does *not* respect PDBs, so the Pods can be restarted anyway.
 All restarted Pods will get a new certificate, the stacklet should turn healthy again.
 
-==== k9s
-If you are using `k9s` you can start it in your terminal
+=== Restore working state
+Delete Pods with e.g. `kubectl`.
 [source, bash]
 ----
-k9s
+kubectl delete pod -l app.kubernetes.io/name=zookeeper,app.kubernetes.io/instance=simple-zk
+pod "simple-zk-server-default-0" deleted
+pod "simple-zk-server-default-1" deleted
+pod "simple-zk-server-default-2" deleted
 ----
-and type `0` to view all namespaces, then type e.g. `/zookeeper` and hit enter. Go up and down to your Pod, press `CTRL + D` and confirm to delete the Pod. Repeat for all other instances of the stuck product.
 
-==== kubectl
-List your Pods with
-[source, bash]
-----
-kubectl get pods -A
-----
-and copy the name of the Pod you want to delete. Type
-[source, bash]
-----
-kubectl delete pod zookeeper-server-default-0
-----
-to delete the instance with the name `zookeeper-server-default-0`. Repeat it for all instances of your product.
 
 === Preventing this situation
 The best measure is to make sure that commons-operator is always running, so that it can restart the Pods before the certificates expire.
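A note on the retained PDB context line: a direct Pod deletion bypasses PodDisruptionBudgets because only the eviction API consults them, which is why this procedure works even while the stacklet's PDB permits no disruptions. A minimal sketch of the distinction, reusing the Pod name from the diff above (the node name is a placeholder; these commands are illustrative, not part of this change):

[source, bash]
----
# Eviction-based removal (what kubectl drain uses) consults the PDB and is
# blocked while the budget permits no further disruptions
kubectl drain my-node --ignore-daemonsets

# A plain delete is not an eviction, so it proceeds regardless of the PDB
kubectl delete pod simple-zk-server-default-0
----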
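Likewise, the new instructions rely on the deleted Pods being recreated automatically: ZooKeeper (like other Stackable products) runs as a StatefulSet, so the Pods come back on their own with freshly issued certificates. A verification sketch, reusing the label selector from the diff (again illustrative only):

[source, bash]
----
# Watch the StatefulSet controller recreate the deleted Pods
kubectl get pod -l app.kubernetes.io/name=zookeeper,app.kubernetes.io/instance=simple-zk --watch

# Once all Pods are Running, fresh creation timestamps confirm the restart
kubectl get pod -l app.kubernetes.io/name=zookeeper,app.kubernetes.io/instance=simple-zk \
  -o custom-columns=NAME:.metadata.name,CREATED:.metadata.creationTimestamp
----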
