Commit 09903e1

Merge pull request #29076 from xinydev/update-kubectl-flag
Update the deprecated kubectl flag `--delete-local-data` to `--delete-emptydir-data`
2 parents 2153412 + d095a14 commit 09903e1

File tree: 3 files changed (+6, −6 lines)
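The change itself is a one-word flag rename in five `kubectl drain` invocations across the docs. A minimal before/after sketch of the rename, assuming a placeholder `<node-name>` (the flag names come from the diff below; everything else here is illustrative):

```shell
# Deprecated spelling: still accepted at the time of this change,
# but slated for removal in a future kubectl release.
kubectl drain <node-name> --delete-local-data --force --ignore-daemonsets

# Replacement spelling: same behavior, i.e. pods using emptyDir volumes are
# evicted and their local data is deleted so the node can be drained.
kubectl drain <node-name> --delete-emptydir-data --force --ignore-daemonsets
```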

content/en/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md

Lines changed: 1 addition & 1 deletion
@@ -415,7 +415,7 @@ and make sure that the node is empty, then deconfigure the node.
 Talking to the control-plane node with the appropriate credentials, run:
 
 ```bash
-kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
+kubectl drain <node name> --delete-emptydir-data --force --ignore-daemonsets
 ```
 
 Before removing the node, reset the state installed by `kubeadm`:

content/en/docs/tasks/run-application/run-replicated-stateful-application.md

Lines changed: 1 addition & 1 deletion
@@ -379,7 +379,7 @@ This might impact other applications on the Node, so it's best to
 **only do this in a test cluster**.
 
 ```shell
-kubectl drain <node-name> --force --delete-local-data --ignore-daemonsets
+kubectl drain <node-name> --force --delete-emptydir-data --ignore-daemonsets
 ```
 
 Now you can watch as the Pod reschedules on a different Node:

content/en/docs/tutorials/stateful-application/zookeeper.md

Lines changed: 4 additions & 4 deletions
@@ -937,7 +937,7 @@ Use [`kubectl drain`](/docs/reference/generated/kubectl/kubectl-commands/#drain)
 drain the node on which the `zk-0` Pod is scheduled.
 
 ```shell
-kubectl drain $(kubectl get pod zk-0 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-local-data
+kubectl drain $(kubectl get pod zk-0 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-emptydir-data
 ```
 
 ```
@@ -972,7 +972,7 @@ Keep watching the `StatefulSet`'s Pods in the first terminal and drain the node
 `zk-1` is scheduled.
 
 ```shell
-kubectl drain $(kubectl get pod zk-1 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-local-data "kubernetes-node-ixsl" cordoned
+kubectl drain $(kubectl get pod zk-1 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-emptydir-data "kubernetes-node-ixsl" cordoned
 ```
 
 ```
@@ -1015,7 +1015,7 @@ Continue to watch the Pods of the stateful set, and drain the node on which
 `zk-2` is scheduled.
 
 ```shell
-kubectl drain $(kubectl get pod zk-2 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-local-data
+kubectl drain $(kubectl get pod zk-2 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-emptydir-data
 ```
 
 ```
@@ -1101,7 +1101,7 @@ zk-1 1/1 Running 0 13m
 Attempt to drain the node on which `zk-2` is scheduled.
 
 ```shell
-kubectl drain $(kubectl get pod zk-2 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-local-data
+kubectl drain $(kubectl get pod zk-2 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-emptydir-data
 ```
 
 The output:
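A small usage note on the drain commands above (not part of this commit): a node drained with any of these invocations stays cordoned, so Pods will not be scheduled back onto it until it is explicitly uncordoned. A minimal sketch, with `<node-name>` as a placeholder for whichever node was drained:

```shell
# Mark the node schedulable again once maintenance on it is finished.
kubectl uncordon <node-name>
```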
