Commit 806518d (1 parent: aef1728)

kubeadm: update references of "master" label/taint for 1.24

In 1.24 kubeadm will:
- stop using the "master" label on control plane nodes.
- start tainting control plane nodes with both "master" and "control-plane" taints.

In 1.25 the "master" taint will be removed. Adjust references of the "master" label/taint to the above.

File tree: 9 files changed, +49 −29 lines

content/en/docs/reference/kubectl/cheatsheet.md

Lines changed: 8 additions & 8 deletions
@@ -49,7 +49,7 @@ detailed config file information.
 kubectl config view # Show Merged kubeconfig settings.
 
 # use multiple kubeconfig files at the same time and view merged config
-KUBECONFIG=~/.kube/config:~/.kube/kubconfig2
+KUBECONFIG=~/.kube/config:~/.kube/kubconfig2
 
 kubectl config view
 
@@ -58,7 +58,7 @@ kubectl config view -o jsonpath='{.users[?(@.name == "e2e")].user.password}'
 
 kubectl config view -o jsonpath='{.users[].name}' # display the first user
 kubectl config view -o jsonpath='{.users[*].name}' # get a list of users
-kubectl config get-contexts # display list of contexts
+kubectl config get-contexts # display list of contexts
 kubectl config current-context # display the current-context
 kubectl config use-context my-cluster-name # set the default context to my-cluster-name
 
@@ -92,10 +92,10 @@ kubectl apply -f https://git.io/vPieo # create resource(s) from url
 kubectl create deployment nginx --image=nginx # start a single instance of nginx
 
 # create a Job which prints "Hello World"
-kubectl create job hello --image=busybox -- echo "Hello World"
+kubectl create job hello --image=busybox -- echo "Hello World"
 
 # create a CronJob that prints "Hello World" every minute
-kubectl create cronjob hello --image=busybox --schedule="*/1 * * * *" -- echo "Hello World"
+kubectl create cronjob hello --image=busybox --schedule="*/1 * * * *" -- echo "Hello World"
 
 kubectl explain pods # get the documentation for pod manifests
 
@@ -173,8 +173,8 @@ kubectl get configmap myconfig \
   -o jsonpath='{.data.ca\.crt}'
 
 # Get all worker nodes (use a selector to exclude results that have a label
-# named 'node-role.kubernetes.io/master')
-kubectl get node --selector='!node-role.kubernetes.io/master'
+# named 'node-role.kubernetes.io/control-plane')
+kubectl get node --selector='!node-role.kubernetes.io/control-plane'
 
 # Get all running pods in the namespace
 kubectl get pods --field-selector=status.phase=Running
@@ -226,7 +226,7 @@ for pod in $(kubectl get po --output=jsonpath={.items..metadata.name}); do echo
 
 ```bash
 kubectl set image deployment/frontend www=image:v2 # Rolling update "www" containers of "frontend" deployment, updating the image
-kubectl rollout history deployment/frontend # Check the history of deployments including the revision
+kubectl rollout history deployment/frontend # Check the history of deployments including the revision
 kubectl rollout undo deployment/frontend # Rollback to the previous deployment
 kubectl rollout undo deployment/frontend --to-revision=2 # Rollback to a specific revision
 kubectl rollout status -w deployment/frontend # Watch rolling update status of "frontend" deployment until completion
@@ -318,7 +318,7 @@ kubectl run nginx --image=nginx # Run pod nginx and write it
 kubectl attach my-pod -i # Attach to Running Container
 kubectl port-forward my-pod 5000:6000 # Listen on port 5000 on the local machine and forward to port 6000 on my-pod
 kubectl exec my-pod -- ls / # Run command in existing pod (1 container case)
-kubectl exec --stdin --tty my-pod -- /bin/sh # Interactive shell access to a running pod (1 container case)
+kubectl exec --stdin --tty my-pod -- /bin/sh # Interactive shell access to a running pod (1 container case)
 kubectl exec my-pod -c my-container -- ls / # Run command in existing pod (multi-container case)
 kubectl top pod POD_NAME --containers # Show metrics for a given pod and its containers
 kubectl top pod POD_NAME --sort-by=cpu # Show metrics for a given pod and sort it by 'cpu' or 'memory'
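The `--selector='!node-role.kubernetes.io/control-plane'` example in the hunk above relies on kubectl's existence-based selector: `!<key>` matches objects where the label key is absent, regardless of value. A minimal sketch of that semantics in Python (the node data here is made up for illustration):

```python
# Sketch of kubectl's '!<key>' label-selector semantics: an object matches
# when the label key is entirely absent, whatever its value would be.
def matches_absent(labels: dict, key: str) -> bool:
    return key not in labels

nodes = [  # hypothetical node -> labels mapping
    {"name": "cp-1", "labels": {"node-role.kubernetes.io/control-plane": ""}},
    {"name": "worker-1", "labels": {"kubernetes.io/os": "linux"}},
]

workers = [n["name"] for n in nodes
           if matches_absent(n["labels"], "node-role.kubernetes.io/control-plane")]
print(workers)  # only nodes without the control-plane label remain
```

Note that the control-plane label matches even though its value is the empty string, which is why exclusion must test for key presence, not value.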

content/en/docs/reference/setup-tools/kubeadm/implementation-details.md

Lines changed: 3 additions & 2 deletions
@@ -318,11 +318,12 @@ Please note that:
 
 As soon as the control plane is available, kubeadm executes following actions:
 
-- Labels the node as control-plane with `node-role.kubernetes.io/master=""`
-- Taints the node with `node-role.kubernetes.io/master:NoSchedule`
+- Labels the node as control-plane with `node-role.kubernetes.io/control-plane=""`
+- Taints the node with `node-role.kubernetes.io/master:NoSchedule` and `node-role.kubernetes.io/control-plane:NoSchedule`
 
 Please note that:
 
+1. The `node-role.kubernetes.io/master` taint is deprecated and will be removed in kubeadm version 1.25
 1. Mark control-plane phase can be invoked individually with the [`kubeadm init phase mark-control-plane`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-mark-control-plane) command
 
 ### Configure TLS-Bootstrapping for node joining
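Per the dual-taint behavior this hunk documents, a kubeadm 1.24 control plane node would carry both taints on its Node object, roughly as below (a sketch of `kubectl get node -o yaml` output; the surrounding fields are elided):

```yaml
# Sketch: taints on a kubeadm 1.24 control plane node, per the text above.
spec:
  taints:
  - key: node-role.kubernetes.io/master        # deprecated, to be removed in 1.25
    effect: NoSchedule
  - key: node-role.kubernetes.io/control-plane
    effect: NoSchedule
```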

content/en/docs/reference/setup-tools/kubeadm/kubeadm-init-phase.md

Lines changed: 1 addition & 1 deletion
@@ -105,7 +105,7 @@ By default the certs and encryption key expire after two hours.
 
 ## kubeadm init phase mark-control-plane {#cmd-phase-mark-control-plane}
 
-Use the following phase to label and taint the node with the `node-role.kubernetes.io/master=""` key-value pair.
+Use the following phase to label and taint the node as a control plane node.
 
 {{< tabs name="tab-mark-control-plane" >}}
 {{< tab name="mark-control-plane" include="generated/kubeadm_init_phase_mark-control-plane.md" />}}

content/en/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md

Lines changed: 13 additions & 9 deletions
@@ -285,26 +285,30 @@ for `kubeadm`.
 
 ### Control plane node isolation
 
-By default, your cluster will not schedule Pods on the control-plane node for security
-reasons. If you want to be able to schedule Pods on the control-plane node, for example for a
-single-machine Kubernetes cluster for development, run:
+By default, your cluster will not schedule Pods on the control plane nodes for security
+reasons. If you want to be able to schedule Pods on the control plane nodes,
+for example for a single machine Kubernetes cluster, run:
 
 ```bash
-kubectl taint nodes --all node-role.kubernetes.io/master-
+kubectl taint nodes --all node-role.kubernetes.io/control-plane- node-role.kubernetes.io/master-
 ```
 
-With output looking something like:
+The output will look something like:
 
 ```
 node "test-01" untainted
-taint "node-role.kubernetes.io/master:" not found
-taint "node-role.kubernetes.io/master:" not found
+...
 ```
 
-This will remove the `node-role.kubernetes.io/master` taint from any nodes that
-have it, including the control-plane node, meaning that the scheduler will then be able
+This will remove the `node-role.kubernetes.io/control-plane` and
+`node-role.kubernetes.io/master` taints from any nodes that have them,
+including the control plane nodes, meaning that the scheduler will then be able
 to schedule Pods everywhere.
 
+{{< note >}}
+The `node-role.kubernetes.io/master` taint is deprecated and kubeadm will stop using it in version 1.25.
+{{< /note >}}
+
 ### Joining your nodes {#join-nodes}
 
 The nodes are where your workloads (containers and Pods, etc) run. To add new nodes to your cluster do the following for each machine:
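The isolation described above follows from the NoSchedule rule: a Pod can only land on a node if it tolerates every NoSchedule taint the node carries, so untainting the control plane nodes makes them schedulable for ordinary Pods. A simplified sketch of that check (keys only; real matching also considers values, effects, and operators):

```python
# Simplified sketch of the NoSchedule rule: a pod is admissible on a node
# only if every NoSchedule taint key on the node is tolerated by the pod.
def schedulable(node_taints: list, pod_tolerations: list) -> bool:
    return all(t in pod_tolerations for t in node_taints)

cp_taints = ["node-role.kubernetes.io/control-plane",
             "node-role.kubernetes.io/master"]

print(schedulable(cp_taints, []))  # a pod with no tolerations is rejected
cp_taints.clear()                  # the effect of `kubectl taint nodes --all ...-`
print(schedulable(cp_taints, []))  # with no taints left, the pod is admissible
```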

content/en/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm.md

Lines changed: 1 addition & 1 deletion
@@ -351,7 +351,7 @@ A known solution is to patch the kube-proxy DaemonSet to allow scheduling it on
 nodes regardless of their conditions, keeping it off of other nodes until their initial guarding
 conditions abate:
 ```
-kubectl -n kube-system patch ds kube-proxy -p='{ "spec": { "template": { "spec": { "tolerations": [ { "key": "CriticalAddonsOnly", "operator": "Exists" }, { "effect": "NoSchedule", "key": "node-role.kubernetes.io/master" } ] } } } }'
+kubectl -n kube-system patch ds kube-proxy -p='{ "spec": { "template": { "spec": { "tolerations": [ { "key": "CriticalAddonsOnly", "operator": "Exists" }, { "effect": "NoSchedule", "key": "node-role.kubernetes.io/master" }, { "effect": "NoSchedule", "key": "node-role.kubernetes.io/control-plane" } ] } } } }'
 ```
 
 The tracking issue for this problem is [here](https://github.com/kubernetes/kubeadm/issues/1027).
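The one-line patch payload in the hunk above, expanded here for readability (semantically identical JSON):

```json
{
  "spec": {
    "template": {
      "spec": {
        "tolerations": [
          { "key": "CriticalAddonsOnly", "operator": "Exists" },
          { "effect": "NoSchedule", "key": "node-role.kubernetes.io/master" },
          { "effect": "NoSchedule", "key": "node-role.kubernetes.io/control-plane" }
        ]
      }
    }
  }
}
```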

content/en/examples/admin/cloud/ccm-example.yaml

Lines changed: 6 additions & 2 deletions
@@ -59,9 +59,13 @@ spec:
       - key: node.cloudprovider.kubernetes.io/uninitialized
         value: "true"
         effect: NoSchedule
-      # this is to have the daemonset runnable on master nodes
-      # the taint may vary depending on your cluster setup
+      # these tolerations are to have the daemonset runnable on control plane nodes
+      # remove them if your control plane nodes should not run pods
+      - key: node-role.kubernetes.io/control-plane
+        operator: Exists
+        effect: NoSchedule
       - key: node-role.kubernetes.io/master
+        operator: Exists
         effect: NoSchedule
       # this is to restrict CCM to only run on master nodes
       # the node selector may vary depending on your cluster setup

content/en/examples/controllers/daemonset.yaml

Lines changed: 5 additions & 2 deletions
@@ -15,8 +15,11 @@ spec:
       name: fluentd-elasticsearch
     spec:
       tolerations:
-      # this toleration is to have the daemonset runnable on master nodes
-      # remove it if your masters can't run pods
+      # these tolerations are to have the daemonset runnable on control plane nodes
+      # remove them if your control plane nodes should not run pods
+      - key: node-role.kubernetes.io/control-plane
+        operator: Exists
+        effect: NoSchedule
       - key: node-role.kubernetes.io/master
         operator: Exists
         effect: NoSchedule

content/en/examples/controllers/fluentd-daemonset-update.yaml

Lines changed: 6 additions & 2 deletions
@@ -19,9 +19,13 @@ spec:
       name: fluentd-elasticsearch
     spec:
       tolerations:
-      # this toleration is to have the daemonset runnable on master nodes
-      # remove it if your masters can't run pods
+      # these tolerations are to have the daemonset runnable on control plane nodes
+      # remove them if your control plane nodes should not run pods
+      - key: node-role.kubernetes.io/control-plane
+        operator: Exists
+        effect: NoSchedule
       - key: node-role.kubernetes.io/master
+        operator: Exists
         effect: NoSchedule
       containers:
       - name: fluentd-elasticsearch

content/en/examples/controllers/fluentd-daemonset.yaml

Lines changed: 6 additions & 2 deletions
@@ -19,9 +19,13 @@ spec:
       name: fluentd-elasticsearch
     spec:
       tolerations:
-      # this toleration is to have the daemonset runnable on master nodes
-      # remove it if your masters can't run pods
+      # these tolerations are to have the daemonset runnable on control plane nodes
+      # remove them if your control plane nodes should not run pods
+      - key: node-role.kubernetes.io/control-plane
+        operator: Exists
+        effect: NoSchedule
       - key: node-role.kubernetes.io/master
+        operator: Exists
         effect: NoSchedule
       containers:
       - name: fluentd-elasticsearch
