
Commit 2e4dbda

Sebbo92 authored and dchaykin committed
Labelling the node
1 parent d13da6a commit 2e4dbda

File tree

1 file changed: +4 −3 lines changed

README.md

Lines changed: 4 additions & 3 deletions
@@ -109,11 +109,12 @@ to observe the current status the following command can be executed:
 kubectl wait --namespace ingress-nginx --for=condition=ready pod --selector=app.kubernetes.io/component=controller --timeout=120s
 ```

-#### Labelling the worker node
+#### Labelling the node

-We suggest labelling a worker node where the pod is going to be deployed. By default, a pod is deployed on one of several worker nodes you might have in the kind cluster. To make it work, the docker image must be populated on all worker nodes in the cluster (which takes time). Otherwise, you can get into a situation, in which the pod has started on a node where the docker image is missing. Let's work with a dedicated node and safe the time.
+By default, a Kubernetes cluster schedules a pod on any node that has enough resources for the workload. For the pod to become ready as quickly as possible, our docker image would therefore have to be pulled on every node in the cluster, which can take a long time. If the image is not yet present on the node where a new pod is scheduled, the pod needs more time to become ready and healthy.
+For our use case we instead label one node in the cluster so that this node is always used.

-So, we label a worker node with _debug=true_:
+We label the node with _debug=true_:

 ```sh
 kubectl label nodes local-debug-k8s-worker debug=true
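
A label on its own does not pin the pod to that node; the pod spec of the deployment also has to select it, for example through a `nodeSelector` on `debug: "true"` (how this project wires that up is not shown in this hunk). The sketch below is only a quick check that the label from the command above was applied; it reuses the node name `local-debug-k8s-worker` and assumes `kubectl` points at the kind cluster.

```sh
# Quick check: list only the nodes that carry the debug=true label
kubectl get nodes -l debug=true

# Remove the label again once debugging is finished (a trailing "-" deletes a label)
kubectl label nodes local-debug-k8s-worker debug-
```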
