
Commit 096a657

aritraghosh and Tim Bannister authored
Refactoring node status to new page (#42378)
* Refactoring node status to new page
  - Created a new page in architecture in node for the status
  - Removed the current node status from concepts and moved it there
* Update content/en/docs/reference/node/_index.md (Co-authored-by: Tim Bannister <[email protected]>)
* Update content/en/docs/reference/node/node-status.md (Co-authored-by: Tim Bannister <[email protected]>)
* Update content/en/docs/concepts/architecture/nodes.md (Co-authored-by: Tim Bannister <[email protected]>)
* Update node-status.md
* Update node-status.md
* Update content/en/docs/reference/node/node-status.md (Co-authored-by: Tim Bannister <[email protected]>)

---------

Co-authored-by: Tim Bannister <[email protected]>
1 parent fc9493a commit 096a657

File tree

3 files changed: +147 additions, -107 deletions


content/en/docs/concepts/architecture/nodes.md

Lines changed: 7 additions & 107 deletions
@@ -163,132 +163,32 @@ that should run on the Node even if it is being drained of workload applications
 
 A Node's status contains the following information:
 
-* [Addresses](#addresses)
-* [Conditions](#condition)
-* [Capacity and Allocatable](#capacity)
-* [Info](#info)
+* [Addresses](/docs/concepts/node/node-status/#addresses)
+* [Conditions](/docs/concepts/node/node-status/#condition)
+* [Capacity and Allocatable](/docs/concepts/node/node-status/#capacity)
+* [Info](/docs/concepts/node/node-status/#info)
 
 You can use `kubectl` to view a Node's status and other details:
 
 ```shell
 kubectl describe node <insert-node-name-here>
 ```
 
-Each section of the output is described below.
+See [Node Status](/docs/concepts/node/node-status) for more details
 
-### Addresses
-
-The usage of these fields varies depending on your cloud provider or bare metal configuration.
-
-* HostName: The hostname as reported by the node's kernel. Can be overridden via the kubelet
-  `--hostname-override` parameter.
-* ExternalIP: Typically the IP address of the node that is externally routable (available from
-  outside the cluster).
-* InternalIP: Typically the IP address of the node that is routable only within the cluster.
-
-
-### Conditions {#condition}
-
-The `conditions` field describes the status of all `Running` nodes. Examples of conditions include:
-
-{{< table caption = "Node conditions, and a description of when each condition applies." >}}
-| Node Condition       | Description |
-|----------------------|-------------|
-| `Ready` | `True` if the node is healthy and ready to accept pods, `False` if the node is not healthy and is not accepting pods, and `Unknown` if the node controller has not heard from the node in the last `node-monitor-grace-period` (default is 40 seconds) |
-| `DiskPressure` | `True` if pressure exists on the disk size—that is, if the disk capacity is low; otherwise `False` |
-| `MemoryPressure` | `True` if pressure exists on the node memory—that is, if the node memory is low; otherwise `False` |
-| `PIDPressure` | `True` if pressure exists on the processes—that is, if there are too many processes on the node; otherwise `False` |
-| `NetworkUnavailable` | `True` if the network for the node is not correctly configured, otherwise `False` |
-{{< /table >}}
-
-{{< note >}}
-If you use command-line tools to print details of a cordoned Node, the Condition includes
-`SchedulingDisabled`. `SchedulingDisabled` is not a Condition in the Kubernetes API; instead,
-cordoned nodes are marked Unschedulable in their spec.
-{{< /note >}}
-
-In the Kubernetes API, a node's condition is represented as part of the `.status`
-of the Node resource. For example, the following JSON structure describes a healthy node:
-
-```json
-"conditions": [
-  {
-    "type": "Ready",
-    "status": "True",
-    "reason": "KubeletReady",
-    "message": "kubelet is posting ready status",
-    "lastHeartbeatTime": "2019-06-05T18:38:35Z",
-    "lastTransitionTime": "2019-06-05T11:41:27Z"
-  }
-]
-```
-
-When problems occur on nodes, the Kubernetes control plane automatically creates
-[taints](/docs/concepts/scheduling-eviction/taint-and-toleration/) that match the conditions
-affecting the node. An example of this is when the `status` of the Ready condition
-remains `Unknown` or `False` for longer than the kube-controller-manager's `NodeMonitorGracePeriod`,
-which defaults to 40 seconds. This will cause either an `node.kubernetes.io/unreachable` taint, for an `Unknown` status,
-or a `node.kubernetes.io/not-ready` taint, for a `False` status, to be added to the Node.
-
-These taints affect pending pods as the scheduler takes the Node's taints into consideration when
-assigning a pod to a Node. Existing pods scheduled to the node may be evicted due to the application
-of `NoExecute` taints. Pods may also have {{< glossary_tooltip text="tolerations" term_id="toleration" >}} that let
-them schedule to and continue running on a Node even though it has a specific taint.
-
-See [Taint Based Evictions](/docs/concepts/scheduling-eviction/taint-and-toleration/#taint-based-evictions) and
-[Taint Nodes by Condition](/docs/concepts/scheduling-eviction/taint-and-toleration/#taint-nodes-by-condition)
-for more details.
-
-### Capacity and Allocatable {#capacity}
-
-Describes the resources available on the node: CPU, memory, and the maximum
-number of pods that can be scheduled onto the node.
-
-The fields in the capacity block indicate the total amount of resources that a
-Node has. The allocatable block indicates the amount of resources on a
-Node that is available to be consumed by normal Pods.
-
-You may read more about capacity and allocatable resources while learning how
-to [reserve compute resources](/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable)
-on a Node.
-
-### Info
-
-Describes general information about the node, such as kernel version, Kubernetes
-version (kubelet and kube-proxy version), container runtime details, and which
-operating system the node uses.
-The kubelet gathers this information from the node and publishes it into
-the Kubernetes API.
-
-## Heartbeats
+## Node heartbeats
 
 Heartbeats, sent by Kubernetes nodes, help your cluster determine the
 availability of each node, and to take action when failures are detected.
 
 For nodes there are two forms of heartbeats:
 
-* updates to the `.status` of a Node
+* Updates to the [`.status`](/docs/concepts/node/node-status/) of a Node
 * [Lease](/docs/concepts/architecture/leases/) objects
   within the `kube-node-lease`
   {{< glossary_tooltip term_id="namespace" text="namespace">}}.
   Each Node has an associated Lease object.
 
-Compared to updates to `.status` of a Node, a Lease is a lightweight resource.
-Using Leases for heartbeats reduces the performance impact of these updates
-for large clusters.
-
-The kubelet is responsible for creating and updating the `.status` of Nodes,
-and for updating their related Leases.
-
-- The kubelet updates the node's `.status` either when there is change in status
-  or if there has been no update for a configured interval. The default interval
-  for `.status` updates to Nodes is 5 minutes, which is much longer than the 40
-  second default timeout for unreachable nodes.
-- The kubelet creates and then updates its Lease object every 10 seconds
-  (the default update interval). Lease updates occur independently from
-  updates to the Node's `.status`. If the Lease update fails, the kubelet retries,
-  using exponential backoff that starts at 200 milliseconds and capped at 7 seconds.
-
 ## Node controller
 
 The node {{< glossary_tooltip text="controller" term_id="controller" >}} is a

content/en/docs/reference/node/_index.md

Lines changed: 2 additions & 0 deletions
@@ -9,6 +9,8 @@ This section contains the following reference topics about nodes:
 * the kubelet's [checkpoint API](/docs/reference/node/kubelet-checkpoint-api/)
 * a list of [Articles on dockershim Removal and on Using CRI-compatible Runtimes](/docs/reference/node/topics-on-dockershim-and-cri-compatible-runtimes/)
 
+* [Node `.status` information](/docs/reference/node/node-status/)
+
 You can also read node reference details from elsewhere in the
 Kubernetes documentation, including:
 
content/en/docs/reference/node/node-status.md
Lines changed: 138 additions & 0 deletions
@@ -0,0 +1,138 @@
---
content_type: reference
title: Node Status
weight: 80
---
<!-- overview -->

The status of a [node](/docs/concepts/architecture/nodes/) in Kubernetes is a critical aspect of managing a Kubernetes cluster. This page covers the basics of monitoring and maintaining node status to help you keep your cluster healthy and stable.

## Node status fields

A Node's status contains the following information:

* [Addresses](#addresses)
* [Conditions](#condition)
* [Capacity and Allocatable](#capacity)
* [Info](#info)

You can use `kubectl` to view a Node's status and other details:

```shell
kubectl describe node <insert-node-name-here>
```

Each section of the output is described below.
## Addresses

The usage of these fields varies depending on your cloud provider or bare metal configuration.

* HostName: The hostname as reported by the node's kernel. Can be overridden via the kubelet
  `--hostname-override` parameter.
* ExternalIP: Typically the IP address of the node that is externally routable (available from
  outside the cluster).
* InternalIP: Typically the IP address of the node that is routable only within the cluster.
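As an illustration, once you have retrieved a Node object (for example with `kubectl get node <name> -o json`), the `.status.addresses` list can be filtered by type. A minimal Python sketch, using hypothetical sample data rather than output from a real cluster:

```python
# Pick an address of a given type out of a Node's `.status.addresses` list.
# The sample `status` dict below is hypothetical illustration data.

def address_of(node_status, addr_type):
    """Return the first address of the requested type, or None."""
    for addr in node_status.get("addresses", []):
        if addr["type"] == addr_type:
            return addr["address"]
    return None

status = {
    "addresses": [
        {"type": "Hostname", "address": "worker-1"},
        {"type": "InternalIP", "address": "10.0.0.12"},
    ]
}

print(address_of(status, "InternalIP"))  # 10.0.0.12
```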
## Conditions {#condition}

The `conditions` field describes the status of all `Running` nodes. Examples of conditions include:

{{< table caption = "Node conditions, and a description of when each condition applies." >}}
| Node Condition       | Description |
|----------------------|-------------|
| `Ready` | `True` if the node is healthy and ready to accept pods, `False` if the node is not healthy and is not accepting pods, and `Unknown` if the node controller has not heard from the node in the last `node-monitor-grace-period` (default is 40 seconds) |
| `DiskPressure` | `True` if pressure exists on the disk size—that is, if the disk capacity is low; otherwise `False` |
| `MemoryPressure` | `True` if pressure exists on the node memory—that is, if the node memory is low; otherwise `False` |
| `PIDPressure` | `True` if pressure exists on the processes—that is, if there are too many processes on the node; otherwise `False` |
| `NetworkUnavailable` | `True` if the network for the node is not correctly configured, otherwise `False` |
{{< /table >}}

{{< note >}}
If you use command-line tools to print details of a cordoned Node, the Condition includes
`SchedulingDisabled`. `SchedulingDisabled` is not a Condition in the Kubernetes API; instead,
cordoned nodes are marked Unschedulable in their spec.
{{< /note >}}
In the Kubernetes API, a node's condition is represented as part of the `.status`
of the Node resource. For example, the following JSON structure describes a healthy node:

```json
"conditions": [
  {
    "type": "Ready",
    "status": "True",
    "reason": "KubeletReady",
    "message": "kubelet is posting ready status",
    "lastHeartbeatTime": "2019-06-05T18:38:35Z",
    "lastTransitionTime": "2019-06-05T11:41:27Z"
  }
]
```
When problems occur on nodes, the Kubernetes control plane automatically creates
[taints](/docs/concepts/scheduling-eviction/taint-and-toleration/) that match the conditions
affecting the node. For example, when the `status` of the Ready condition
remains `Unknown` or `False` for longer than the kube-controller-manager's `NodeMonitorGracePeriod`
(which defaults to 40 seconds), either a `node.kubernetes.io/unreachable` taint (for an `Unknown` status)
or a `node.kubernetes.io/not-ready` taint (for a `False` status) is added to the Node.

These taints affect pending pods, as the scheduler takes the Node's taints into consideration when
assigning a pod to a Node. Existing pods scheduled to the node may be evicted due to the application
of `NoExecute` taints. Pods may also have {{< glossary_tooltip text="tolerations" term_id="toleration" >}} that let
them schedule to and continue running on a Node even though it has a specific taint.

See [Taint Based Evictions](/docs/concepts/scheduling-eviction/taint-and-toleration/#taint-based-evictions) and
[Taint Nodes by Condition](/docs/concepts/scheduling-eviction/taint-and-toleration/#taint-nodes-by-condition)
for more details.
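The condition-to-taint behaviour described above can be sketched as a small decision function. This is a simplified illustration of the stated rule, not the actual node-controller code:

```python
# Simplified illustration: which taint key the control plane adds when a
# node's Ready condition has been in a given status for longer than
# NodeMonitorGracePeriod (default 40 seconds). Not kubelet/controller source.

def taint_for_ready_condition(status):
    if status == "Unknown":
        return "node.kubernetes.io/unreachable"
    if status == "False":
        return "node.kubernetes.io/not-ready"
    return None  # "True": the node is healthy, so no taint is added

print(taint_for_ready_condition("Unknown"))  # node.kubernetes.io/unreachable
print(taint_for_ready_condition("True"))     # None
```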
## Capacity and Allocatable {#capacity}

Describes the resources available on the node: CPU, memory, and the maximum
number of pods that can be scheduled onto the node.

The fields in the capacity block indicate the total amount of resources that a
Node has. The allocatable block indicates the amount of resources on a
Node that is available to be consumed by normal Pods.

You may read more about capacity and allocatable resources while learning how
to [reserve compute resources](/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable)
on a Node.
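The relationship between the two blocks can be shown numerically: what the node reserves for system daemons and the kubelet is the difference between capacity and allocatable. The figures below are hypothetical, and use plain numbers (cores, bytes) instead of the quantity strings (such as `"4"` or `"16228564Ki"`) that a real Node object reports:

```python
# Hypothetical capacity/allocatable figures for a single node.
capacity = {"cpu": 4.0, "memory": 16 * 1024**3, "pods": 110}
allocatable = {"cpu": 3.5, "memory": 14 * 1024**3, "pods": 110}

# Resources that normal Pods cannot consume: capacity minus allocatable.
reserved = {k: capacity[k] - allocatable[k] for k in capacity}

print(reserved["cpu"])  # 0.5 cores held back from Pod scheduling
```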
## Info

Describes general information about the node, such as kernel version, Kubernetes
version (kubelet and kube-proxy version), container runtime details, and which
operating system the node uses.

The kubelet gathers this information from the node and publishes it into
the Kubernetes API.
## Heartbeats

Heartbeats, sent by Kubernetes nodes, help your cluster determine the
availability of each node and take action when failures are detected.

For nodes there are two forms of heartbeats:

* updates to the `.status` of a Node
* [Lease](/docs/concepts/architecture/leases/) objects within the `kube-node-lease`
  {{< glossary_tooltip term_id="namespace" text="namespace">}}.
  Each Node has an associated Lease object.

Compared to updates to the `.status` of a Node, a Lease is a lightweight resource.
Using Leases for heartbeats reduces the performance impact of these updates
for large clusters.

The kubelet is responsible for creating and updating the `.status` of Nodes,
and for updating their related Leases.

- The kubelet updates the node's `.status` either when there is a change in status
  or if there has been no update for a configured interval. The default interval
  for `.status` updates to Nodes is 5 minutes, which is much longer than the
  40-second default timeout for unreachable nodes.
- The kubelet creates and then updates its Lease object every 10 seconds
  (the default update interval). Lease updates occur independently of
  updates to the Node's `.status`. If a Lease update fails, the kubelet retries,
  using exponential backoff that starts at 200 milliseconds and is capped at 7 seconds.
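The retry behaviour described for failed Lease updates can be sketched as a simple backoff sequence. This illustrates only the stated parameters (200 ms start, 7 s cap, doubling assumed) and is not the kubelet's actual implementation:

```python
# Illustrative exponential backoff: start at 200 ms, double on each retry,
# never exceed the 7-second cap. Assumes doubling; not kubelet source code.

def lease_retry_delays(attempts):
    """Return the delays (in seconds) for the first `attempts` retries."""
    delay = 0.2  # initial backoff: 200 milliseconds
    delays = []
    for _ in range(attempts):
        delays.append(delay)
        delay = min(delay * 2, 7.0)  # double, but respect the 7 s cap
    return delays

print(lease_retry_delays(7))  # [0.2, 0.4, 0.8, 1.6, 3.2, 6.4, 7.0]
```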
