Commit 672ab40

Merge pull request #32260 from tengqm/fix-links-4
Fix links in the nodes page
2 parents bef043a + e3bace5

File tree: 1 file changed (+47, -29 lines)

content/en/docs/concepts/architecture/nodes.md (47 additions & 29 deletions)
@@ -33,9 +33,9 @@ There are two main ways to have Nodes added to the {{< glossary_tooltip text="AP
 1. The kubelet on a node self-registers to the control plane
 2. You (or another human user) manually add a Node object
 
-After you create a Node {{< glossary_tooltip text="object" term_id="object" >}}, or the kubelet on a node self-registers, the
-control plane checks whether the new Node object is valid. For example, if you
-try to create a Node from the following JSON manifest:
+After you create a Node {{< glossary_tooltip text="object" term_id="object" >}},
+or the kubelet on a node self-registers, the control plane checks whether the new Node object is
+valid. For example, if you try to create a Node from the following JSON manifest:
 
 ```json
 {
@@ -85,19 +85,23 @@ register itself with the API server. This is the preferred pattern, used by mos
 
 For self-registration, the kubelet is started with the following options:
 
-- `--kubeconfig` - Path to credentials to authenticate itself to the API server.
-- `--cloud-provider` - How to talk to a {{< glossary_tooltip text="cloud provider" term_id="cloud-provider" >}} to read metadata about itself.
-- `--register-node` - Automatically register with the API server.
-- `--register-with-taints` - Register the node with the given list of {{< glossary_tooltip text="taints" term_id="taint" >}} (comma separated `<key>=<value>:<effect>`).
+- `--kubeconfig` - Path to credentials to authenticate itself to the API server.
+- `--cloud-provider` - How to talk to a {{< glossary_tooltip text="cloud provider" term_id="cloud-provider" >}}
+  to read metadata about itself.
+- `--register-node` - Automatically register with the API server.
+- `--register-with-taints` - Register the node with the given list of
+  {{< glossary_tooltip text="taints" term_id="taint" >}} (comma separated `<key>=<value>:<effect>`).
 
-  No-op if `register-node` is false.
-- `--node-ip` - IP address of the node.
-- `--node-labels` - {{< glossary_tooltip text="Labels" term_id="label" >}} to add when registering the node in the cluster (see label restrictions enforced by the [NodeRestriction admission plugin](/docs/reference/access-authn-authz/admission-controllers/#noderestriction)).
-- `--node-status-update-frequency` - Specifies how often kubelet posts its node status to the API server.
+  No-op if `register-node` is false.
+- `--node-ip` - IP address of the node.
+- `--node-labels` - {{< glossary_tooltip text="Labels" term_id="label" >}} to add when registering the node
+  in the cluster (see label restrictions enforced by the
+  [NodeRestriction admission plugin](/docs/reference/access-authn-authz/admission-controllers/#noderestriction)).
+- `--node-status-update-frequency` - Specifies how often kubelet posts its node status to the API server.
 
 When the [Node authorization mode](/docs/reference/access-authn-authz/node/) and
-[NodeRestriction admission plugin](/docs/reference/access-authn-authz/admission-controllers/#noderestriction) are enabled,
-kubelets are only authorized to create/modify their own Node resource.
+[NodeRestriction admission plugin](/docs/reference/access-authn-authz/admission-controllers/#noderestriction)
+are enabled, kubelets are only authorized to create/modify their own Node resource.
 
 {{< note >}}
 As mentioned in the [Node name uniqueness](#node-name-uniqueness) section,
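As a concrete sketch of the self-registration flags listed in this hunk, a kubelet invocation might be assembled along these lines. The kubeconfig path, taint, IP address, and label values are hypothetical examples, not defaults:

```shell
# Build an illustrative flag set for a self-registering kubelet.
# All values below are hypothetical; substitute your own paths and labels.
KUBELET_ARGS="--kubeconfig=/var/lib/kubelet/kubeconfig \
--register-node=true \
--register-with-taints=dedicated=ingress:NoSchedule \
--node-ip=10.0.0.11 \
--node-labels=example.com/zone=zone-a"

# Print the command that would be run (a sketch; not meant to be
# executed outside an actual node).
echo "kubelet $KUBELET_ARGS"
```

With the NodeRestriction admission plugin enabled, the labels passed via `--node-labels` must satisfy the restrictions linked above.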
@@ -168,8 +172,10 @@ Each section of the output is described below.
 
 The usage of these fields varies depending on your cloud provider or bare metal configuration.
 
-* HostName: The hostname as reported by the node's kernel. Can be overridden via the kubelet `--hostname-override` parameter.
-* ExternalIP: Typically the IP address of the node that is externally routable (available from outside the cluster).
+* HostName: The hostname as reported by the node's kernel. Can be overridden via the kubelet
+  `--hostname-override` parameter.
+* ExternalIP: Typically the IP address of the node that is externally routable (available from
+  outside the cluster).
 * InternalIP: Typically the IP address of the node that is routable only within the cluster.
 
 
@@ -289,7 +295,6 @@ and for updating their related Leases.
 updates to the Node's `.status`. If the Lease update fails, the kubelet retries,
 using exponential backoff that starts at 200 milliseconds and capped at 7 seconds.
 
-
 ## Node controller
 
 The node {{< glossary_tooltip text="controller" term_id="controller" >}} is a
@@ -306,6 +311,7 @@ controller deletes the node from its list of nodes.
 
 The third is monitoring the nodes' health. The node controller is
 responsible for:
+
 - In the case that a node becomes unreachable, updating the NodeReady condition
   of within the Node's `.status`. In this case the node controller sets the
   NodeReady condition to `ConditionUnknown`.
@@ -327,6 +333,7 @@ The node eviction behavior changes when a node in a given availability zone
 becomes unhealthy. The node controller checks what percentage of nodes in the zone
 are unhealthy (NodeReady condition is `ConditionUnknown` or `ConditionFalse`) at
 the same time:
+
 - If the fraction of unhealthy nodes is at least `--unhealthy-zone-threshold`
   (default 0.55), then the eviction rate is reduced.
 - If the cluster is small (i.e. has less than or equal to
@@ -391,7 +398,9 @@ for more information.
 
 The kubelet attempts to detect node system shutdown and terminates pods running on the node.
 
-Kubelet ensures that pods follow the normal [pod termination process](/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination) during the node shutdown.
+Kubelet ensures that pods follow the normal
+[pod termination process](/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination)
+during the node shutdown.
 
 The Graceful node shutdown feature depends on systemd since it takes advantage of
 [systemd inhibitor locks](https://www.freedesktop.org/wiki/Software/systemd/inhibit/) to
@@ -404,18 +413,26 @@ enabled by default in 1.21.
 Note that by default, both configuration options described below,
 `shutdownGracePeriod` and `shutdownGracePeriodCriticalPods` are set to zero,
 thus not activating Graceful node shutdown functionality.
-To activate the feature, the two kubelet config settings should be configured appropriately and set to non-zero values.
+To activate the feature, the two kubelet config settings should be configured appropriately and
+set to non-zero values.
 
 During a graceful shutdown, kubelet terminates pods in two phases:
 
 1. Terminate regular pods running on the node.
-2. Terminate [critical pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical) running on the node.
+2. Terminate [critical pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical)
+   running on the node.
+
+Graceful node shutdown feature is configured with two
+[`KubeletConfiguration`](/docs/tasks/administer-cluster/kubelet-config-file/) options:
 
-Graceful node shutdown feature is configured with two [`KubeletConfiguration`](/docs/tasks/administer-cluster/kubelet-config-file/) options:
 * `shutdownGracePeriod`:
-  * Specifies the total duration that the node should delay the shutdown by. This is the total grace period for pod termination for both regular and [critical pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical).
+  * Specifies the total duration that the node should delay the shutdown by. This is the total
+    grace period for pod termination for both regular and
+    [critical pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical).
 * `shutdownGracePeriodCriticalPods`:
-  * Specifies the duration used to terminate [critical pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical) during a node shutdown. This value should be less than `shutdownGracePeriod`.
+  * Specifies the duration used to terminate
+    [critical pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical)
+    during a node shutdown. This value should be less than `shutdownGracePeriod`.
 
 For example, if `shutdownGracePeriod=30s`, and
 `shutdownGracePeriodCriticalPods=10s`, kubelet will delay the node shutdown by
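The worked example above (`shutdownGracePeriod=30s`, `shutdownGracePeriodCriticalPods=10s`) corresponds roughly to a kubelet config file like the following sketch (not a complete configuration):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Total shutdown delay; the final shutdownGracePeriodCriticalPods
# seconds of it are reserved for critical pods.
shutdownGracePeriod: 30s
shutdownGracePeriodCriticalPods: 10s
```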
@@ -443,8 +460,8 @@ To provide more flexibility during graceful node shutdown around the ordering
 of pods during shutdown, graceful node shutdown honors the PriorityClass for
 Pods, provided that you enabled this feature in your cluster. The feature
 allows cluster administers to explicitly define the ordering of pods
-during graceful node shutdown based on [priority
-classes](docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass).
+during graceful node shutdown based on
+[priority classes](/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass).
 
 The [Graceful Node Shutdown](#graceful-node-shutdown) feature, as described
 above, shuts down pods in two phases, non-critical pods, followed by critical
@@ -457,8 +474,8 @@ graceful node shutdown in multiple phases, each phase shutting down a
 particular priority class of pods. The kubelet can be configured with the exact
 phases and shutdown time per phase.
 
-Assuming the following custom pod [priority
-classes](docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass)
+Assuming the following custom pod
+[priority classes](/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass)
 in a cluster,
 
 |Pod priority class name|Pod priority class value|
@@ -492,7 +509,7 @@ shutdownGracePeriodByPodPriority:
     shutdownGracePeriodSeconds: 60
 ```
 
-The above table implies that any pod with priority value >= 100000 will get
+The above table implies that any pod with `priority` value >= 100000 will get
 just 10 seconds to stop, any pod with value >= 10000 and < 100000 will get 180
 seconds to stop, any pod with value >= 1000 and < 10000 will get 120 seconds to stop.
 Finally, all other pods will get 60 seconds to stop.
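Read back from the behavior described above, the full `shutdownGracePeriodByPodPriority` list (whose tail appears in this hunk) would look something like this sketch:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
shutdownGracePeriodByPodPriority:
  - priority: 100000        # pods with priority >= 100000
    shutdownGracePeriodSeconds: 10
  - priority: 10000         # pods with 10000 <= priority < 100000
    shutdownGracePeriodSeconds: 180
  - priority: 1000          # pods with 1000 <= priority < 10000
    shutdownGracePeriodSeconds: 120
  - priority: 0             # all remaining pods
    shutdownGracePeriodSeconds: 60
```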
@@ -507,8 +524,8 @@ example, you could instead use these settings:
 | 0 |60 seconds |
 
 
-In the above case, the pods with custom-class-b will go into the same bucket
-as custom-class-c for shutdown.
+In the above case, the pods with `custom-class-b` will go into the same bucket
+as `custom-class-c` for shutdown.
 
 If there are no pods in a particular range, then the kubelet does not wait
 for pods in that priority range. Instead, the kubelet immediately skips to the
@@ -577,3 +594,4 @@ see [KEP-2400](https://github.com/kubernetes/enhancements/issues/2400) and its
 * Read the [Node](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md#the-kubernetes-node)
   section of the architecture design document.
 * Read about [taints and tolerations](/docs/concepts/scheduling-eviction/taint-and-toleration/).
+