
Commit 6f1e5a9

Merge pull request #51849 from kubernetes/dev-1.34
Official 1.34 Release Docs
2 parents c70953d + 7179839

113 files changed, +2074 -431 lines


content/en/docs/concepts/cluster-administration/node-shutdown.md

Lines changed: 43 additions & 29 deletions
@@ -16,30 +16,66 @@ either **graceful** or **non-graceful**.
 
 ## Graceful node shutdown {#graceful-node-shutdown}
 
-{{< feature-state feature_gate_name="GracefulNodeShutdown" >}}
-
 The kubelet attempts to detect node system shutdown and terminates pods running on the node.
 
-kubelet ensures that pods follow the normal
+Kubelet ensures that pods follow the normal
 [pod termination process](/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination)
 during the node shutdown. During node shutdown, the kubelet does not accept new
 Pods (even if those Pods are already bound to the node).
 
+### Enabling graceful node shutdown
+
+{{< tabs name="graceful_shutdown_os" >}}
+{{% tab name="Linux" %}}
+{{< feature-state feature_gate_name="GracefulNodeShutdown" >}}
+
+On Linux, the graceful node shutdown feature is controlled with the `GracefulNodeShutdown`
+[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) which is
+enabled by default in 1.21.
+
+{{< note >}}
 The graceful node shutdown feature depends on systemd since it takes advantage of
 [systemd inhibitor locks](https://www.freedesktop.org/wiki/Software/systemd/inhibit/) to
 delay the node shutdown with a given duration.
+{{</ note >}}
+{{% /tab %}}
 
-Graceful node shutdown is controlled with the `GracefulNodeShutdown`
-[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) which is
-enabled by default in 1.21.
+{{% tab name="Windows" %}}
+{{< feature-state feature_gate_name="WindowsGracefulNodeShutdown" >}}
+
+On Windows, the graceful node shutdown feature is controlled with the `WindowsGracefulNodeShutdown`
+[feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
+which is introduced in 1.32 as an alpha feature. In Kubernetes 1.34 the feature is Beta
+and is enabled by default.
+
+{{< note >}}
+The Windows graceful node shutdown feature depends on kubelet running as a Windows service,
+it will then have a registered [service control handler](https://learn.microsoft.com/en-us/windows/win32/services/service-control-handler-function)
+to delay the preshutdown event with a given duration.
+{{</ note >}}
+
+Windows graceful node shutdown can not be cancelled.
+
+If kubelet is not running as a Windows service, it will not be able to set and monitor
+the [Preshutdown](https://learn.microsoft.com/en-us/windows/win32/api/winsvc/ns-winsvc-service_preshutdown_info) event,
+the node will have to go through the [Non-Graceful Node Shutdown](#non-graceful-node-shutdown) procedure mentioned above.
+
+In the case where the Windows graceful node shutdown feature is enabled, but the kubelet is not
+running as a Windows service, the kubelet will continue running instead of failing. However,
+it will log an error indicating that it needs to be run as a Windows service.
+{{% /tab %}}
+
+{{< /tabs >}}
+
+### Configuring graceful node shutdown
 
 Note that by default, both configuration options described below,
 `shutdownGracePeriod` and `shutdownGracePeriodCriticalPods`, are set to zero,
 thus not activating the graceful node shutdown functionality.
 To activate the feature, both options should be configured appropriately and
 set to non-zero values.
 
-Once systemd detects or is notified of a node shutdown, the kubelet sets a `NotReady` condition on
+Once the kubelet is notified of a node shutdown, it sets a `NotReady` condition on
 the Node, with the `reason` set to `"node is shutting down"`. The kube-scheduler honors this condition
 and does not schedule any Pods onto the affected node; other third-party schedulers are
 expected to follow the same logic. This means that new Pods won't be scheduled onto that node
@@ -273,28 +309,6 @@ via the [Non-Graceful Node Shutdown](#non-graceful-node-shutdown) procedure ment
 
 {{< /note >}}
 
-## Windows Graceful node shutdown {#windows-graceful-node-shutdown}
-
-{{< feature-state feature_gate_name="WindowsGracefulNodeShutdown" >}}
-
-The Windows graceful node shutdown feature depends on kubelet running as a Windows service,
-it will then have a registered [service control handler](https://learn.microsoft.com/en-us/windows/win32/services/service-control-handler-function)
-to delay the preshutdown event with a given duration.
-
-Windows graceful node shutdown is controlled with the `WindowsGracefulNodeShutdown`
-[feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
-which is introduced in 1.32 as an alpha feature.
-
-Windows graceful node shutdown can not be cancelled.
-
-If kubelet is not running as a Windows service, it will not be able to set and monitor
-the [Preshutdown](https://learn.microsoft.com/en-us/windows/win32/api/winsvc/ns-winsvc-service_preshutdown_info) event,
-the node will have to go through the [Non-Graceful Node Shutdown](#non-graceful-node-shutdown) procedure mentioned above.
-
-In the case where the Windows graceful node shutdown feature is enabled, but the kubelet is not
-running as a Windows service, the kubelet will continue running instead of failing. However,
-it will log an error indicating that it needs to be run as a Windows service.
-
 ## {{% heading "whatsnext" %}}
 
 Learn more about the following:
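
The `shutdownGracePeriod` and `shutdownGracePeriodCriticalPods` options referenced in this diff live in the kubelet configuration file. A minimal sketch, with illustrative duration values that are not taken from this commit:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Total time the node holds the shutdown for pod termination (illustrative value).
shutdownGracePeriod: 30s
# Portion of shutdownGracePeriod reserved for critical pods (illustrative value).
shutdownGracePeriodCriticalPods: 10s
```

As the text above notes, both values must be non-zero for graceful node shutdown to take effect.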

content/en/docs/concepts/cluster-administration/system-metrics.md

Lines changed: 6 additions & 5 deletions
@@ -179,11 +179,11 @@ flag to expose these alpha stability metrics.
 
 ### kubelet Pressure Stall Information (PSI) metrics
 
-{{< feature-state for_k8s_version="v1.33" state="alpha" >}}
+{{< feature-state for_k8s_version="v1.34" state="beta" >}}
 
-As an alpha feature, Kubernetes lets you configure kubelet to collect Linux kernel
+As a beta feature, Kubernetes lets you configure kubelet to collect Linux kernel
 [Pressure Stall Information](https://docs.kernel.org/accounting/psi.html)
-(PSI) for CPU, memory and IO usage.
+(PSI) for CPU, memory and I/O usage.
 The information is collected at node, pod and container level.
 The metrics are exposed at the `/metrics/cadvisor` endpoint with the following names:
 
@@ -196,10 +196,11 @@ container_pressure_io_stalled_seconds_total
 container_pressure_io_waiting_seconds_total
 ```
 
-You must enable the `KubeletPSI` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
-to use this feature. The information is also exposed in the
+This feature is enabled by default, by setting the `KubeletPSI` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/). The information is also exposed in the
 [Summary API](/docs/reference/instrumentation/node-metrics#psi).
 
+You can learn how to interpret the PSI metrics in [Understand PSI Metrics](/docs/reference/instrumentation/understand-psi-metrics/).
+
 #### Requirements
 
 Pressure Stall Information requires:
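
The `KubeletPSI` gate mentioned above can also be set explicitly in the kubelet configuration; a minimal sketch that simply states the (now default) value, not taken from this commit:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  # Beta and on by default in v1.34 per the change above; set false to opt out.
  KubeletPSI: true
```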

content/en/docs/concepts/cluster-administration/system-traces.md

Lines changed: 2 additions & 4 deletions
@@ -78,15 +78,15 @@ with `--tracing-config-file=<path-to-config>`. This is an example config that re
 spans for 1 in 10000 requests, and uses the default OpenTelemetry endpoint:
 
 ```yaml
-apiVersion: apiserver.config.k8s.io/v1beta1
+apiVersion: apiserver.config.k8s.io/v1
 kind: TracingConfiguration
 # default value
 #endpoint: localhost:4317
 samplingRatePerMillion: 100
 ```
 
 For more information about the `TracingConfiguration` struct, see
-[API server config API (v1beta1)](/docs/reference/config-api/apiserver-config.v1beta1/#apiserver-k8s-io-v1beta1-TracingConfiguration).
+[API server config API (v1)](/docs/reference/config-api/apiserver-config.v1/#apiserver-k8s-io-v1-TracingConfiguration).
 
 ### kubelet traces
 
@@ -106,8 +106,6 @@ This is an example snippet of a kubelet config that records spans for 1 in 10000
 ```yaml
 apiVersion: kubelet.config.k8s.io/v1beta1
 kind: KubeletConfiguration
-featureGates:
-  KubeletTracing: true
 tracing:
   # default value
   #endpoint: localhost:4317
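
For context, a complete kubelet tracing stanza might look like the sketch below; the sampling value mirrors the "1 in 10000 requests" example above and the endpoint is the default OpenTelemetry address, but the snippet itself is illustrative and not part of this commit:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
tracing:
  # Default OpenTelemetry collector endpoint; change if yours runs elsewhere.
  endpoint: localhost:4317
  # Record spans for 1 in 10000 requests.
  samplingRatePerMillion: 100
```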

content/en/docs/concepts/configuration/manage-resources-containers.md

Lines changed: 17 additions & 7 deletions
@@ -113,21 +113,24 @@ resource requests/limits of that type for each container in the Pod.
 
 {{< feature-state feature_gate_name="PodLevelResources" >}}
 
-Starting in Kubernetes 1.32, you can also specify resource requests and limits at
+Provided your cluster has the `PodLevelResources`
+[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) enabled,
+you can specify resource requests and limits at
 the Pod level. At the Pod level, Kubernetes {{< skew currentVersion >}}
 only supports resource requests or limits for specific resource types: `cpu` and /
-or `memory`. This feature is currently in alpha and with the feature enabled,
-Kubernetes allows you to declare an overall resource budget for the Pod, which is
-especially helpful when dealing with a large number of containers where it can be
-difficult to accurately gauge individual resource needs. Additionally, it enables
-containers within a Pod to share idle resources with each other, improving resource
-utilization.
+or `memory` and / or `hugepages`. With this feature, Kubernetes allows you to declare an overall resource
+budget for the Pod, which is especially helpful when dealing with a large number of
+containers where it can be difficult to accurately gauge individual resource needs.
+Additionally, it enables containers within a Pod to share idle resources with each
+other, improving resource utilization.
 
 For a Pod, you can specify resource limits and requests for CPU and memory by including the following:
 * `spec.resources.limits.cpu`
 * `spec.resources.limits.memory`
+* `spec.resources.limits.hugepages-<size>`
 * `spec.resources.requests.cpu`
 * `spec.resources.requests.memory`
+* `spec.resources.requests.hugepages-<size>`
 
 ## Resource units in Kubernetes
 
@@ -718,6 +721,12 @@ extender.
 }
 ```
 
+#### Extended resources allocation by DRA
+Extended resources allocation by DRA allows cluster administrators to specify an `extendedResourceName`
+in DeviceClass, then the devices matching the DeviceClass can be requested from a pod's extended
+resource requests. Read more about
+[Extended Resource allocation by DRA](/docs/concepts/scheduling-eviction/dynamic-resource-allocation/#extended-resource).
+
 ### Consuming extended resources
 
 Users can consume extended resources in Pod specs like CPU and memory.
@@ -934,3 +943,4 @@ memory limit (and possibly request) for that container.
 * Read about [project quotas](https://www.linux.org/docs/man8/xfs_quota.html) in XFS
 * Read more about the [kube-scheduler configuration reference (v1)](/docs/reference/config-api/kube-scheduler-config.v1/)
 * Read more about [Quality of Service classes for Pods](/docs/concepts/workloads/pods/pod-qos/)
+* Read more about [Extended Resource allocation by DRA](/docs/concepts/scheduling-eviction/dynamic-resource-allocation/#extended-resource)
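
To make the pod-level fields listed above concrete, here is an illustrative sketch of a Pod declaring a pod-level budget; the name, image, and values are assumptions, not content from this commit:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-level-resources-example   # illustrative name
spec:
  # Pod-level budget shared by all containers (requires the PodLevelResources feature gate).
  resources:
    requests:
      cpu: "1"
      memory: 1Gi
    limits:
      cpu: "2"
      memory: 2Gi
  containers:
  - name: app
    image: registry.k8s.io/pause:3.10   # placeholder image
```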

content/en/docs/concepts/containers/container-lifecycle-hooks.md

Lines changed: 0 additions & 7 deletions
@@ -64,13 +64,6 @@ There are three types of hook handlers that can be implemented for Containers:
   Resources consumed by the command are counted against the Container.
 * HTTP - Executes an HTTP request against a specific endpoint on the Container.
 * Sleep - Pauses the container for a specified duration.
-  This is a beta-level feature default enabled by the `PodLifecycleSleepAction`
-  [feature gate](/docs/reference/command-line-tools-reference/feature-gates/).
-
-{{< note >}}
-The beta level `PodLifecycleSleepActionAllowZero` feature gate which is enabled by default from v1.33.
-It allows you to set a sleep duration of zero seconds (effectively a no-op) for your Sleep lifecycle hooks.
-{{< /note >}}
 
 ### Hook handler execution
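
For reference, the Sleep handler described above is set under a container's `lifecycle` field; a minimal sketch with assumed names and an illustrative duration, not taken from this commit:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sleep-hook-example   # illustrative name
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.10   # placeholder image
    lifecycle:
      preStop:
        # Pause for 5 seconds before termination continues.
        sleep:
          seconds: 5
```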

content/en/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins.md

Lines changed: 12 additions & 9 deletions
@@ -270,11 +270,12 @@ the NUMA node where these devices are allocated. Also, for NUMA-based machines,
 information about memory and hugepages reserved for a container.
 
 Starting from Kubernetes v1.27, the `List` endpoint can provide information on resources
-of running pods allocated in `ResourceClaims` by the `DynamicResourceAllocation` API. To enable
-this feature `kubelet` must be started with the following flags:
+of running pods allocated in `ResourceClaims` by the `DynamicResourceAllocation` API.
+Starting from Kubernetes v1.34, this feature is enabled by default.
+To disable, `kubelet` must be started with the following flags:
 
 ```
---feature-gates=DynamicResourceAllocation=true,KubeletPodResourcesDynamicResources=true
+--feature-gates=KubeletPodResourcesDynamicResources=false
 ```
 
 ```gRPC
@@ -414,7 +415,7 @@ will continue working.
 
 ### `Get` gRPC endpoint {#grpc-endpoint-get}
 
-{{< feature-state state="alpha" for_k8s_version="v1.27" >}}
+{{< feature-state state="beta" for_k8s_version="v1.34" >}}
 
 The `Get` endpoint provides information on resources of a running Pod. It exposes information
 similar to those described in the `List` endpoint. The `Get` endpoint requires `PodName`
@@ -428,18 +429,19 @@ message GetPodResourcesRequest {
 }
 ```
 
-To enable this feature, you must start your kubelet services with the following flag:
+To disable this feature, you must start your kubelet services with the following flag:
 
 ```
---feature-gates=KubeletPodResourcesGet=true
+--feature-gates=KubeletPodResourcesGet=false
 ```
 
 The `Get` endpoint can provide Pod information related to dynamic resources
-allocated by the dynamic resource allocation API. To enable this feature, you must
-ensure your kubelet services are started with the following flags:
+allocated by the dynamic resource allocation API.
+Starting from Kubernetes v1.34, this feature is enabled by default.
+To disable, `kubelet` must be started with the following flags:
 
 ```
---feature-gates=KubeletPodResourcesGet=true,DynamicResourceAllocation=true,KubeletPodResourcesDynamicResources=true
+--feature-gates=KubeletPodResourcesDynamicResources=false
 ```
 
 ## Device plugin integration with the Topology Manager
@@ -509,3 +511,4 @@ Here are some examples of device plugin implementations:
 * Learn about the [Topology Manager](/docs/tasks/administer-cluster/topology-manager/)
 * Read about using [hardware acceleration for TLS ingress](/blog/2019/04/24/hardware-accelerated-ssl/tls-termination-in-ingress-controllers-using-kubernetes-device-plugins-and-runtimeclass/)
   with Kubernetes
+* Read more about [Extended Resource allocation by DRA](/docs/concepts/scheduling-eviction/dynamic-resource-allocation/#extended-resource)
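
The kubelet flags shown in this diff can equivalently be expressed in the kubelet configuration file; a sketch of the opt-out form described above, not part of this commit:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  # Both default to true in v1.34 per the changes above; set false to opt out.
  KubeletPodResourcesGet: false
  KubeletPodResourcesDynamicResources: false
```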

content/en/docs/concepts/policy/node-resource-managers.md

Lines changed: 1 addition & 1 deletion
@@ -207,7 +207,7 @@ listed in alphabetical order:
 : Prevent all the pods regardless of their Quality of Service class to run on reserved CPUs
 (available since Kubernetes v1.32)
 
-`prefer-align-cpus-by-uncorecache` (alpha, hidden by default)
+`prefer-align-cpus-by-uncorecache` (beta, visible by default)
 : Align CPUs by uncore (Last-Level) cache boundary on a best-effort way
 (available since Kubernetes v1.32)
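
The policy option above is enabled through the CPU manager settings in the kubelet configuration; a hedged sketch, not part of this commit:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cpuManagerPolicy: static
cpuManagerPolicyOptions:
  # Beta and visible by default in v1.34; aligns CPUs by uncore cache on a best-effort basis.
  prefer-align-cpus-by-uncorecache: "true"
```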

content/en/docs/concepts/scheduling-eviction/assign-pod-node.md

Lines changed: 24 additions & 0 deletions
@@ -641,6 +641,30 @@ spec:
 
 The above Pod will only run on the node `kube-01`.
 
+## nominatedNodeName
+
+{{< feature-state feature_gate_name="NominatedNodeNameForExpectation" >}}
+
+`nominatedNodeName` can be used for external components to nominate node for a pending pod.
+This nomination is best effort: it might be ignored if the scheduler determines the pod cannot go to a nominated node.
+
+Also, this field can be (over)written by the scheduler:
+- If the scheduler finds a node to nominate via the preemption.
+- If the scheduler decides where the pod is going, and move it to the binding cycle.
+- Note that, in this case, `nominatedNodeName` is put only when the pod has to go through `WaitOnPermit` or `PreBind` extension points.
+
+Here is an example of a Pod status using the `nominatedNodeName` field:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: nginx
+...
+status:
+  nominatedNodeName: kube-01
+```
+
 ## Pod topology spread constraints
 
 You can use _topology spread constraints_ to control how {{< glossary_tooltip text="Pods" term_id="Pod" >}}
