
Commit 1332f3f

Merge pull request #48693 from michellengnx/merged-main-dev-1.32
Merge main branch into dev-1.32
2 parents d00b46e + d021207 commit 1332f3f

35 files changed: +2469 −176 lines changed

content/de/docs/contribute/participate/_index.md

Lines changed: 1 addition & 1 deletion

@@ -74,7 +74,7 @@ Either by listing individual GitHub usernames or GitHub groups.

The combination of OWNERS files and front matter in Markdown files determines which recommendations PR owners receive from automated systems, and whom they should ask for a technical and editorial review of their PR.

## How merging works

- When a pull request is merged into the branch where the content is to be published, that content is published on http://kubernetes.io. To ensure that the quality of the published content is high, we limit merging of pull requests to
+ When a pull request is merged into the branch where the content is to be published, that content is published on https://kubernetes.io. To ensure that the quality of the published content is high, we limit merging of pull requests to
SIG Docs approvers. Here is how it works:

- If a pull request has both the `lgtm` and the `approve` label and has no `hold` label,

Lines changed: 92 additions & 0 deletions
@@ -0,0 +1,92 @@
---
layout: blog
title: 'Kubernetes v1.32 sneak peek'
date: 2024-11-08
slug: kubernetes-1-32-upcoming-changes
author: >
  Matteo Bianchi,
  Edith Puclla,
  William Rizzo,
  Ryota Sawada,
  Rashan Smith
---
As we get closer to the release date for Kubernetes v1.32, the project develops and matures. Features may be deprecated, removed, or replaced with better ones for the project's overall health.

This blog outlines some of the planned changes for the Kubernetes v1.32 release that the release team feels you should be aware of, to help you maintain your Kubernetes environment and keep up to date with the latest changes. The information below is based on the current status of the v1.32 release and may change before the actual release date.

## The Kubernetes API removal and deprecation process

The Kubernetes project has a well-documented [deprecation policy](/docs/reference/using-api/deprecation-policy/) for features. This policy states that stable APIs may only be deprecated when a newer, stable version of that API is available, and that APIs have a minimum lifetime for each stability level. A deprecated API that has been marked for removal in a future Kubernetes release will continue to function until removal (at least one year from the deprecation), and its usage will result in a warning being displayed. Removed APIs are no longer available in the current version, so you must migrate to the replacement instead.

* Generally available (GA) or stable API versions may be marked as deprecated but must not be removed within a major version of Kubernetes.

* Beta or pre-release API versions must be supported for 3 releases after the deprecation.

* Alpha or experimental API versions may be removed in any release without prior deprecation notice; this process can become a withdrawal in cases where a different implementation for the same feature is already in place.

Whether an API is removed due to a feature graduating from beta to stable or because that API did not succeed, all removals comply with this deprecation policy. Whenever an API is removed, migration options are communicated in the [deprecation guide](/docs/reference/using-api/deprecation-guide/).

## Note on the withdrawal of the old DRA implementation

The enhancement [#3063](https://github.com/kubernetes/enhancements/issues/3063) introduced Dynamic Resource Allocation (DRA) in Kubernetes 1.26.

However, in Kubernetes v1.32, this approach to DRA will be significantly changed. Code related to the original implementation will be removed, leaving KEP [#4381](https://github.com/kubernetes/enhancements/issues/4381) as the "new" base functionality.

The decision to change the existing approach originated from its incompatibility with cluster autoscaling: resource availability was non-transparent, which complicated decision-making for both the Cluster Autoscaler and controllers. The newly added structured parameter model replaces that functionality.

This removal will allow Kubernetes to handle new hardware requirements and resource claims more predictably, bypassing the complexities of back-and-forth API calls to the kube-apiserver.

Please also see the enhancement issue [#3063](https://github.com/kubernetes/enhancements/issues/3063) to find out more.
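
For illustration, here is a rough sketch of what a device claim can look like under the structured parameters model; the `resource.k8s.io/v1beta1` API version and the `example.com-gpu` device class name are assumptions for this example and depend on the DRA drivers installed in a cluster:

```yaml
# Hypothetical ResourceClaim using the structured parameters model.
# The device class name is an example; real names come from your DRA driver.
apiVersion: resource.k8s.io/v1beta1
kind: ResourceClaim
metadata:
  name: single-gpu
spec:
  devices:
    requests:
    - name: gpu
      deviceClassName: example.com-gpu
```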

## API removal

There is only a single API removal planned for [Kubernetes v1.32](/docs/reference/using-api/deprecation-guide/#v1-32):

* The `flowcontrol.apiserver.k8s.io/v1beta3` API version of FlowSchema and PriorityLevelConfiguration has been removed.

To prepare for this, you can edit your existing manifests and rewrite client software to use the `flowcontrol.apiserver.k8s.io/v1` API version, available since v1.29. All existing persisted objects are accessible via the new API. Notable changes in `flowcontrol.apiserver.k8s.io/v1` include that the PriorityLevelConfiguration `spec.limited.nominalConcurrencyShares` field only defaults to 30 when unspecified, and an explicit value of 0 is not changed to 30.
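
As a sketch, migrating mostly means updating the `apiVersion` in your manifests; a PriorityLevelConfiguration under the v1 API might look like the following (the object name and values here are illustrative):

```yaml
apiVersion: flowcontrol.apiserver.k8s.io/v1  # previously flowcontrol.apiserver.k8s.io/v1beta3
kind: PriorityLevelConfiguration
metadata:
  name: example-priority-level  # illustrative name
spec:
  type: Limited
  limited:
    # In v1 this defaults to 30 only when unspecified;
    # an explicit 0 is no longer changed to 30.
    nominalConcurrencyShares: 30
    limitResponse:
      type: Reject
```
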
For more information, please refer to the [API deprecation guide](/docs/reference/using-api/deprecation-guide/#v1-32).

## Sneak peek of Kubernetes v1.32

The following list of enhancements is likely to be included in the v1.32 release. This is not a commitment and the release content is subject to change.

### Even more DRA enhancements!

In this release, like the previous one, the Kubernetes project continues to propose a number of enhancements to Dynamic Resource Allocation (DRA), a key component of the Kubernetes resource management system. These enhancements aim to improve the flexibility and efficiency of resource allocation for workloads that require specialized hardware, such as GPUs, FPGAs, and network adapters. This release introduces improvements including the addition of resource health status in the Pod status, as outlined in KEP [#4680](https://github.com/kubernetes/enhancements/issues/4680).

#### Add resource health status to the Pod status

It isn't easy to know when a Pod uses a device that has failed or is temporarily unhealthy. KEP [#4680](https://github.com/kubernetes/enhancements/issues/4680) proposes exposing device health via the Pod `status`, making troubleshooting of Pod crashes easier.
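
To give a feel for the idea, the sketch below shows roughly how device health could surface in a Pod's status; the exact field names may still change while the KEP progresses, and the claim name and device ID here are made up:

```yaml
status:
  containerStatuses:
  - name: training-container
    allocatedResourcesStatus:  # proposed by KEP #4680
    - name: claim:single-gpu   # hypothetical resource claim
      resources:
      - resourceID: driver.example.com/gpu-0  # hypothetical device ID
        health: Unhealthy      # e.g. Healthy, Unhealthy, or Unknown
```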

### Windows strikes back!

KEP [#4802](https://github.com/kubernetes/enhancements/issues/4802) adds support for graceful shutdowns of Windows nodes in Kubernetes clusters. Before this release, Kubernetes provided graceful node shutdown functionality for Linux nodes but lacked equivalent support for Windows. This enhancement enables the kubelet on Windows nodes to handle system shutdown events properly, ensuring that Pods running on Windows nodes are gracefully terminated and that workloads can be rescheduled without disruption. This improvement enhances the reliability and stability of clusters that include Windows nodes, especially during planned maintenance or system updates.
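
For reference, graceful node shutdown on Linux is configured through two kubelet settings, and this KEP brings equivalent behavior to Windows nodes; a minimal sketch (durations are illustrative, and Windows-specific details may differ):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Total time the node delays shutdown so Pods can terminate gracefully.
shutdownGracePeriod: 30s
# Portion of that time reserved for critical Pods.
shutdownGracePeriodCriticalPods: 10s
```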

### Allow special characters in environment variables

With the graduation of this [enhancement](https://github.com/kubernetes/enhancements/issues/4369) to beta, Kubernetes now allows almost all printable ASCII characters (excluding "=") to be used as environment variable names. This change lifts the limitations previously imposed on variable naming and accommodates a broader range of application needs. The relaxed validation is enabled by default via the `RelaxedEnvironmentVariableValidation` feature gate, letting users define environment variables without the earlier strict constraints; this is especially helpful for developers working with applications such as .NET Core that require special characters in their configurations.
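
As an illustration, with the relaxed validation a Pod can declare variable names that were previously rejected; the names and values below are made up:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: envvar-demo  # illustrative
spec:
  containers:
  - name: app
    image: busybox:1.36
    command: ["sleep", "3600"]
    env:
    # A ':'-separated name, common in .NET Core configuration,
    # previously failed environment variable name validation.
    - name: "ConnectionStrings:DefaultConnection"
      value: "Server=db;Database=app"
```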

### Make Kubernetes aware of the LoadBalancer behavior

KEP [#1860](https://github.com/kubernetes/enhancements/issues/1860) graduates to GA, introducing the `ipMode` field for a Service of `type: LoadBalancer`, which can be set to either `"VIP"` or `"Proxy"`. This enhancement is aimed at improving how cloud providers' load balancers interact with kube-proxy, and it is a change that is transparent to the end user. The existing behavior of kube-proxy is preserved when using `"VIP"`, where kube-proxy handles the load balancing. Using `"Proxy"` results in traffic being sent directly to the load balancer, giving cloud providers greater control instead of relying on kube-proxy; this means that you could see an improvement in the performance of your load balancer for some cloud providers.
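
The `ipMode` field lives in the Service's status and is set by the cloud provider's controller rather than by users; an illustrative status stanza (the address is an example):

```yaml
status:
  loadBalancer:
    ingress:
    - ip: 203.0.113.10  # example address
      ipMode: Proxy     # or VIP; set by the cloud provider's controller
```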

### Retry generate name for resources

This [enhancement](https://github.com/kubernetes/enhancements/issues/4420) improves how name conflicts are handled for Kubernetes resources created with the `generateName` field. Previously, if a name conflict occurred, the API server returned a 409 HTTP Conflict error and clients had to manually retry the request. With this update, the API server automatically retries generating a new name up to seven times in case of a conflict. This significantly reduces the chance of collision, ensuring smooth generation of up to 1 million names with less than a 0.1% probability of a conflict and providing more resilience for large-scale workloads.
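
As a refresher, `generateName` asks the API server to append a random five-character suffix to the prefix you supply; with this enhancement, the server itself retries when the generated name happens to collide. A minimal illustrative example:

```yaml
apiVersion: v1
kind: Pod
metadata:
  generateName: worker-  # server creates e.g. worker-x8k2p, retrying on conflict
spec:
  containers:
  - name: main
    image: busybox:1.36
    command: ["sleep", "3600"]
```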

## Want to know more?

New features and deprecations are also announced in the Kubernetes release notes. We will formally announce what's new in [Kubernetes v1.32](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.32.md) as part of the CHANGELOG for this release.

You can see the announcements of changes in the release notes for:

* [Kubernetes v1.31](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.31.md)
* [Kubernetes v1.30](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.30.md)
* [Kubernetes v1.29](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.29.md)
* [Kubernetes v1.28](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.28.md)

content/en/docs/concepts/configuration/liveness-readiness-startup-probes.md

Lines changed: 4 additions & 5 deletions

@@ -16,22 +16,21 @@ Kubernetes has various types of probes:

## Liveness probe

- Liveness probes determine when to restart a container. For example, liveness probes could catch a deadlock, when an application is running, but unable to make progress.
+ Liveness probes determine when to restart a container. For example, liveness probes could catch a deadlock when an application is running but unable to make progress.

If a container fails its liveness probe repeatedly, the kubelet restarts the container.

- Liveness probes do not wait for readiness probes to succeed. If you want to wait before
- executing a liveness probe you can either define `initialDelaySeconds`, or use a
+ Liveness probes do not wait for readiness probes to succeed. If you want to wait before executing a liveness probe, you can either define `initialDelaySeconds` or use a
[startup probe](#startup-probe).

## Readiness probe

- Readiness probes determine when a container is ready to start accepting traffic. This is useful when waiting for an application to perform time-consuming initial tasks, such as establishing network connections, loading files, and warming caches.
+ Readiness probes determine when a container is ready to start accepting traffic. This is useful when waiting for an application to perform time-consuming initial tasks, such as establishing network connections, loading files, and warming caches.

If the readiness probe returns a failed state, Kubernetes removes the pod from all matching service endpoints.

- Readiness probes runs on the container during its whole lifecycle.
+ Readiness probes run on the container during its whole lifecycle.

## Startup probe
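
For context, here is a minimal sketch showing how `initialDelaySeconds` and a startup probe fit into a Pod spec; the image, path, port, and timings are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probe-demo  # illustrative
spec:
  containers:
  - name: app
    image: example.com/app:1.0  # hypothetical image
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 10  # wait before the first liveness check
      periodSeconds: 5
    startupProbe:              # liveness checks begin only after this succeeds
      httpGet:
        path: /healthz
        port: 8080
      failureThreshold: 30
      periodSeconds: 10
```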

content/en/docs/concepts/configuration/manage-resources-containers.md

Lines changed: 9 additions & 9 deletions

@@ -199,7 +199,7 @@ On Linux, the container runtime typically configures
kernel {{< glossary_tooltip text="cgroups" term_id="cgroup" >}} that apply and enforce the
limits you defined.

- - The CPU limit defines a hard ceiling on how much CPU time that the container can use.
+ - The CPU limit defines a hard ceiling on how much CPU time the container can use.
During each scheduling interval (time slice), the Linux kernel checks to see if this
limit is exceeded; if so, the kernel waits before allowing that cgroup to resume execution.
- The CPU request typically defines a weighting. If several different containers (cgroups)

@@ -244,30 +244,30 @@ directly or from your monitoring tools.
If you do not specify a `sizeLimit` for an `emptyDir` volume, that volume may
consume up to that pod's memory limit (`Pod.spec.containers[].resources.limits.memory`).
If you do not set a memory limit, the pod has no upper bound on memory consumption,
- and can consume all available memory on the node. Kubernetes schedules pods based
+ and can consume all available memory on the node. Kubernetes schedules pods based
on resource requests (`Pod.spec.containers[].resources.requests`) and will not
consider memory usage above the request when deciding if another pod can fit on
- a given node. This can result in a denial of service and cause the OS to do
- out-of-memory (OOM) handling. It is possible to create any number of `emptyDir`s
+ a given node. This can result in a denial of service and cause the OS to do
+ out-of-memory (OOM) handling. It is possible to create any number of `emptyDir`s
that could potentially consume all available memory on the node, making OOM
more likely.
{{< /caution >}}

From the perspective of memory management, there are some similarities between
when a process uses memory as a work area and when using memory-backed
- `emptyDir`. But when using memory as a volume like memory-backed `emptyDir`,
- there are additional points below that you should be careful of.
+ `emptyDir`. But when using memory as a volume, like memory-backed `emptyDir`,
+ there are additional points below that you should be careful of:

* Files stored on a memory-backed volume are almost entirely managed by the
- user application. Unlike when used as a work area for a process, you can not
+ user application. Unlike when used as a work area for a process, you can not
rely on things like language-level garbage collection.
* The purpose of writing files to a volume is to save data or pass it between
- applications. Neither Kubernetes nor the OS may automatically delete files
+ applications. Neither Kubernetes nor the OS may automatically delete files
from a volume, so memory used by those files can not be reclaimed when the
system or the pod are under memory pressure.
* A memory-backed `emptyDir` is useful because of its performance, but memory
is generally much smaller in size and much higher in cost than other storage
- media, such as disks or SSDs. Using large amounts of memory for `emptyDir`
+ media, such as disks or SSDs. Using large amounts of memory for `emptyDir`
volumes may affect the normal operation of your pod or of the whole node,
so should be used carefully.
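
To make the caution concrete, here is a minimal sketch of a memory-backed `emptyDir` with both a volume `sizeLimit` and a container memory limit, so the volume cannot grow unbounded; names and sizes are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: memory-emptydir-demo  # illustrative
spec:
  containers:
  - name: app
    image: example.com/app:1.0  # hypothetical image
    resources:
      limits:
        memory: 256Mi           # memory-backed volume usage counts against this
    volumeMounts:
    - name: scratch
      mountPath: /cache
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory
      sizeLimit: 128Mi          # cap the volume below the memory limit
```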
