Commit 0471ca1

Merge pull request #44710 from kubernetes/dev-1.30
Official 1.30 Release Docs

2 parents 13dd6a8 + 344254b

File tree: 108 files changed, +2342 −450 lines


content/en/docs/concepts/architecture/garbage-collection.md

Lines changed: 8 additions & 1 deletion

@@ -141,7 +141,7 @@ until disk usage reaches the `LowThresholdPercent` value.
 
 {{< feature-state feature_gate_name="ImageMaximumGCAge" >}}
 
-As an alpha feature, you can specify the maximum time a local image can be unused for,
+As a beta feature, you can specify the maximum time a local image can be unused for,
 regardless of disk usage. This is a kubelet setting that you configure for each node.
 
 To configure the setting, enable the `ImageMaximumGCAge`
@@ -151,6 +151,13 @@ and also set a value for the `ImageMaximumGCAge` field in the kubelet configurat
 The value is specified as a Kubernetes _duration_; for example, you can set the configuration
 field to `3d12h`, which means 3 days and 12 hours.
 
+{{< note >}}
+This feature does not track image usage across kubelet restarts. If the kubelet
+is restarted, the tracked image age is reset, causing the kubelet to wait the full
+`ImageMaximumGCAge` duration before qualifying images for garbage collection
+based on image age.
+{{< /note >}}
+
 ### Container garbage collection {#container-image-garbage-collection}
 
 The kubelet garbage collects unused containers based on the following variables,
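For context, the setting added above could be expressed in a kubelet configuration file roughly as follows. This is an illustrative sketch, not part of the commit; the `3d12h` value and the explicit feature gate entry are assumptions:

```yaml
# KubeletConfiguration fragment (illustrative sketch)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  ImageMaximumGCAge: true   # beta in v1.30; shown explicitly for clarity
imageMaximumGCAge: 3d12h    # unused local images older than 3 days 12 hours become GC candidates
```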

content/en/docs/concepts/architecture/nodes.md

Lines changed: 35 additions & 6 deletions

@@ -516,14 +516,44 @@ During a non-graceful shutdown, Pods are terminated in the two phases:
 recovered since the user was the one who originally added the taint.
 {{< /note >}}
 
+### Forced storage detach on timeout {#storage-force-detach-on-timeout}
+
+In any situation where a pod deletion has not succeeded for 6 minutes, Kubernetes will
+force detach volumes being unmounted if the node is unhealthy at that instant. Any
+workload still running on the node that uses a force-detached volume will cause a
+violation of the
+[CSI specification](https://github.com/container-storage-interface/spec/blob/master/spec.md#controllerunpublishvolume),
+which states that `ControllerUnpublishVolume` "**must** be called after all
+`NodeUnstageVolume` and `NodeUnpublishVolume` on the volume are called and succeed".
+In such circumstances, volumes on the node in question might encounter data corruption.
+
+The forced storage detach behaviour is optional; users might opt to use the "Non-graceful
+node shutdown" feature instead.
+
+Force storage detach on timeout can be disabled by setting the `disable-force-detach-on-timeout`
+config field in `kube-controller-manager`. Disabling the force detach on timeout feature means
+that a volume that is hosted on a node that is unhealthy for more than 6 minutes will not have
+its associated
+[VolumeAttachment](/docs/reference/kubernetes-api/config-and-storage-resources/volume-attachment-v1/)
+deleted.
+
+After this setting has been applied, unhealthy pods still attached to volumes must be recovered
+via the [Non-Graceful Node Shutdown](#non-graceful-node-shutdown) procedure mentioned above.
+
+{{< note >}}
+- Caution must be taken while using the [Non-Graceful Node Shutdown](#non-graceful-node-shutdown) procedure.
+- Deviation from the steps documented above can result in data corruption.
+{{< /note >}}
+
 ## Swap memory management {#swap-memory}
 
 {{< feature-state feature_gate_name="NodeSwap" >}}
 
 To enable swap on a node, the `NodeSwap` feature gate must be enabled on
-the kubelet, and the `--fail-swap-on` command line flag or `failSwapOn`
+the kubelet (default is true), and the `--fail-swap-on` command line flag or `failSwapOn`
 [configuration setting](/docs/reference/config-api/kubelet-config.v1beta1/)
-must be set to false.
+must be set to false.
+To allow Pods to utilize swap, `swapBehavior` should not be set to `NoSwap` (which is the default behavior) in the kubelet config.
 
 {{< warning >}}
 When the memory swap feature is turned on, Kubernetes data such as the content
@@ -535,17 +565,16 @@ specify how a node will use swap memory. For example,
 
 ```yaml
 memorySwap:
-  swapBehavior: UnlimitedSwap
+  swapBehavior: LimitedSwap
 ```
 
-- `UnlimitedSwap` (default): Kubernetes workloads can use as much swap memory as they
-  request, up to the system limit.
+- `NoSwap` (default): Kubernetes workloads will not use swap.
 - `LimitedSwap`: The utilization of swap memory by Kubernetes workloads is subject to limitations.
   Only Pods of Burstable QoS are permitted to employ swap.
 
 If configuration for `memorySwap` is not specified and the feature gate is
 enabled, by default the kubelet will apply the same behaviour as the
-`UnlimitedSwap` setting.
+`NoSwap` setting.
 
 With `LimitedSwap`, Pods that do not fall under the Burstable QoS classification (i.e.
 `BestEffort`/`Guaranteed` QoS Pods) are prohibited from utilizing swap memory.
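Taken together, the swap-related settings touched by this change would combine in a kubelet configuration file roughly like this. This is an illustrative sketch, not part of the commit:

```yaml
# KubeletConfiguration fragment (illustrative sketch)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  NodeSwap: true            # beta, enabled by default in v1.30
failSwapOn: false           # required so the kubelet starts on a node with swap enabled
memorySwap:
  swapBehavior: LimitedSwap # default is NoSwap; LimitedSwap lets Burstable QoS Pods use swap
```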

content/en/docs/concepts/cluster-administration/logging.md

Lines changed: 28 additions & 3 deletions

@@ -108,6 +108,15 @@ using the [kubelet configuration file](/docs/tasks/administer-cluster/kubelet-co
 These settings let you configure the maximum size for each log file and the maximum number of
 files allowed for each container respectively.
 
+To perform efficient log rotation in clusters where the volume of logs generated by
+the workload is large, the kubelet also provides a mechanism to tune how logs are rotated:
+how many concurrent log rotations can be performed, and the interval at which logs are
+monitored and rotated as required.
+You can configure two kubelet [configuration settings](/docs/reference/config-api/kubelet-config.v1beta1/),
+`containerLogMaxWorkers` and `containerLogMonitorInterval`, using the
+[kubelet configuration file](/docs/tasks/administer-cluster/kubelet-config-file/).
+
 When you run [`kubectl logs`](/docs/reference/generated/kubectl/kubectl-commands#logs) as in
 the basic logging example, the kubelet on the node handles the request and
 reads directly from the log file. The kubelet returns the content of the log file.
@@ -148,7 +157,7 @@ If systemd is not present, the kubelet and container runtime write to `.log` fil
 run the kubelet via a helper tool, `kube-log-runner`, and use that tool to redirect
 kubelet logs to a directory that you choose.
 
-The kubelet always directs your container runtime to write logs into directories within
+By default, the kubelet directs your container runtime to write logs into directories within
 `/var/log/pods`.
 
 For more information on `kube-log-runner`, read [System Logs](/docs/concepts/cluster-administration/system-logs/#klog).
@@ -166,7 +175,7 @@ If you want to have logs written elsewhere, you can indirectly
 run the kubelet via a helper tool, `kube-log-runner`, and use that tool to redirect
 kubelet logs to a directory that you choose.
 
-However, the kubelet always directs your container runtime to write logs within the
+However, by default, the kubelet directs your container runtime to write logs within the
 directory `C:\var\log\pods`.
 
 For more information on `kube-log-runner`, read [System Logs](/docs/concepts/cluster-administration/system-logs/#klog).
@@ -180,6 +189,22 @@ the `/var/log` directory, bypassing the default logging mechanism (the component
 do not write to the systemd journal). You can use Kubernetes' storage mechanisms
 to map persistent storage into the container that runs the component.
 
+The kubelet allows changing the pod logs directory from the default `/var/log/pods`
+to a custom path. You can make this adjustment by configuring the `podLogsDir`
+parameter in the kubelet's configuration file.
+
+{{< caution >}}
+The default location `/var/log/pods` has been in use for an extended period and
+certain processes might implicitly assume this path. Therefore, altering this
+parameter must be approached with caution and at your own risk.
+
+Another caveat to keep in mind is that the kubelet expects the location to be on the
+same disk as `/var`. If the logs are on a separate filesystem from `/var`,
+the kubelet will not track that filesystem's usage, potentially leading to issues if
+it fills up.
+{{< /caution >}}
+
 For details about etcd and its logs, view the [etcd documentation](https://etcd.io/docs/).
 Again, you can use Kubernetes' storage mechanisms to map persistent storage into
 the container that runs the component.
@@ -200,7 +225,7 @@ as your responsibility.
 
 ## Cluster-level logging architectures
 
-While Kubernetes does not provide a native solution for cluster-level logging, there are 
+While Kubernetes does not provide a native solution for cluster-level logging, there are
 several common approaches you can consider. Here are some options:
 
 * Use a node-level logging agent that runs on every node.
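The log-rotation settings discussed in this file could be combined in a kubelet configuration file roughly as follows. This is an illustrative sketch, not part of the commit; the specific values are assumptions:

```yaml
# KubeletConfiguration fragment (illustrative sketch)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerLogMaxSize: 10Mi        # rotate a container log file once it reaches this size
containerLogMaxFiles: 5          # keep at most this many log files per container
containerLogMaxWorkers: 2        # concurrent log rotations (new in this release)
containerLogMonitorInterval: 10s # how often log sizes are checked for rotation
podLogsDir: /var/log/pods        # default; if changed, keep it on the same disk as /var
```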

content/en/docs/concepts/cluster-administration/system-logs.md

Lines changed: 14 additions & 13 deletions

@@ -122,7 +122,7 @@ second line.}
 
 ### Contextual Logging
 
-{{< feature-state for_k8s_version="v1.24" state="alpha" >}}
+{{< feature-state for_k8s_version="v1.30" state="beta" >}}
 
 Contextual logging builds on top of structured logging. It is primarily about
 how developers use logging calls: code based on that concept is more flexible
@@ -133,8 +133,9 @@ If developers use additional functions like `WithValues` or `WithName` in
 their components, then log entries contain additional information that gets
 passed into functions by their caller.
 
-Currently this is gated behind the `StructuredLogging` feature gate and
-disabled by default. The infrastructure for this was added in 1.24 without
+For Kubernetes {{< skew currentVersion >}}, this is gated behind the `ContextualLogging`
+[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) and is
+enabled by default. The infrastructure for this was added in 1.24 without
 modifying components. The
 [`component-base/logs/example`](https://github.com/kubernetes/kubernetes/blob/v1.24.0-beta.0/staging/src/k8s.io/component-base/logs/example/cmd/logger.go)
 command demonstrates how to use the new logging calls and how a component
@@ -147,14 +148,14 @@ $ go run . --help
 --feature-gates mapStringBool A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:
 AllAlpha=true|false (ALPHA - default=false)
 AllBeta=true|false (BETA - default=false)
-ContextualLogging=true|false (ALPHA - default=false)
+ContextualLogging=true|false (BETA - default=true)
 $ go run . --feature-gates ContextualLogging=true
 ...
-I0404 18:00:02.916429 451895 logger.go:94] "example/myname: runtime" foo="bar" duration="1m0s"
-I0404 18:00:02.916447 451895 logger.go:95] "example: another runtime" foo="bar" duration="1m0s"
+I0222 15:13:31.645988 197901 example.go:54] "runtime" logger="example.myname" foo="bar" duration="1m0s"
+I0222 15:13:31.646007 197901 example.go:55] "another runtime" logger="example" foo="bar" duration="1h0m0s" duration="1m0s"
 ```
 
-The `example` prefix and `foo="bar"` were added by the caller of the function
+The `logger` key and `foo="bar"` were added by the caller of the function
 which logs the `runtime` message and `duration="1m0s"` value, without having to
 modify that function.
@@ -165,8 +166,8 @@ is not in the log output anymore:
 ```console
 $ go run . --feature-gates ContextualLogging=false
 ...
-I0404 18:03:31.171945 452150 logger.go:94] "runtime" duration="1m0s"
-I0404 18:03:31.171962 452150 logger.go:95] "another runtime" duration="1m0s"
+I0222 15:14:40.497333 198174 example.go:54] "runtime" duration="1m0s"
+I0222 15:14:40.497346 198174 example.go:55] "another runtime" duration="1h0m0s" duration="1m0s"
 ```
 
 ### JSON log format
@@ -244,11 +245,11 @@ To help with debugging issues on nodes, Kubernetes v1.27 introduced a feature th
 running on the node. To use the feature, ensure that the `NodeLogQuery`
 [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) is enabled for that node, and that the
 kubelet configuration options `enableSystemLogHandler` and `enableSystemLogQuery` are both set to true. On Linux
-we assume that service logs are available via journald. On Windows we assume that service logs are available
-in the application log provider. On both operating systems, logs are also available by reading files within
+the assumption is that service logs are available via journald. On Windows the assumption is that service logs are
+available in the application log provider. On both operating systems, logs are also available by reading files within
 `/var/log/`.
 
-Provided you are authorized to interact with node objects, you can try out this alpha feature on all your nodes or
+Provided you are authorized to interact with node objects, you can try out this feature on all your nodes or
 just a subset. Here is an example to retrieve the kubelet service logs from a node:
 
 ```shell
@@ -293,4 +294,4 @@ kubectl get --raw "/api/v1/nodes/node-1.example/proxy/logs/?query=kubelet&patter
 * Read about [Contextual Logging](https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/3077-contextual-logging)
 * Read about [deprecation of klog flags](https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components)
 * Read about the [Conventions for logging severity](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-instrumentation/logging.md)
-
+* Read about [Log Query](https://kep.k8s.io/2258)

content/en/docs/concepts/configuration/configmap.md

Lines changed: 36 additions & 0 deletions

@@ -208,6 +208,42 @@ ConfigMaps consumed as environment variables are not updated automatically and r
 A container using a ConfigMap as a [subPath](/docs/concepts/storage/volumes#using-subpath) volume mount will not receive ConfigMap updates.
 {{< /note >}}
 
+### Using ConfigMaps as environment variables
+
+To use a ConfigMap in an {{< glossary_tooltip text="environment variable" term_id="container-env-variables" >}}
+in a Pod:
+
+1. For each container in your Pod specification, add an environment variable
+   for each ConfigMap key that you want to use to the
+   `env[].valueFrom.configMapKeyRef` field.
+1. Modify your image and/or command line so that the program looks for values
+   in the specified environment variables.
+
+This is an example of defining a ConfigMap as a pod environment variable:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: env-configmap
+spec:
+  containers:
+  - name: envars-test-container
+    image: nginx
+    env:
+    - name: CONFIGMAP_USERNAME
+      valueFrom:
+        configMapKeyRef:
+          name: myconfigmap
+          key: username
+```
+
+Note that the range of characters allowed for environment
+variable names in pods is [restricted](/docs/tasks/inject-data-application/define-environment-variable-container/#using-environment-variables-inside-of-your-config).
+If any keys do not meet the rules, those keys are not made available to your container, though
+the Pod is allowed to start.
+
 ## Immutable ConfigMaps {#configmap-immutable}
 
 {{< feature-state for_k8s_version="v1.21" state="stable" >}}

content/en/docs/concepts/configuration/secret.md

Lines changed: 4 additions & 19 deletions

@@ -567,25 +567,10 @@ in a Pod:
 For instructions, refer to
 [Define container environment variables using Secret data](/docs/tasks/inject-data-application/distribute-credentials-secure/#define-container-environment-variables-using-secret-data).
 
-#### Invalid environment variables {#restriction-env-from-invalid}
-
-If your environment variable definitions in your Pod specification are
-considered to be invalid environment variable names, those keys aren't made
-available to your container. The Pod is allowed to start.
-
-Kubernetes adds an Event with the reason set to `InvalidVariableNames` and a
-message that lists the skipped invalid keys. The following example shows a Pod that refers to a Secret named `mysecret`, where `mysecret` contains 2 invalid keys: `1badkey` and `2alsobad`.
-
-```shell
-kubectl get events
-```
-
-The output is similar to:
-
-```
-LASTSEEN FIRSTSEEN COUNT NAME KIND SUBOBJECT TYPE REASON
-0s 0s 1 dapi-test-pod Pod Warning InvalidEnvironmentVariableNames kubelet, 127.0.0.1 Keys [1badkey, 2alsobad] from the EnvFrom secret default/mysecret were skipped since they are considered invalid environment variable names.
-```
+Note that the range of characters allowed for environment variable
+names in pods is [restricted](/docs/tasks/inject-data-application/define-environment-variable-container/#using-environment-variables-inside-of-your-config).
+If any keys do not meet the rules, those keys are not made available to your container, though
+the Pod is allowed to start.
 
 ### Container image pull Secrets {#using-imagepullsecrets}
content/en/docs/concepts/containers/container-lifecycle-hooks.md

Lines changed: 1 addition & 2 deletions

@@ -56,8 +56,7 @@ There are three types of hook handlers that can be implemented for Containers:
   Resources consumed by the command are counted against the Container.
 * HTTP - Executes an HTTP request against a specific endpoint on the Container.
 * Sleep - Pauses the container for a specified duration.
-  The "Sleep" action is available when the [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
-  `PodLifecycleSleepAction` is enabled.
+  This is a beta-level feature enabled by default via the `PodLifecycleSleepAction` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/).
 
 ### Hook handler execution
 
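The Sleep hook described above can be wired into a Pod spec roughly like this. This is an illustrative sketch, not part of the commit; the image and the 5-second duration are assumptions:

```yaml
# Pod fragment using the Sleep lifecycle hook (illustrative sketch)
apiVersion: v1
kind: Pod
metadata:
  name: sleep-hook-demo
spec:
  containers:
  - name: app
    image: nginx
    lifecycle:
      preStop:
        sleep:
          seconds: 5   # pause 5 seconds before the container is sent the termination signal
```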
content/en/docs/concepts/extend-kubernetes/api-extension/custom-resources.md

Lines changed: 44 additions & 0 deletions

@@ -295,6 +295,50 @@ When you add a custom resource, you can access it using:
   (generating one is an advanced undertaking, but some projects may provide a client along with
   the CRD or AA).
 
+## Custom resource field selectors
+
+[Field Selectors](/docs/concepts/overview/working-with-objects/field-selectors/)
+let clients select custom resources based on the value of one or more resource
+fields.
+
+All custom resources support the `metadata.name` and `metadata.namespace` field
+selectors.
+
+Fields declared in a {{< glossary_tooltip term_id="CustomResourceDefinition" text="CustomResourceDefinition" >}}
+may also be used with field selectors when included in the `spec.versions[*].selectableFields` field of the
+{{< glossary_tooltip term_id="CustomResourceDefinition" text="CustomResourceDefinition" >}}.
+
+### Selectable fields for custom resources {#crd-selectable-fields}
+
+{{< feature-state feature_gate_name="CustomResourceFieldSelectors" >}}
+
+You need to enable the `CustomResourceFieldSelectors`
+[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) to
+use this behavior, which then applies to all CustomResourceDefinitions in your
+cluster.
+
+The `spec.versions[*].selectableFields` field of a {{< glossary_tooltip term_id="CustomResourceDefinition" text="CustomResourceDefinition" >}} may be used to
+declare which other fields in a custom resource may be used in field selectors.
+The following example adds the `.spec.color` and `.spec.size` fields as
+selectable fields.
+
+{{% code_sample file="customresourcedefinition/shirt-resource-definition.yaml" %}}
+
+Field selectors can then be used to get only resources with a `color` of `blue`:
+
+```shell
+kubectl get shirts.stable.example.com --field-selector spec.color=blue
+```
+
+The output should be:
+
+```
+NAME       COLOR   SIZE
+example1   blue    S
+example2   blue    M
+```
+
 ## {{% heading "whatsnext" %}}
 
 * Learn how to [Extend the Kubernetes API with the aggregation layer](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/).
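The `shirt-resource-definition.yaml` sample referenced above is not included in this diff; a CustomResourceDefinition using `selectableFields` might look roughly like this. This is a sketch based on the surrounding text, not the actual sample file:

```yaml
# Illustrative CRD with selectable fields (sketch, assumptions: group/names)
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: shirts.stable.example.com
spec:
  group: stable.example.com
  scope: Namespaced
  names:
    plural: shirts
    singular: shirt
    kind: Shirt
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              color:
                type: string
              size:
                type: string
    # declare which spec fields may be used in field selectors
    selectableFields:
    - jsonPath: .spec.color
    - jsonPath: .spec.size
```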

content/en/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md

Lines changed: 0 additions & 13 deletions

@@ -54,19 +54,6 @@ that plugin or [networking provider](/docs/concepts/cluster-administration/netwo
 
 ## Network Plugin Requirements
 
-For plugin developers and users who regularly build or deploy Kubernetes, the plugin may also need
-specific configuration to support kube-proxy. The iptables proxy depends on iptables, and the
-plugin may need to ensure that container traffic is made available to iptables. For example, if
-the plugin connects containers to a Linux bridge, the plugin must set the
-`net/bridge/bridge-nf-call-iptables` sysctl to `1` to ensure that the iptables proxy functions
-correctly. If the plugin does not use a Linux bridge, but uses something like Open vSwitch or
-some other mechanism instead, it should ensure container traffic is appropriately routed for the
-proxy.
-
-By default, if no kubelet network plugin is specified, the `noop` plugin is used, which sets
-`net/bridge/bridge-nf-call-iptables=1` to ensure simple configurations (like Docker with a bridge)
-work correctly with the iptables proxy.
-
 ### Loopback CNI
 
 In addition to the CNI plugin installed on the nodes for implementing the Kubernetes network
