
Commit 969a3db

Merge pull request #26153 from kubernetes/dev-1.21
Official 1.21 Release Docs

2 parents 6d25262 + 73aec8e

149 files changed (+61506, -2394 lines)

config.toml
Lines changed: 19 additions & 19 deletions

@@ -138,10 +138,10 @@ time_format_default = "January 02, 2006 at 3:04 PM PST"
 description = "Production-Grade Container Orchestration"
 showedit = true

-latest = "v1.20"
+latest = "v1.21"

-fullversion = "v1.20.0"
-version = "v1.20"
+fullversion = "v1.21.0"
+version = "v1.21"
 githubbranch = "master"
 docsbranch = "master"
 deprecated = false
@@ -178,40 +178,40 @@ js = [
 ]

 [[params.versions]]
-fullversion = "v1.20.0"
-version = "v1.20"
-githubbranch = "v1.20.0"
+fullversion = "v1.21.0"
+version = "v1.21"
+githubbranch = "v1.21.0"
 docsbranch = "master"
 url = "https://kubernetes.io"

 [[params.versions]]
-fullversion = "v1.19.4"
+fullversion = "v1.20.5"
+version = "v1.20"
+githubbranch = "v1.20.5"
+docsbranch = "release-1.20"
+url = "https://v1-20.kubernetes.io"
+
+[[params.versions]]
+fullversion = "v1.19.9"
 version = "v1.19"
-githubbranch = "v1.19.4"
+githubbranch = "v1.19.9"
 docsbranch = "release-1.19"
 url = "https://v1-19.docs.kubernetes.io"

 [[params.versions]]
-fullversion = "v1.18.12"
+fullversion = "v1.18.17"
 version = "v1.18"
-githubbranch = "v1.18.12"
+githubbranch = "v1.18.17"
 docsbranch = "release-1.18"
 url = "https://v1-18.docs.kubernetes.io"

 [[params.versions]]
-fullversion = "v1.17.14"
+fullversion = "v1.17.17"
 version = "v1.17"
-githubbranch = "v1.17.14"
+githubbranch = "v1.17.17"
 docsbranch = "release-1.17"
 url = "https://v1-17.docs.kubernetes.io"

-[[params.versions]]
-fullversion = "v1.16.15"
-version = "v1.16"
-githubbranch = "v1.16.15"
-docsbranch = "release-1.16"
-url = "https://v1-16.docs.kubernetes.io"
-

 # User interface configuration
 [params.ui]

content/en/docs/concepts/architecture/cloud-controller.md
Lines changed: 2 additions & 0 deletions

@@ -206,6 +206,8 @@ rules:
 [Cloud Controller Manager Administration](/docs/tasks/administer-cluster/running-cloud-controller/#cloud-controller-manager)
 has instructions on running and managing the cloud controller manager.

+To upgrade a HA control plane to use the cloud controller manager, see [Migrate Replicated Control Plane To Use Cloud Controller Manager](/docs/tasks/administer-cluster/controller-manager-leader-migration/).
+
 Want to know how to implement your own cloud controller manager, or extend an existing project?

 The cloud controller manager uses Go interfaces to allow implementations from any cloud to be plugged in. Specifically, it uses the `CloudProvider` interface defined in [`cloud.go`](https://github.com/kubernetes/cloud-provider/blob/release-1.17/cloud.go#L42-L62) from [kubernetes/cloud-provider](https://github.com/kubernetes/cloud-provider).

content/en/docs/concepts/architecture/nodes.md
Lines changed: 25 additions & 8 deletions

@@ -346,26 +346,43 @@ the kubelet can use topology hints when making resource assignment decisions.
 See [Control Topology Management Policies on a Node](/docs/tasks/administer-cluster/topology-manager/)
 for more information.

-## Graceful Node Shutdown {#graceful-node-shutdown}
+## Graceful node shutdown {#graceful-node-shutdown}

-{{< feature-state state="alpha" for_k8s_version="v1.20" >}}
+{{< feature-state state="beta" for_k8s_version="v1.21" >}}
+
+The kubelet attempts to detect node system shutdown and terminates pods running on the node.

-If you have enabled the `GracefulNodeShutdown` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/), then the kubelet attempts to detect the node system shutdown and terminates pods running on the node.
 Kubelet ensures that pods follow the normal [pod termination process](/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination) during the node shutdown.

-When the `GracefulNodeShutdown` feature gate is enabled, kubelet uses [systemd inhibitor locks](https://www.freedesktop.org/wiki/Software/systemd/inhibit/) to delay the node shutdown with a given duration. During a shutdown, kubelet terminates pods in two phases:
+The Graceful node shutdown feature depends on systemd since it takes advantage of
+[systemd inhibitor locks](https://www.freedesktop.org/wiki/Software/systemd/inhibit/) to
+delay the node shutdown with a given duration.
+
+Graceful node shutdown is controlled with the `GracefulNodeShutdown`
+[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) which is
+enabled by default in 1.21.
+
+Note that by default, both configuration options described below,
+`ShutdownGracePeriod` and `ShutdownGracePeriodCriticalPods` are set to zero,
+thus not activating Graceful node shutdown functionality.
+To activate the feature, the two kubelet config settings should be configured appropriately and set to non-zero values.
+
+During a graceful shutdown, kubelet terminates pods in two phases:

 1. Terminate regular pods running on the node.
 2. Terminate [critical pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical) running on the node.

-Graceful Node Shutdown feature is configured with two [`KubeletConfiguration`](/docs/tasks/administer-cluster/kubelet-config-file/) options:
+Graceful node shutdown feature is configured with two [`KubeletConfiguration`](/docs/tasks/administer-cluster/kubelet-config-file/) options:
 * `ShutdownGracePeriod`:
   * Specifies the total duration that the node should delay the shutdown by. This is the total grace period for pod termination for both regular and [critical pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical).
 * `ShutdownGracePeriodCriticalPods`:
-  * Specifies the duration used to terminate [critical pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical) during a node shutdown. This should be less than `ShutdownGracePeriod`.
-
-For example, if `ShutdownGracePeriod=30s`, and `ShutdownGracePeriodCriticalPods=10s`, kubelet will delay the node shutdown by 30 seconds. During the shutdown, the first 20 (30-10) seconds would be reserved for gracefully terminating normal pods, and the last 10 seconds would be reserved for terminating [critical pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical).
+  * Specifies the duration used to terminate [critical pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical) during a node shutdown. This value should be less than `ShutdownGracePeriod`.

+For example, if `ShutdownGracePeriod=30s`, and
+`ShutdownGracePeriodCriticalPods=10s`, kubelet will delay the node shutdown by
+30 seconds. During the shutdown, the first 20 (30-10) seconds would be reserved
+for gracefully terminating normal pods, and the last 10 seconds would be
+reserved for terminating [critical pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical).

 ## {{% heading "whatsnext" %}}

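Note: the sketch below is not part of this commit; it only illustrates the two `KubeletConfiguration` settings referenced in the diff, using the 30s/10s values from the example. The lower-camel-case field names (`shutdownGracePeriod`, `shutdownGracePeriodCriticalPods`) are assumed to be the config-file spellings of the options named above.

```yaml
# Sketch of a kubelet config file (passed via --config) enabling graceful node
# shutdown. Assumes the GracefulNodeShutdown feature gate is on, which the diff
# above states is the default in 1.21.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
shutdownGracePeriod: 30s               # total shutdown delay: regular + critical pods
shutdownGracePeriodCriticalPods: 10s   # reserved for critical pods, leaving 20s for regular pods
```

Both settings default to zero, so leaving them unset keeps the feature inactive even with the gate enabled.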
content/en/docs/concepts/cluster-administration/logging.md
Lines changed: 4 additions & 1 deletion

@@ -83,12 +83,15 @@ As an example, you can find detailed information about how `kube-up.sh` sets
 up logging for COS image on GCP in the corresponding
 [`configure-helper` script](https://github.com/kubernetes/kubernetes/blob/{{< param "githubbranch" >}}/cluster/gce/gci/configure-helper.sh).

+When using a **CRI container runtime**, the kubelet is responsible for rotating the logs and managing the logging directory structure. The kubelet
+sends this information to the CRI container runtime and the runtime writes the container logs to the given location. The two kubelet flags `container-log-max-size` and `container-log-max-files` can be used to configure the maximum size for each log file and the maximum number of files allowed for each container respectively.
+
 When you run [`kubectl logs`](/docs/reference/generated/kubectl/kubectl-commands#logs) as in
 the basic logging example, the kubelet on the node handles the request and
 reads directly from the log file. The kubelet returns the content of the log file.

 {{< note >}}
-If an external system has performed the rotation,
+If an external system has performed the rotation or a CRI container runtime is used,
 only the contents of the latest log file will be available through
 `kubectl logs`. For example, if there's a 10MB file, `logrotate` performs
 the rotation and there are two files: one file that is 10MB in size and a second file that is empty.

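Note: as a hedged illustration of the flags mentioned in the diff, the same settings can also be expressed in a kubelet config file; the field names `containerLogMaxSize` and `containerLogMaxFiles` are assumed equivalents of the flags, and the values are only examples.

```yaml
# Sketch of kubelet log-rotation settings for a CRI container runtime.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerLogMaxSize: 10Mi   # rotate a container's log once it reaches roughly 10 MiB
containerLogMaxFiles: 5     # keep at most 5 log files per container
```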
content/en/docs/concepts/cluster-administration/system-metrics.md
Lines changed: 19 additions & 1 deletion

@@ -134,7 +134,7 @@ cloudprovider_gce_api_request_duration_seconds { request = "list_disk"}

 ### kube-scheduler metrics

-{{< feature-state for_k8s_version="v1.20" state="alpha" >}}
+{{< feature-state for_k8s_version="v1.21" state="beta" >}}

 The scheduler exposes optional metrics that report the requested resources and the desired limits of all running pods. These metrics can be used to build capacity planning dashboards, assess current or historical scheduling limits, quickly identify workloads that cannot schedule due to lack of resources, and compare actual usage to the pod's request.

@@ -152,6 +152,24 @@ Once a pod reaches completion (has a `restartPolicy` of `Never` or `OnFailure` a
 The metrics are exposed at the HTTP endpoint `/metrics/resources` and require the same authorization as the `/metrics`
 endpoint on the scheduler. You must use the `--show-hidden-metrics-for-version=1.20` flag to expose these alpha stability metrics.

+## Disabling metrics
+
+You can explicitly turn off metrics via the command line flag `--disabled-metrics`. This may be desired if, for example, a metric is causing a performance problem. The input is a list of disabled metrics (i.e. `--disabled-metrics=metric1,metric2`).
+
+## Metric cardinality enforcement
+
+Metrics with unbounded dimensions could cause memory issues in the components they instrument. To limit resource use, you can use the `--allow-label-value` command line option to dynamically configure an allow-list of label values for a metric.
+
+In alpha stage, the flag can only take in a series of mappings as metric label allow-list.
+Each mapping is of the format `<metric_name>,<label_name>=<allowed_labels>` where
+`<allowed_labels>` is a comma-separated list of acceptable label names.
+
+The overall format looks like:
+`--allow-label-value <metric_name>,<label_name>='<allow_value1>, <allow_value2>...', <metric_name2>,<label_name>='<allow_value1>, <allow_value2>...', ...`.
+
+Here is an example:
+`--allow-label-value number_count_metric,odd_number='1,3,5', number_count_metric,even_number='2,4,6', date_gauge_metric,weekend='Saturday,Sunday'`
+

 ## {{% heading "whatsnext" %}}

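Note: a hypothetical fragment showing where such flags could be set on a scheduler that runs as a static Pod (kubeadm-style layout and image tag assumed; the metric names are the placeholders and examples from the diff above, not real metrics).

```yaml
# Sketch only: kube-scheduler container command with the metric flags wired in.
spec:
  containers:
  - name: kube-scheduler
    image: k8s.gcr.io/kube-scheduler:v1.21.0       # illustrative image tag
    command:
    - kube-scheduler
    - --disabled-metrics=metric1,metric2           # placeholder metric names
    - --allow-label-value=number_count_metric,odd_number='1,3,5'
```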
content/en/docs/concepts/configuration/configmap.md
Lines changed: 2 additions & 2 deletions

@@ -236,9 +236,9 @@ ConfigMaps consumed as environment variables are not updated automatically and r

 ## Immutable ConfigMaps {#configmap-immutable}

-{{< feature-state for_k8s_version="v1.19" state="beta" >}}
+{{< feature-state for_k8s_version="v1.21" state="stable" >}}

-The Kubernetes beta feature _Immutable Secrets and ConfigMaps_ provides an option to set
+The Kubernetes feature _Immutable Secrets and ConfigMaps_ provides an option to set
 individual Secrets and ConfigMaps as immutable. For clusters that extensively use ConfigMaps
 (at least tens of thousands of unique ConfigMap to Pod mounts), preventing changes to their
 data has the following advantages:

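Note: a minimal sketch of the field this graduation covers; the name and data below are hypothetical.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: game-config   # hypothetical name
data:
  lives: "3"
immutable: true       # once set, the data can no longer be changed; delete and recreate to update
```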
content/en/docs/concepts/configuration/secret.md
Lines changed: 2 additions & 2 deletions

@@ -749,9 +749,9 @@ There are third party solutions for triggering restarts when secrets change.

 ## Immutable Secrets {#secret-immutable}

-{{< feature-state for_k8s_version="v1.19" state="beta" >}}
+{{< feature-state for_k8s_version="v1.21" state="stable" >}}

-The Kubernetes beta feature _Immutable Secrets and ConfigMaps_ provides an option to set
+The Kubernetes feature _Immutable Secrets and ConfigMaps_ provides an option to set
 individual Secrets and ConfigMaps as immutable. For clusters that extensively use Secrets
 (at least tens of thousands of unique Secret to Pod mounts), preventing changes to their
 data has the following advantages:

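Note: the same top-level `immutable` field applies to Secrets; a sketch with hypothetical contents.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials   # hypothetical name
type: Opaque
stringData:
  password: s3cr3t
immutable: true          # marks the Secret's data as read-only after creation
```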
content/en/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins.md
Lines changed: 61 additions & 3 deletions

@@ -193,9 +193,69 @@ for these devices:
 // node resources consumed by pods and containers on the node
 service PodResourcesLister {
     rpc List(ListPodResourcesRequest) returns (ListPodResourcesResponse) {}
+    rpc GetAllocatableResources(AllocatableResourcesRequest) returns (AllocatableResourcesResponse) {}
 }
 ```

+The `List` endpoint provides information on resources of running pods, with details such as the
+id of exclusively allocated CPUs, device id as it was reported by device plugins and id of
+the NUMA node where these devices are allocated.
+
+```gRPC
+// ListPodResourcesResponse is the response returned by List function
+message ListPodResourcesResponse {
+    repeated PodResources pod_resources = 1;
+}
+
+// PodResources contains information about the node resources assigned to a pod
+message PodResources {
+    string name = 1;
+    string namespace = 2;
+    repeated ContainerResources containers = 3;
+}
+
+// ContainerResources contains information about the resources assigned to a container
+message ContainerResources {
+    string name = 1;
+    repeated ContainerDevices devices = 2;
+    repeated int64 cpu_ids = 3;
+}
+
+// Topology describes hardware topology of the resource
+message TopologyInfo {
+    repeated NUMANode nodes = 1;
+}
+
+// NUMA representation of NUMA node
+message NUMANode {
+    int64 ID = 1;
+}
+
+// ContainerDevices contains information about the devices assigned to a container
+message ContainerDevices {
+    string resource_name = 1;
+    repeated string device_ids = 2;
+    TopologyInfo topology = 3;
+}
+```
+
+`GetAllocatableResources` provides information on resources initially available on the worker node.
+It provides more information than the kubelet exports to the API server.
+
+```gRPC
+// AllocatableResourcesResponse contains information about all the devices known by the kubelet
+message AllocatableResourcesResponse {
+    repeated ContainerDevices devices = 1;
+    repeated int64 cpu_ids = 2;
+}
+```
+
+`ContainerDevices` expose the topology information declaring to which NUMA cells the device is affine.
+The NUMA cells are identified using an opaque integer ID, whose value is consistent with what device
+plugins report [when they register themselves to the kubelet](https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/#device-plugin-integration-with-the-topology-manager).
+
 The gRPC service is served over a unix socket at `/var/lib/kubelet/pod-resources/kubelet.sock`.
 Monitoring agents for device plugin resources can be deployed as a daemon, or as a DaemonSet.
 The canonical directory `/var/lib/kubelet/pod-resources` requires privileged access, so monitoring
@@ -204,7 +264,7 @@ DaemonSet, `/var/lib/kubelet/pod-resources` must be mounted as a
 {{< glossary_tooltip term_id="volume" >}} in the device monitoring agent's
 [PodSpec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podspec-v1-core).

-Support for the "PodResources service" requires `KubeletPodResources` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) to be enabled.
+Support for the `PodResourcesLister` service requires the `KubeletPodResources` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) to be enabled.
 It is enabled by default starting with Kubernetes 1.15 and is v1 since Kubernetes 1.20.

 ## Device Plugin integration with the Topology Manager
@@ -256,5 +316,3 @@ Here are some examples of device plugin implementations:
 * Learn about [advertising extended resources](/docs/tasks/administer-cluster/extended-resource-node/) on a node
 * Read about using [hardware acceleration for TLS ingress](https://kubernetes.io/blog/2019/04/24/hardware-accelerated-ssl/tls-termination-in-ingress-controllers-using-kubernetes-device-plugins-and-runtimeclass/) with Kubernetes
 * Learn about the [Topology Manager](/docs/tasks/administer-cluster/topology-manager/)
-
-

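Note: to make the DaemonSet deployment requirement in the diff concrete, here is a sketch of a monitoring agent that mounts the pod-resources socket directory; the name and image are placeholders, not a real project.

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: device-monitoring-agent                    # placeholder name
spec:
  selector:
    matchLabels:
      app: device-monitoring-agent
  template:
    metadata:
      labels:
        app: device-monitoring-agent
    spec:
      containers:
      - name: agent
        image: example.com/pod-resources-agent:0.1  # placeholder image
        securityContext:
          privileged: true                          # /var/lib/kubelet/pod-resources needs privileged access
        volumeMounts:
        - name: pod-resources
          mountPath: /var/lib/kubelet/pod-resources
      volumes:
      - name: pod-resources
        hostPath:
          path: /var/lib/kubelet/pod-resources
```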
content/en/docs/concepts/overview/working-with-objects/namespaces.md
Lines changed: 11 additions & 1 deletion

@@ -30,7 +30,7 @@ Namespaces are a way to divide cluster resources between multiple users (via [re

 It is not necessary to use multiple namespaces to separate slightly different
 resources, such as different versions of the same software: use
-[labels](/docs/concepts/overview/working-with-objects/labels) to distinguish
+{{< glossary_tooltip text="labels" term_id="label" >}} to distinguish
 resources within the same namespace.

 ## Working with Namespaces
@@ -114,6 +114,16 @@ kubectl api-resources --namespaced=true
 kubectl api-resources --namespaced=false
 ```

+## Automatic labelling
+
+{{< feature-state state="beta" for_k8s_version="1.21" >}}
+
+The Kubernetes control plane sets an immutable {{< glossary_tooltip text="label" term_id="label" >}}
+`kubernetes.io/metadata.name` on all namespaces, provided that the `NamespaceDefaultLabelName`
+[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) is enabled.
+The value of the label is the namespace name.
+
+
 ## {{% heading "whatsnext" %}}

 * Learn more about [creating a new namespace](/docs/tasks/administer-cluster/namespaces/#creating-a-new-namespace).
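Note: with the `NamespaceDefaultLabelName` feature gate on, the label described in the Automatic labelling section above shows up in namespace metadata; a trimmed sketch of what `kubectl get namespace default -o yaml` might return:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: default
  labels:
    kubernetes.io/metadata.name: default   # the value always equals the namespace name
# (status and other fields omitted)
```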
Lines changed: 22 additions & 0 deletions

@@ -0,0 +1,22 @@
+---
+reviewers:
+- derekwaynecarr
+- klueska
+title: Node Resource Managers
+content_type: concept
+weight: 50
+---
+
+<!-- overview -->
+
+In order to support latency-critical and high-throughput workloads, Kubernetes offers a suite of Resource Managers. The managers aim to co-ordinate and optimise the alignment of node resources for pods configured with specific requirements for CPUs, devices, and memory (hugepages) resources.
+
+<!-- body -->
+
+The main manager, the Topology Manager, is a Kubelet component that co-ordinates the overall resource management process through its [policy](/docs/tasks/administer-cluster/topology-manager/).
+
+The configuration of individual managers is elaborated in dedicated documents:
+
+- [CPU Manager Policies](/docs/tasks/administer-cluster/cpu-management-policies/)
+- [Device Manager](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/#device-plugin-integration-with-the-topology-manager)
+- [Memory Manager Policies](/docs/tasks/administer-cluster/memory-manager/)
