Commit 391eef6

Author: Anna
Merge pull request #24138 from kubernetes/dev-1.20: Official 1.20 Release Docs
2 parents: b905af1 + 44a3070

File tree: 97 files changed, +54312 / -1484 lines (only a subset of the changed files is shown below)

config.toml

Lines changed: 19 additions & 19 deletions
````diff
@@ -138,10 +138,10 @@ time_format_default = "January 02, 2006 at 3:04 PM PST"
 description = "Production-Grade Container Orchestration"
 showedit = true

-latest = "v1.19"
+latest = "v1.20"

-fullversion = "v1.19.0"
-version = "v1.19"
+fullversion = "v1.20.0"
+version = "v1.20"
 githubbranch = "master"
 docsbranch = "master"
 deprecated = false
@@ -183,40 +183,40 @@ js = [
 ]

 [[params.versions]]
-fullversion = "v1.19.0"
-version = "v1.19"
-githubbranch = "v1.19.0"
+fullversion = "v1.20.0"
+version = "v1.20"
+githubbranch = "v1.20.0"
 docsbranch = "master"
 url = "https://kubernetes.io"

 [[params.versions]]
-fullversion = "v1.18.8"
+fullversion = "v1.19.4"
+version = "v1.19"
+githubbranch = "v1.19.4"
+docsbranch = "release-1.19"
+url = "https://v1-19.docs.kubernetes.io"
+
+[[params.versions]]
+fullversion = "v1.18.12"
 version = "v1.18"
-githubbranch = "v1.18.8"
+githubbranch = "v1.18.12"
 docsbranch = "release-1.18"
 url = "https://v1-18.docs.kubernetes.io"

 [[params.versions]]
-fullversion = "v1.17.11"
+fullversion = "v1.17.14"
 version = "v1.17"
-githubbranch = "v1.17.11"
+githubbranch = "v1.17.14"
 docsbranch = "release-1.17"
 url = "https://v1-17.docs.kubernetes.io"

 [[params.versions]]
-fullversion = "v1.16.14"
+fullversion = "v1.16.15"
 version = "v1.16"
-githubbranch = "v1.16.14"
+githubbranch = "v1.16.15"
 docsbranch = "release-1.16"
 url = "https://v1-16.docs.kubernetes.io"

-[[params.versions]]
-fullversion = "v1.15.12"
-version = "v1.15"
-githubbranch = "v1.15.12"
-docsbranch = "release-1.15"
-url = "https://v1-15.docs.kubernetes.io"
-

 # User interface configuration
 [params.ui]
````

content/en/docs/concepts/architecture/nodes.md

Lines changed: 20 additions & 0 deletions
````diff
@@ -330,6 +330,26 @@ the kubelet can use topology hints when making resource assignment decisions.
 See [Control Topology Management Policies on a Node](/docs/tasks/administer-cluster/topology-manager/)
 for more information.

+## Graceful Node Shutdown {#graceful-node-shutdown}
+
+{{< feature-state state="alpha" for_k8s_version="v1.20" >}}
+
+If you have enabled the `GracefulNodeShutdown` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/), the kubelet attempts to detect the node system shutdown and terminates the pods running on the node.
+The kubelet ensures that pods follow the normal [pod termination process](/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination) during the node shutdown.
+
+When the `GracefulNodeShutdown` feature gate is enabled, the kubelet uses [systemd inhibitor locks](https://www.freedesktop.org/wiki/Software/systemd/inhibit/) to delay the node shutdown by a given duration. During a shutdown, the kubelet terminates pods in two phases:
+
+1. Terminate regular pods running on the node.
+2. Terminate [critical pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical) running on the node.
+
+The Graceful Node Shutdown feature is configured with two [`KubeletConfiguration`](/docs/tasks/administer-cluster/kubelet-config-file/) options:
+* `ShutdownGracePeriod`:
+  * Specifies the total duration that the node should delay the shutdown by. This is the total grace period for pod termination, for both regular and [critical pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical).
+* `ShutdownGracePeriodCriticalPods`:
+  * Specifies the duration used to terminate [critical pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical) during a node shutdown. This should be less than `ShutdownGracePeriod`.
+
+For example, if `ShutdownGracePeriod=30s` and `ShutdownGracePeriodCriticalPods=10s`, the kubelet delays the node shutdown by 30 seconds. During the shutdown, the first 20 (30 minus 10) seconds are reserved for gracefully terminating normal pods, and the last 10 seconds are reserved for terminating [critical pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical).

 ## {{% heading "whatsnext" %}}
````
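The two graceful-shutdown options above are kubelet configuration; a minimal sketch of the corresponding `KubeletConfiguration` fragment using the example durations (the camelCase field names and quoted-duration format are assumptions based on the usual KubeletConfiguration conventions, so verify against your kubelet version):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  GracefulNodeShutdown: true           # alpha in v1.20, off by default
shutdownGracePeriod: "30s"             # total delay applied to the node shutdown
shutdownGracePeriodCriticalPods: "10s" # tail of that window reserved for critical pods
```

With these values the kubelet reserves the first 20 seconds for regular pods and the final 10 seconds for critical pods, matching the worked example above.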

content/en/docs/concepts/cluster-administration/flow-control.md

Lines changed: 44 additions & 31 deletions
````diff
@@ -6,7 +6,7 @@ min-kubernetes-server-version: v1.18

 <!-- overview -->

-{{< feature-state state="alpha" for_k8s_version="v1.18" >}}
+{{< feature-state state="beta" for_k8s_version="v1.20" >}}

 Controlling the behavior of the Kubernetes API server in an overload situation
 is a key task for cluster administrators. The {{< glossary_tooltip
@@ -37,25 +37,30 @@ Fairness feature enabled.

 <!-- body -->

-## Enabling API Priority and Fairness
+## Enabling/Disabling API Priority and Fairness

 The API Priority and Fairness feature is controlled by a feature gate
-and is not enabled by default. See
+and is enabled by default. See
 [Feature Gates](/docs/reference/command-line-tools-reference/feature-gates/)
-for a general explanation of feature gates and how to enable and disable them. The
-name of the feature gate for APF is "APIPriorityAndFairness". This
-feature also involves an {{< glossary_tooltip term_id="api-group"
-text="API Group" >}} that must be enabled. You can do these
-things by adding the following command-line flags to your
-`kube-apiserver` invocation:
+for a general explanation of feature gates and how to enable and
+disable them. The name of the feature gate for APF is
+"APIPriorityAndFairness". This feature also involves an
+{{< glossary_tooltip term_id="api-group" text="API Group" >}} with: (a) a
+`v1alpha1` version, disabled by default, and (b) a `v1beta1`
+version, enabled by default. You can disable the feature
+gate and the API group `v1beta1` version by adding the following
+command-line flags to your `kube-apiserver` invocation:

 ```shell
 kube-apiserver \
---feature-gates=APIPriorityAndFairness=true \
---runtime-config=flowcontrol.apiserver.k8s.io/v1alpha1=true \
+--feature-gates=APIPriorityAndFairness=false \
+--runtime-config=flowcontrol.apiserver.k8s.io/v1beta1=false \
 # …and other flags as usual
 ```

+Alternatively, you can enable the `v1alpha1` version of the API group
+with `--runtime-config=flowcontrol.apiserver.k8s.io/v1alpha1=true`.
+
 The command-line flag `--enable-priority-and-fairness=false` will disable the
 API Priority and Fairness feature, even if other flags have enabled it.
````

````diff
@@ -189,12 +194,14 @@ that originate from outside your cluster.

 ## Resources
 The flow control API involves two kinds of resources.
-[PriorityLevelConfigurations](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#prioritylevelconfiguration-v1alpha1-flowcontrol-apiserver-k8s-io)
+[PriorityLevelConfigurations](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#prioritylevelconfiguration-v1beta1-flowcontrol-apiserver-k8s-io)
 define the available isolation classes, the share of the available concurrency
 budget that each can handle, and allow for fine-tuning queuing behavior.
-[FlowSchemas](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#flowschema-v1alpha1-flowcontrol-apiserver-k8s-io)
-are used to classify individual inbound requests, matching each to a single
-PriorityLevelConfiguration.
+[FlowSchemas](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#flowschema-v1beta1-flowcontrol-apiserver-k8s-io)
+are used to classify individual inbound requests, matching each to a
+single PriorityLevelConfiguration. There is also a `v1alpha1` version
+of the same API group, and it has the same Kinds with the same syntax and
+semantics.

 ### PriorityLevelConfiguration
 A PriorityLevelConfiguration represents a single isolation class. Each
@@ -331,15 +338,22 @@ PriorityLevelConfigurations.

 ### Metrics

+{{< note >}}
+In versions of Kubernetes before v1.20, the labels `flow_schema` and
+`priority_level` were inconsistently named `flowSchema` and `priorityLevel`,
+respectively. If you're running Kubernetes versions v1.19 and earlier, you
+should refer to the documentation for your version.
+{{< /note >}}
+
 When you enable the API Priority and Fairness feature, the kube-apiserver
 exports additional metrics. Monitoring these can help you determine whether your
 configuration is inappropriately throttling important traffic, or find
 poorly-behaved workloads that may be harming system health.

 * `apiserver_flowcontrol_rejected_requests_total` is a counter vector
   (cumulative since server start) of requests that were rejected,
-  broken down by the labels `flowSchema` (indicating the one that
-  matched the request), `priorityLevel` (indicating the one to which
+  broken down by the labels `flow_schema` (indicating the one that
+  matched the request), `priority_level` (indicating the one to which
   the request was assigned), and `reason`. The `reason` label will
   have one of the following values:
   * `queue-full`, indicating that too many requests were already
@@ -352,8 +366,8 @@ poorly-behaved workloads that may be harming system health.

 * `apiserver_flowcontrol_dispatched_requests_total` is a counter
   vector (cumulative since server start) of requests that began
-  executing, broken down by the labels `flowSchema` (indicating the
-  one that matched the request) and `priorityLevel` (indicating the
+  executing, broken down by the labels `flow_schema` (indicating the
+  one that matched the request) and `priority_level` (indicating the
   one to which the request was assigned).

 * `apiserver_current_inqueue_requests` is a gauge vector of recent
````
````diff
@@ -384,25 +398,25 @@ poorly-behaved workloads that may be harming system health.

 * `apiserver_flowcontrol_current_inqueue_requests` is a gauge vector
   holding the instantaneous number of queued (not executing) requests,
-  broken down by the labels `priorityLevel` and `flowSchema`.
+  broken down by the labels `priority_level` and `flow_schema`.

 * `apiserver_flowcontrol_current_executing_requests` is a gauge vector
   holding the instantaneous number of executing (not waiting in a
-  queue) requests, broken down by the labels `priorityLevel` and
-  `flowSchema`.
+  queue) requests, broken down by the labels `priority_level` and
+  `flow_schema`.

 * `apiserver_flowcontrol_priority_level_request_count_samples` is a
   histogram vector of observations of the then-current number of
   requests broken down by the labels `phase` (which takes on the
-  values `waiting` and `executing`) and `priorityLevel`. Each
+  values `waiting` and `executing`) and `priority_level`. Each
   histogram gets observations taken periodically, up through the last
   activity of the relevant sort. The observations are made at a high
   rate.

 * `apiserver_flowcontrol_priority_level_request_count_watermarks` is a
   histogram vector of high or low water marks of the number of
   requests broken down by the labels `phase` (which takes on the
-  values `waiting` and `executing`) and `priorityLevel`; the label
+  values `waiting` and `executing`) and `priority_level`; the label
   `mark` takes on values `high` and `low`. The water marks are
   accumulated over windows bounded by the times when an observation
   was added to
@@ -411,7 +425,7 @@ poorly-behaved workloads that may be harming system health.

 * `apiserver_flowcontrol_request_queue_length_after_enqueue` is a
   histogram vector of queue lengths for the queues, broken down by
-  the labels `priorityLevel` and `flowSchema`, as sampled by the
+  the labels `priority_level` and `flow_schema`, as sampled by the
   enqueued requests. Each request that gets queued contributes one
   sample to its histogram, reporting the length of the queue just
   after the request was added. Note that this produces different
@@ -428,12 +442,12 @@ poorly-behaved workloads that may be harming system health.
 * `apiserver_flowcontrol_request_concurrency_limit` is a gauge vector
   holding the computed concurrency limit (based on the API server's
   total concurrency limit and PriorityLevelConfigurations' concurrency
-  shares), broken down by the label `priorityLevel`.
+  shares), broken down by the label `priority_level`.

 * `apiserver_flowcontrol_request_wait_duration_seconds` is a histogram
   vector of how long requests spent queued, broken down by the labels
-  `flowSchema` (indicating which one matched the request),
-  `priorityLevel` (indicating the one to which the request was
+  `flow_schema` (indicating which one matched the request),
+  `priority_level` (indicating the one to which the request was
   assigned), and `execute` (indicating whether the request started
   executing).
 {{< note >}}
@@ -445,8 +459,8 @@ poorly-behaved workloads that may be harming system health.

 * `apiserver_flowcontrol_request_execution_seconds` is a histogram
   vector of how long requests took to actually execute, broken down by
-  the labels `flowSchema` (indicating which one matched the request)
-  and `priorityLevel` (indicating the one to which the request was
+  the labels `flow_schema` (indicating which one matched the request)
+  and `priority_level` (indicating the one to which the request was
   assigned).

 ### Debug endpoints
@@ -515,4 +529,3 @@ For background information on design details for API priority and fairness, see
 the [enhancement proposal](https://github.com/kubernetes/enhancements/blob/master/keps/sig-api-machinery/20190228-priority-and-fairness.md).
 You can make suggestions and feature requests via [SIG API
 Machinery](https://github.com/kubernetes/community/tree/master/sig-api-machinery).
-
````
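The flow-control label rename (`flowSchema`/`priorityLevel` to `flow_schema`/`priority_level`) is visible directly in the scrape output. A minimal, self-contained sketch; the metric values and label values below are made up for illustration, not captured from a real apiserver:

```shell
# Write a few sample exposition-format lines using the v1.20 label names.
cat <<'EOF' > /tmp/apf-metrics.txt
apiserver_flowcontrol_rejected_requests_total{flow_schema="service-accounts",priority_level="workload-low",reason="queue-full"} 3
apiserver_flowcontrol_dispatched_requests_total{flow_schema="exempt",priority_level="exempt"} 91
EOF

# Extract the flow_schema label from each series.
grep -o 'flow_schema="[^"]*"' /tmp/apf-metrics.txt
```

Against a live cluster you would scrape the kube-apiserver `/metrics` endpoint rather than a file; dashboards and alerts keyed on the old `flowSchema`/`priorityLevel` names need updating when upgrading to v1.20.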

content/en/docs/concepts/cluster-administration/system-logs.md

Lines changed: 21 additions & 0 deletions
````diff
@@ -91,6 +91,27 @@ List of components currently supporting JSON format:
 * {{< glossary_tooltip term_id="kube-scheduler" text="kube-scheduler" >}}
 * {{< glossary_tooltip term_id="kubelet" text="kubelet" >}}

+### Log sanitization
+
+{{< feature-state for_k8s_version="v1.20" state="alpha" >}}
+
+{{< warning >}}
+Log sanitization might incur significant computation overhead and therefore should not be enabled in production.
+{{< /warning >}}
+
+The `--experimental-logging-sanitization` flag enables the klog sanitization filter.
+If enabled, all log arguments are inspected for fields tagged as sensitive data (for example, passwords, keys, and tokens), and logging of these fields is prevented.
+
+List of components currently supporting log sanitization:
+* kube-controller-manager
+* kube-apiserver
+* kube-scheduler
+* kubelet
+
+{{< note >}}
+The log sanitization filter does not prevent user workload logs from leaking sensitive data.
+{{< /note >}}
+
 ### Log verbosity level

 The `-v` flag controls log verbosity. Increasing the value increases the number of logged events. Decreasing the value decreases the number of logged events.
````

content/en/docs/concepts/cluster-administration/system-metrics.md

Lines changed: 22 additions & 0 deletions
````diff
@@ -129,6 +129,28 @@ cloudprovider_gce_api_request_duration_seconds { request = "detach_disk"}
 cloudprovider_gce_api_request_duration_seconds { request = "list_disk"}
 ```

+
+### kube-scheduler metrics
+
+{{< feature-state for_k8s_version="v1.20" state="alpha" >}}
+
+The scheduler exposes optional metrics that report the requested resources and the desired limits of all running pods. These metrics can be used to build capacity planning dashboards, assess current or historical scheduling limits, quickly identify workloads that cannot schedule due to lack of resources, and compare actual usage to the pod's request.
+
+The kube-scheduler identifies the resource [requests and limits](/docs/concepts/configuration/manage-resources-containers/) configured for each Pod; when either a request or limit is non-zero, the kube-scheduler reports a metrics timeseries. The time series is labelled by:
+- namespace
+- pod name
+- the node where the pod is scheduled, or an empty string if not yet scheduled
+- priority
+- the assigned scheduler for that pod
+- the name of the resource (for example, `cpu`)
+- the unit of the resource, if known (for example, `cores`)
+
+Once a pod reaches completion (it has a `restartPolicy` of `Never` or `OnFailure` and is in the `Succeeded` or `Failed` pod phase, or has been deleted and all containers have a terminated state), the series is no longer reported, since the scheduler is now free to schedule other pods to run. The two metrics are called `kube_pod_resource_request` and `kube_pod_resource_limit`.
+
+The metrics are exposed at the HTTP endpoint `/metrics/resources` and require the same authorization as the `/metrics`
+endpoint on the scheduler. You must use the `--show-hidden-metrics-for-version=1.20` flag to expose these alpha stability metrics.
+

 ## {{% heading "whatsnext" %}}

 * Read about the [Prometheus text format](https://github.com/prometheus/docs/blob/master/content/docs/instrumenting/exposition_formats.md#text-based-format) for metrics
````
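As a worked example of consuming the two scheduler resource metrics, here is a self-contained sketch that sums CPU requests from sample `/metrics/resources` output; the exact label set and the values are illustrative assumptions, not real scheduler output:

```shell
# Sample exposition-format lines in the shape described above.
# Note the empty node="" for a pod that has not been scheduled yet.
cat <<'EOF' > /tmp/scheduler-resources.txt
kube_pod_resource_request{namespace="default",pod="web-0",node="node-a",scheduler="default-scheduler",priority="0",resource="cpu",unit="cores"} 0.5
kube_pod_resource_request{namespace="default",pod="web-1",node="",scheduler="default-scheduler",priority="0",resource="cpu",unit="cores"} 0.25
kube_pod_resource_limit{namespace="default",pod="web-0",node="node-a",scheduler="default-scheduler",priority="0",resource="cpu",unit="cores"} 1
EOF

# Sum requested CPU cores across all pods; the sample value is the last field.
awk '/^kube_pod_resource_request/ && /resource="cpu"/ {sum += $NF} END {print sum}' /tmp/scheduler-resources.txt
```

In practice you would fetch the scheduler's `/metrics/resources` endpoint (with the same credentials as `/metrics`) instead of reading a local file, or chart the series in Prometheus.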

content/en/docs/concepts/configuration/manage-resources-containers.md

Lines changed: 4 additions & 0 deletions
````diff
@@ -600,6 +600,10 @@ spec:
     example.com/foo: 1
 ```

+## PID limiting
+
+Process ID (PID) limits allow for the configuration of a kubelet to limit the number of PIDs that a given Pod can consume. See [PID Limiting](/docs/concepts/policy/pid-limiting/) for information.
+
 ## Troubleshooting

 ### My Pods are pending with event message failedScheduling
````
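The PID limiting addition above is configured on the kubelet side; a minimal sketch of the corresponding `KubeletConfiguration` fragment (the `podPidsLimit` field name and the value are assumptions for illustration; check the linked PID Limiting page for the authoritative settings):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
podPidsLimit: 1024   # maximum number of PIDs any single Pod on this node may consume
```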

content/en/docs/concepts/containers/runtime-class.md

Lines changed: 2 additions & 3 deletions
````diff
@@ -9,7 +9,7 @@ weight: 20

 <!-- overview -->

-{{< feature-state for_k8s_version="v1.14" state="beta" >}}
+{{< feature-state for_k8s_version="v1.20" state="stable" >}}

 This page describes the RuntimeClass resource and runtime selection mechanism.

@@ -66,7 +66,7 @@ The RuntimeClass resource currently only has 2 significant fields: the RuntimeClass name
 (`metadata.name`) and the handler (`handler`). The object definition looks like this:

 ```yaml
-apiVersion: node.k8s.io/v1beta1  # RuntimeClass is defined in the node.k8s.io API group
+apiVersion: node.k8s.io/v1  # RuntimeClass is defined in the node.k8s.io API group
 kind: RuntimeClass
 metadata:
   name: myclass  # The name the RuntimeClass will be referenced by
@@ -186,4 +186,3 @@ are accounted for in Kubernetes.
 - Read about the [Pod Overhead](/docs/concepts/scheduling-eviction/pod-overhead/) concept
 - [PodOverhead Feature Design](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/20190226-pod-overhead.md)
-
````
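A Pod selects a RuntimeClass by name via `runtimeClassName`; a minimal sketch referencing the `myclass` object from the diff above (the pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod                 # illustrative name
spec:
  runtimeClassName: myclass   # must match the RuntimeClass metadata.name
  containers:
  - name: app                 # illustrative
    image: k8s.gcr.io/pause:3.2   # illustrative image
```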

content/en/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins.md

Lines changed: 2 additions & 1 deletion
````diff
@@ -204,7 +204,8 @@ DaemonSet, `/var/lib/kubelet/pod-resources` must be mounted as a
 {{< glossary_tooltip term_id="volume" >}} in the plugin's
 [PodSpec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podspec-v1-core).

-Support for the "PodResources service" requires `KubeletPodResources` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) to be enabled. It is enabled by default starting with Kubernetes 1.15.
+Support for the "PodResources service" requires `KubeletPodResources` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) to be enabled.
+It is enabled by default starting with Kubernetes 1.15 and is v1 since Kubernetes 1.20.

 ## Device Plugin integration with the Topology Manager
````
