
Commit 3a30941

Merge pull request #30760 from jlbutler/merged-main-dev-1.23
Merged main dev 1.23
2 parents 3d07786 + 584421f commit 3a30941

152 files changed: +1691 -942 lines


assets/scss/_custom.scss

Lines changed: 22 additions & 1 deletion
@@ -601,7 +601,7 @@ body.td-documentation {
   #announcement {
     > * {
       color: inherit;
-      background: inherit;
+      background: transparent;
     }
 
     a {
@@ -765,3 +765,24 @@ dl {
     margin-top: 1.5em;
   }
 }
+
+.release-details {
+  padding-left: 2em;
+
+  > :not(p) {
+    font-size: 1.125em;
+  }
+
+  .release-inline-heading, .release-inline-value {
+    display: inline-block
+  }
+
+  .release-inline-value {
+    padding-left: 0.25em;
+  }
+
+  p {
+    margin-top: 1em;
+    margin-bottom: 1em;
+  }
+}

content/en/_index.html

Lines changed: 2 additions & 2 deletions
@@ -43,12 +43,12 @@ <h2>The Challenges of Migrating 150+ Microservices to Kubernetes</h2>
 <button id="desktopShowVideoButton" onclick="kub.showVideo()">Watch Video</button>
 <br>
 <br>
-<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/?utm_source=kubernetes.io&utm_medium=nav&utm_campaign=kccncna21" button id="desktopKCButton">Attend KubeCon North America on October 11-15, 2021</a>
+<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe-2022/?utm_source=kubernetes.io&utm_medium=nav&utm_campaign=kccnceu22" button id="desktopKCButton">Attend KubeCon Europe on May 17-20, 2022</a>
 <br>
 <br>
 <br>
 <br>
-<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe-2022/?utm_source=kubernetes.io&utm_medium=nav&utm_campaign=kccnceu22" button id="desktopKCButton">Attend KubeCon Europe on May 17-20, 2022</a>
+<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/?utm_source=kubernetes.io&utm_medium=nav&utm_campaign=kccncna21" button id="desktopKCButton">Attend KubeCon North America on October 24-28, 2022</a>
 </div>
 <div id="videoPlayer">
 <iframe data-url="https://www.youtube.com/embed/H06qrNmGqyE?autoplay=1" frameborder="0" allowfullscreen></iframe>

content/en/blog/_posts/2021-09-13-read-write-once-pod-access-mode-alpha.md

Lines changed: 1 addition & 1 deletion
@@ -28,7 +28,7 @@ metadata:
   name: shared-cache
 spec:
   accessModes:
-  - ReadWriteMany # Allow many pods to access shared-cache simultaneously.
+  - ReadWriteMany # Allow many nodes to access shared-cache simultaneously.
   resources:
     requests:
       storage: 1Gi
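
The comment fix above clarifies that ReadWriteMany is about access from many nodes. For contrast, here is a minimal sketch of a claim using the single-pod access mode that this blog post announces as alpha; the claim name and size are illustrative and not part of the commit:

```yaml
# Illustrative PersistentVolumeClaim; ReadWriteOncePod restricts the volume
# to a single pod in the whole cluster (alpha access mode for CSI volumes).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: single-writer-cache   # hypothetical name for this sketch
spec:
  accessModes:
  - ReadWriteOncePod          # Allow only one pod to use this claim at a time.
  resources:
    requests:
      storage: 1Gi
```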

content/en/blog/_posts/2021-12-01-kubernetes-1.22-release-interview.md

Lines changed: 343 additions & 0 deletions
Large diffs are not rendered by default.

content/en/docs/concepts/architecture/nodes.md

Lines changed: 11 additions & 6 deletions
@@ -402,7 +402,7 @@ Graceful node shutdown is controlled with the `GracefulNodeShutdown`
 enabled by default in 1.21.
 
 Note that by default, both configuration options described below,
-`ShutdownGracePeriod` and `ShutdownGracePeriodCriticalPods` are set to zero,
+`shutdownGracePeriod` and `shutdownGracePeriodCriticalPods` are set to zero,
 thus not activating Graceful node shutdown functionality.
 To activate the feature, the two kubelet config settings should be configured appropriately and set to non-zero values.
 
@@ -412,13 +412,13 @@ During a graceful shutdown, kubelet terminates pods in two phases:
 2. Terminate [critical pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical) running on the node.
 
 Graceful node shutdown feature is configured with two [`KubeletConfiguration`](/docs/tasks/administer-cluster/kubelet-config-file/) options:
-* `ShutdownGracePeriod`:
+* `shutdownGracePeriod`:
   * Specifies the total duration that the node should delay the shutdown by. This is the total grace period for pod termination for both regular and [critical pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical).
-* `ShutdownGracePeriodCriticalPods`:
-  * Specifies the duration used to terminate [critical pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical) during a node shutdown. This value should be less than `ShutdownGracePeriod`.
+* `shutdownGracePeriodCriticalPods`:
+  * Specifies the duration used to terminate [critical pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical) during a node shutdown. This value should be less than `shutdownGracePeriod`.
 
-For example, if `ShutdownGracePeriod=30s`, and
-`ShutdownGracePeriodCriticalPods=10s`, kubelet will delay the node shutdown by
+For example, if `shutdownGracePeriod=30s`, and
+`shutdownGracePeriodCriticalPods=10s`, kubelet will delay the node shutdown by
 30 seconds. During the shutdown, the first 20 (30-10) seconds would be reserved
 for gracefully terminating normal pods, and the last 10 seconds would be
 reserved for terminating [critical pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical).
@@ -535,6 +535,11 @@ the kubelet, and the `--fail-swap-on` command line flag or `failSwapOn`
 [configuration setting](/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
 must be set to false.
 
+{{< warning >}}
+When the memory swap feature is turned on, Kubernetes data such as the content
+of Secret objects that were written to tmpfs now could be swapped to disk.
+{{< /warning >}}
+
 A user can also optionally configure `memorySwap.swapBehavior` in order to
 specify how a node will use swap memory. For example,
 
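For context on the renamed camelCase settings, a minimal kubelet configuration sketch could look like the following; it is not part of the commit, and the 30s/10s values simply mirror the worked example in the prose:

```yaml
# Sketch of a kubelet config file using the graceful node shutdown options
# discussed above (values are illustrative).
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
shutdownGracePeriod: "30s"              # total delay applied to the node shutdown
shutdownGracePeriodCriticalPods: "10s"  # portion of that delay reserved for critical pods
```
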
content/en/docs/concepts/cluster-administration/addons.md

Lines changed: 5 additions & 0 deletions
@@ -45,6 +45,11 @@ This page lists some of the available add-ons and links to their respective inst
 ## Infrastructure
 
 * [KubeVirt](https://kubevirt.io/user-guide/#/installation/installation) is an add-on to run virtual machines on Kubernetes. Usually run on bare-metal clusters.
+* The
+  [node problem detector](https://github.com/kubernetes/node-problem-detector)
+  runs on Linux nodes and reports system issues as either
+  [Events](/docs/reference/kubernetes-api/cluster-resources/event-v1/) or
+  [Node conditions](/docs/concepts/architecture/nodes/#condition).
 
 ## Legacy Add-ons
 

content/en/docs/concepts/cluster-administration/flow-control.md

Lines changed: 6 additions & 0 deletions
@@ -26,6 +26,10 @@ fair queuing technique so that, for example, a poorly-behaved
 {{< glossary_tooltip text="controller" term_id="controller" >}} need not
 starve others (even at the same priority level).
 
+This feature is designed to work well with standard controllers, which
+use informers and react to failures of API requests with exponential
+back-off, and other clients that also work this way.
+
 {{< caution >}}
 Requests classified as "long-running" — primarily watches — are not
 subject to the API Priority and Fairness filter. This is also true for
@@ -102,6 +106,8 @@ name of the matching FlowSchema plus a _flow distinguisher_ — which
 is either the requesting user, the target resource's namespace, or nothing — and the
 system attempts to give approximately equal weight to requests in different
 flows of the same priority level.
+To enable distinct handling of distinct instances, controllers that have
+many instances should authenticate with distinct usernames
 
 After classifying a request into a flow, the API Priority and Fairness
 feature then may assign the request to a queue. This assignment uses
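
To make the flow-distinguisher idea concrete, a rough FlowSchema sketch keyed on the requesting user is shown below. The schema name, priority level, service account, and API version are illustrative assumptions, not content from this commit:

```yaml
# Illustrative FlowSchema: distinguisherMethod ByUser makes each authenticated
# identity its own flow, which is why per-instance controller identities
# (distinct usernames) get separately weighted queues.
apiVersion: flowcontrol.apiserver.k8s.io/v1beta1   # served version varies by cluster release
kind: FlowSchema
metadata:
  name: example-controller-flows                   # hypothetical name
spec:
  priorityLevelConfiguration:
    name: workload-low                             # an existing priority level (assumed)
  matchingPrecedence: 1000
  distinguisherMethod:
    type: ByUser                                   # flow = FlowSchema name + requesting user
  rules:
  - subjects:
    - kind: ServiceAccount
      serviceAccount:
        name: example-controller                   # hypothetical controller identity
        namespace: example-system
    resourceRules:
    - verbs: ["*"]
      apiGroups: ["*"]
      resources: ["*"]
      clusterScope: true
      namespaces: ["*"]
```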

content/en/docs/concepts/cluster-administration/networking.md

Lines changed: 0 additions & 43 deletions
@@ -169,49 +169,6 @@ With this toolset DANM is able to provide multiple separated network interfaces,
 network that satisfies the Kubernetes requirements. Many
 people have reported success with Flannel and Kubernetes.
 
-### Google Compute Engine (GCE)
-
-For the Google Compute Engine cluster configuration scripts, [advanced
-routing](https://cloud.google.com/vpc/docs/routes) is used to
-assign each VM a subnet (default is `/24` - 254 IPs). Any traffic bound for that
-subnet will be routed directly to the VM by the GCE network fabric. This is in
-addition to the "main" IP address assigned to the VM, which is NAT'ed for
-outbound internet access. A linux bridge (called `cbr0`) is configured to exist
-on that subnet, and is passed to docker's `--bridge` flag.
-
-Docker is started with:
-
-```shell
-DOCKER_OPTS="--bridge=cbr0 --iptables=false --ip-masq=false"
-```
-
-This bridge is created by Kubelet (controlled by the `--network-plugin=kubenet`
-flag) according to the `Node`'s `.spec.podCIDR`.
-
-Docker will now allocate IPs from the `cbr-cidr` block. Containers can reach
-each other and `Nodes` over the `cbr0` bridge. Those IPs are all routable
-within the GCE project network.
-
-GCE itself does not know anything about these IPs, though, so it will not NAT
-them for outbound internet traffic. To achieve that an iptables rule is used
-to masquerade (aka SNAT - to make it seem as if packets came from the `Node`
-itself) traffic that is bound for IPs outside the GCE project network
-(10.0.0.0/8).
-
-```shell
-iptables -t nat -A POSTROUTING ! -d 10.0.0.0/8 -o eth0 -j MASQUERADE
-```
-
-Lastly IP forwarding is enabled in the kernel (so the kernel will process
-packets for bridged containers):
-
-```shell
-sysctl net.ipv4.ip_forward=1
-```
-
-The result of all this is that all `Pods` can reach each other and can egress
-traffic to the internet.
-
 ### Jaguar
 
 [Jaguar](https://gitlab.com/sdnlab/jaguar) is an open source solution for Kubernetes's network based on OpenDaylight. Jaguar provides overlay network using vxlan and Jaguar CNIPlugin provides one IP address per pod.

content/en/docs/concepts/configuration/manage-resources-containers.md

Lines changed: 2 additions & 2 deletions
@@ -116,11 +116,11 @@ CPU is always requested as an absolute quantity, never as a relative quantity;
 
 Limits and requests for `memory` are measured in bytes. You can express memory as
 a plain integer or as a fixed-point number using one of these suffixes:
-E, P, T, G, M, k. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi,
+E, P, T, G, M, k, m (millis). You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi,
 Mi, Ki. For example, the following represent roughly the same value:
 
 ```shell
-128974848, 129e6, 129M, 123Mi
+128974848, 129e6, 129M, 128974848000m, 123Mi
 ```
 
 Here's an example.
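
As a sketch of how these memory quantities appear in a container spec (the pod name, image, and specific values are illustrative, not taken from the commit):

```yaml
# Illustrative Pod: 123Mi (binary suffix) and 129M (decimal suffix) are
# roughly the same amount of memory, per the example values in the diff above.
apiVersion: v1
kind: Pod
metadata:
  name: memory-units-demo      # hypothetical name
spec:
  containers:
  - name: app
    image: nginx               # placeholder image for the sketch
    resources:
      requests:
        memory: "123Mi"        # power-of-two suffix
        cpu: "250m"
      limits:
        memory: "129M"         # decimal suffix, roughly equal to 123Mi
        cpu: "500m"
```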

content/en/docs/concepts/services-networking/ingress-controllers.md

Lines changed: 4 additions & 5 deletions
@@ -57,12 +57,11 @@ Kubernetes as a project supports and maintains [AWS](https://github.com/kubernet
 
 ## Using multiple Ingress controllers
 
-You may deploy [any number of ingress controllers](https://git.k8s.io/ingress-nginx/docs/user-guide/multiple-ingress.md#multiple-ingress-controllers)
-within a cluster. When you create an ingress, you should annotate each ingress with the appropriate
-[`ingress.class`](https://git.k8s.io/ingress-gce/docs/faq/README.md#how-do-i-run-multiple-ingress-controllers-in-the-same-cluster)
-to indicate which ingress controller should be used if more than one exists within your cluster.
+You may deploy any number of ingress controllers using [ingress class](/docs/concepts/services-networking/ingress/#ingress-class)
+within a cluster. Note the `.metadata.name` of your ingress class resource. When you create an ingress you would need that name to specify the `ingressClassName` field on your Ingress object (refer to [IngressSpec v1 reference](/docs/reference/kubernetes-api/service-resources/ingress-v1/#IngressSpec). `ingressClassName` is a replacement of the older [annotation method](/docs/concepts/services-networking/ingress/#deprecated-annotation).
 
-If you do not define a class, your cloud provider may use a default ingress controller.
+If you do not specify an IngressClass for an Ingress, and your cluster has exactly one IngressClass marked as default, then Kubernetes [applies](/docs/concepts/services-networking/ingress/#default-ingress-class) the cluster's default IngressClass to the Ingress.
+You mark an IngressClass as default by setting the [`ingressclass.kubernetes.io/is-default-class` annotation](/docs/reference/labels-annotations-taints/#ingressclass-kubernetes-io-is-default-class) on that IngressClass, with the string value `"true"`.
 
 Ideally, all ingress controllers should fulfill this specification, but the various ingress
 controllers operate slightly differently.
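
A minimal sketch of the IngressClass/Ingress relationship the rewritten text describes; the class name, controller string, hostname, and backend service are illustrative assumptions rather than content from this commit:

```yaml
# Illustrative IngressClass marked as the cluster default, plus an Ingress
# that selects it explicitly through spec.ingressClassName.
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: example-nginx                                   # hypothetical class name
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true" # marks this class as the default
spec:
  controller: k8s.io/ingress-nginx                      # controller identifier (assumed)
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: example-nginx    # matches the .metadata.name of the IngressClass
  rules:
  - host: example.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-service    # hypothetical backend Service
            port:
              number: 80
```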
