
Commit cf42bdb

Merge pull request #33230 from nate-double-u/merged-main-dev-1.24
Merged main into dev-1.24
2 parents 95859dd + 712f45d commit cf42bdb

File tree

129 files changed: +4540 −1978 lines


content/en/blog/_posts/2021-04-19-introducing-indexed-jobs.md

Lines changed: 1 addition & 1 deletion

@@ -62,7 +62,7 @@ spec:
 Note that completion mode is an alpha feature in the 1.21 release. To be able to
 use it in your cluster, make sure to enable the `IndexedJob` [feature
 gate](/docs/reference/command-line-tools-reference/feature-gates/) on the
-[API server](docs/reference/command-line-tools-reference/kube-apiserver/) and
+[API server](/docs/reference/command-line-tools-reference/kube-apiserver/) and
 the [controller manager](/docs/reference/command-line-tools-reference/kube-controller-manager/).

 When you run the example, you will see that each of the three created Pods gets a
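
For context (not part of the commit), a minimal sketch of the kind of Indexed Job the post describes, assuming Kubernetes v1.21 with the `IndexedJob` feature gate enabled; the name, image, and command are illustrative:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: indexed-demo        # illustrative name
spec:
  completions: 3
  parallelism: 3
  completionMode: Indexed   # each Pod gets a distinct index from 0 to 2
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox
        # In Indexed mode the index is exposed through the
        # batch.kubernetes.io/job-completion-index annotation and, per the
        # post, a JOB_COMPLETION_INDEX environment variable.
        command: ["sh", "-c", "echo My index is $JOB_COMPLETION_INDEX"]
```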

content/en/blog/_posts/2021-12-16-StatefulSet-PVC-Auto-Deletion.md

Lines changed: 3 additions & 3 deletions

@@ -8,8 +8,8 @@ slug: kubernetes-1-23-statefulset-pvc-auto-deletion
 **Author:** Matthew Cary (Google)

 Kubernetes v1.23 introduced a new, alpha-level policy for
-[StatefulSets](docs/concepts/workloads/controllers/statefulset/) that controls the lifetime of
-[PersistentVolumeClaims](docs/concepts/storage/persistent-volumes/) (PVCs) generated from the
+[StatefulSets](/docs/concepts/workloads/controllers/statefulset/) that controls the lifetime of
+[PersistentVolumeClaims](/docs/concepts/storage/persistent-volumes/) (PVCs) generated from the
 StatefulSet spec template for cases when they should be deleted automatically when the StatefulSet
 is deleted or pods in the StatefulSet are scaled down.

@@ -82,7 +82,7 @@ This policy forms a matrix with four cases. I’ll walk through and give an exam
 new replicas will automatically use them.

 Visit the
-[documentation](docs/concepts/workloads/controllers/statefulset/#persistentvolumeclaim-policies) to
+[documentation](/docs/concepts/workloads/controllers/statefulset/#persistentvolumeclaim-policies) to
 see all the details.

 ## What’s next?
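
As a sketch of how this policy appears in a manifest (not from the commit; field names per the v1.23 alpha API with the `StatefulSetAutoDeletePVC` feature gate enabled — names, image, and sizes are illustrative):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web                  # illustrative name
spec:
  serviceName: web
  replicas: 3
  persistentVolumeClaimRetentionPolicy:
    whenDeleted: Delete      # delete PVCs when the StatefulSet is deleted
    whenScaled: Retain       # keep PVCs from replicas removed by scale-down
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx         # illustrative image
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```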

content/en/blog/_posts/2022-05-03-dockershim-historical-context.md

Lines changed: 1 addition & 1 deletion

@@ -18,7 +18,7 @@ The [Container Runtime Interface](/blog/2016/12/container-runtime-interface-cri-

 However, this little software shim was never intended to be a permanent solution. Over the course of years, its existence has introduced a lot of unnecessary complexity to the kubelet itself. Some integrations are inconsistently implemented for Docker because of this shim, resulting in an increased burden on maintainers, and maintaining vendor-specific code is not in line with our open source philosophy. To reduce this maintenance burden and move towards a more collaborative community in support of open standards, [KEP-2221 was introduced](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/2221-remove-dockershim), proposing the removal of the dockershim. With the release of Kubernetes v1.20, the deprecation was official.

-We didn’t do a great job communicating this, and unfortunately, the deprecation announcement led to some panic within the community. Confusion around what this meant for Docker as a company, if container images built by Docker would still run, and what Docker Engine actually is led to a conflagration on social media. This was our fault; we should have more clearly communicated what was happening and why at the time. To combat this, we released [a blog](/blog/2020/12/02/dont-panic-kubernetes-and-docker/) and [accompanying FAQ](/blog/2020/12/02/dockershim-faq/) to allay the community’s fears and correct some misconceptions about what Docker is and how containers work within Kubernetes. As a result of the community’s concerns, Docker and Mirantis jointly agreed to continue supporting the dockershim code in the form of [cri-dockerd](https://www.mirantis.com/blog/the-future-of-dockershim-is-cri-dockerd/), allowing you to continue using Docker Engine as your container runtime if need be. For the interest of users who want to try other runtimes, like containerd or cri-o, [migration documentation was written](docs/tasks/administer-cluster/migrating-from-dockershim/change-runtime-containerd/).
+We didn’t do a great job communicating this, and unfortunately, the deprecation announcement led to some panic within the community. Confusion around what this meant for Docker as a company, if container images built by Docker would still run, and what Docker Engine actually is led to a conflagration on social media. This was our fault; we should have more clearly communicated what was happening and why at the time. To combat this, we released [a blog](/blog/2020/12/02/dont-panic-kubernetes-and-docker/) and [accompanying FAQ](/blog/2020/12/02/dockershim-faq/) to allay the community’s fears and correct some misconceptions about what Docker is and how containers work within Kubernetes. As a result of the community’s concerns, Docker and Mirantis jointly agreed to continue supporting the dockershim code in the form of [cri-dockerd](https://www.mirantis.com/blog/the-future-of-dockershim-is-cri-dockerd/), allowing you to continue using Docker Engine as your container runtime if need be. For the interest of users who want to try other runtimes, like containerd or cri-o, [migration documentation was written](/docs/tasks/administer-cluster/migrating-from-dockershim/change-runtime-containerd/).

 We later [surveyed the community](https://kubernetes.io/blog/2021/11/12/are-you-ready-for-dockershim-removal/) and [discovered that there are still many users with questions and concerns](/blog/2022/01/07/kubernetes-is-moving-on-from-dockershim). In response, Kubernetes maintainers and the CNCF committed to addressing these concerns by extending documentation and other programs. In fact, this blog post is a part of this program. With so many end users successfully migrated to other runtimes, and improved documentation, we believe that everyone has a paved way to migration now.

content/en/docs/concepts/architecture/nodes.md

Lines changed: 10 additions & 8 deletions

@@ -312,16 +312,18 @@ controller deletes the node from its list of nodes.
 The third is monitoring the nodes' health. The node controller is
 responsible for:

-- In the case that a node becomes unreachable, updating the NodeReady condition
-  of within the Node's `.status`. In this case the node controller sets the
-  NodeReady condition to `ConditionUnknown`.
+- In the case that a node becomes unreachable, updating the `Ready` condition
+  in the Node's `.status` field. In this case the node controller sets the
+  `Ready` condition to `Unknown`.
 - If a node remains unreachable: triggering
   [API-initiated eviction](/docs/concepts/scheduling-eviction/api-eviction/)
   for all of the Pods on the unreachable node. By default, the node controller
-  waits 5 minutes between marking the node as `ConditionUnknown` and submitting
+  waits 5 minutes between marking the node as `Unknown` and submitting
   the first eviction request.

-The node controller checks the state of each node every `--node-monitor-period` seconds.
+By default, the node controller checks the state of each node every 5 seconds.
+This period can be configured using the `--node-monitor-period` flag on the
+`kube-controller-manager` component.

 ### Rate limits on eviction

@@ -331,7 +333,7 @@ from more than 1 node per 10 seconds.

 The node eviction behavior changes when a node in a given availability zone
 becomes unhealthy. The node controller checks what percentage of nodes in the zone
-are unhealthy (NodeReady condition is `ConditionUnknown` or `ConditionFalse`) at
+are unhealthy (the `Ready` condition is `Unknown` or `False`) at
 the same time:

 - If the fraction of unhealthy nodes is at least `--unhealthy-zone-threshold`

@@ -384,7 +386,7 @@ If you want to explicitly reserve resources for non-Pod processes, see

 ## Node topology

-{{< feature-state state="alpha" for_k8s_version="v1.16" >}}
+{{< feature-state state="beta" for_k8s_version="v1.18" >}}

 If you have enabled the `TopologyManager`
 [feature gate](/docs/reference/command-line-tools-reference/feature-gates/), then

@@ -412,7 +414,7 @@ enabled by default in 1.21.

 Note that by default, both configuration options described below,
 `shutdownGracePeriod` and `shutdownGracePeriodCriticalPods` are set to zero,
-thus not activating Graceful node shutdown functionality.
+thus not activating the graceful node shutdown functionality.
 To activate the feature, the two kubelet config settings should be configured appropriately and
 set to non-zero values.
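
To make the last hunk concrete (not from the commit), a hedged sketch of the relevant kubelet configuration fragment using the `KubeletConfiguration` type; the durations are illustrative:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Total time the kubelet delays node shutdown to let pods terminate.
shutdownGracePeriod: "30s"
# Portion of that time reserved for critical pods; with these values,
# regular pods get the first 20s and critical pods the final 10s.
shutdownGracePeriodCriticalPods: "10s"
```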

content/en/docs/concepts/scheduling-eviction/pod-overhead.md

Lines changed: 1 addition & 1 deletion

@@ -108,7 +108,7 @@ Once a Pod is scheduled to a node, the kubelet on that node creates a new {{< gl
 text="cgroup" term_id="cgroup" >}} for the Pod. It is within this pod that the underlying
 container runtime will create containers.

-If the resource has a limit defined for each container (Guaranteed QoS or Bustrable QoS with limits defined),
+If the resource has a limit defined for each container (Guaranteed QoS or Burstable QoS with limits defined),
 the kubelet will set an upper limit for the pod cgroup associated with that resource (cpu.cfs_quota_us for CPU
 and memory.limit_in_bytes memory). This upper limit is based on the sum of the container limits plus the `overhead`
 defined in the PodSpec.
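
A hedged illustration of where that `overhead` comes from (not part of the commit): it is typically injected into the PodSpec from a RuntimeClass; the names and values below are illustrative:

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: sandboxed           # illustrative name
handler: kata               # illustrative runtime handler
overhead:
  podFixed:
    cpu: 250m
    memory: 120Mi
# A Pod using this RuntimeClass whose single container sets limits of
# cpu: 500m / memory: 1Gi would get a pod cgroup capped at roughly
# 750m CPU (cpu.cfs_quota_us) and 1144Mi memory (memory.limit_in_bytes):
# the sum of the container limits plus the overhead.
```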

content/en/docs/concepts/services-networking/ingress.md

Lines changed: 1 addition & 1 deletion

@@ -74,7 +74,7 @@ A minimal Ingress resource example:

 {{< codenew file="service/networking/minimal-ingress.yaml" >}}

-As with all other Kubernetes resources, an Ingress needs `apiVersion`, `kind`, and `metadata` fields.
+An Ingress needs `apiVersion`, `kind`, `metadata` and `spec` fields.
 The name of an Ingress object must be a valid
 [DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
 For general information about working with config files, see [deploying applications](/docs/tasks/run-application/run-stateless-application-deployment/), [configuring containers](/docs/tasks/configure-pod-container/configure-pod-configmap/), [managing resources](/docs/concepts/cluster-administration/manage-deployment/).
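
Since the referenced `minimal-ingress.yaml` is not shown in the diff, here is a hedged sketch of what such a minimal Ingress looks like; the path, Service name, and port are illustrative:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
spec:
  rules:
  - http:
      paths:
      - path: /testpath        # illustrative path
        pathType: Prefix
        backend:
          service:
            name: test         # illustrative Service name
            port:
              number: 80
```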

content/en/docs/concepts/workloads/controllers/daemonset.md

Lines changed: 6 additions & 6 deletions

@@ -76,9 +76,9 @@ A Pod Template in a DaemonSet must have a [`RestartPolicy`](/docs/concepts/workl
 The `.spec.selector` field is a pod selector. It works the same as the `.spec.selector` of
 a [Job](/docs/concepts/workloads/controllers/job/).

-As of Kubernetes 1.8, you must specify a pod selector that matches the labels of the
-`.spec.template`. The pod selector will no longer be defaulted when left empty. Selector
-defaulting was not compatible with `kubectl apply`. Also, once a DaemonSet is created,
+You must specify a pod selector that matches the labels of the
+`.spec.template`.
+Also, once a DaemonSet is created,
 its `.spec.selector` can not be mutated. Mutating the pod selector can lead to the
 unintentional orphaning of Pods, and it was found to be confusing to users.

@@ -91,8 +91,8 @@ The `.spec.selector` is an object consisting of two fields:

 When the two are specified the result is ANDed.

-If the `.spec.selector` is specified, it must match the `.spec.template.metadata.labels`.
-Config with these not matching will be rejected by the API.
+The `.spec.selector` must match the `.spec.template.metadata.labels`.
+Config with these two not matching will be rejected by the API.

 ### Running Pods on select Nodes

@@ -107,7 +107,7 @@ If you do not specify either, then the DaemonSet controller will create Pods on

 ### Scheduled by default scheduler

-{{< feature-state for_kubernetes_version="1.17" state="stable" >}}
+{{< feature-state for_k8s_version="1.17" state="stable" >}}

 A DaemonSet ensures that all eligible nodes run a copy of a Pod. Normally, the
 node that a Pod runs on is selected by the Kubernetes scheduler. However,
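
To illustrate the selector rule above (not from the commit), a hedged DaemonSet fragment in which `.spec.selector` matches `.spec.template.metadata.labels`; the names and image are illustrative:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent            # illustrative name
spec:
  selector:
    matchLabels:
      name: node-agent        # must match the template labels below
  template:
    metadata:
      labels:
        name: node-agent      # a mismatch here is rejected by the API
    spec:
      containers:
      - name: agent
        image: example.com/node-agent:1.0   # illustrative image
```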

content/en/docs/contribute/style/diagram-guide.md

Lines changed: 1 addition & 1 deletion

@@ -8,7 +8,7 @@ weight: 15
 <!--Overview-->

 This guide shows you how to create, edit and share diagrams using the Mermaid
-Javascript library. Mermaid.js allows you to generate diagrams using a simple
+JavaScript library. Mermaid.js allows you to generate diagrams using a simple
 markdown-like syntax inside Markdown files. You can also use Mermaid to
 generate `.svg` or `.png` image files that you can add to your documentation.

content/en/docs/reference/access-authn-authz/abac.md

Lines changed: 2 additions & 2 deletions

@@ -33,13 +33,13 @@ properties:
 - `group`, type string; if you specify `group`, it must match one of the groups of the authenticated user. `system:authenticated` matches all authenticated requests. `system:unauthenticated` matches all unauthenticated requests.
 - Resource-matching properties:
   - `apiGroup`, type string; an API group.
-    - Ex: `extensions`
+    - Ex: `apps`, `networking.k8s.io`
     - Wildcard: `*` matches all API groups.
   - `namespace`, type string; a namespace.
     - Ex: `kube-system`
     - Wildcard: `*` matches all resource requests.
   - `resource`, type string; a resource type
-    - Ex: `pods`
+    - Ex: `pods`, `deployments`
     - Wildcard: `*` matches all resource requests.
 - Non-resource-matching properties:
   - `nonResourcePath`, type string; non-resource request paths.
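
For orientation (not from the commit): ABAC policy files are written as one JSON object per line, so this example is JSON rather than YAML. A hedged sketch using the resource-matching properties above, with an illustrative user and namespace:

```json
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "bob", "namespace": "projectCaribou", "resource": "pods", "apiGroup": "", "readonly": true}}
```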

content/en/docs/reference/access-authn-authz/rbac.md

Lines changed: 5 additions & 14 deletions

@@ -384,11 +384,11 @@ rules:
 ```

 Allow reading/writing Deployments (at the HTTP level: objects with `"deployments"`
-in the resource part of their URL) in both the `"extensions"` and `"apps"` API groups:
+in the resource part of their URL) in the `"apps"` API group:

 ```yaml
 rules:
-- apiGroups: ["extensions", "apps"]
+- apiGroups: ["apps"]
   #
   # at the HTTP level, the name of the resource for accessing Deployment
   # objects is "deployments"

@@ -397,7 +397,7 @@ rules:
 ```

 Allow reading Pods in the core API group, as well as reading or writing Job
-resources in the `"batch"` or `"extensions"` API groups:
+resources in the `"batch"` API group:

 ```yaml
 rules:

@@ -407,7 +407,7 @@ rules:
   # objects is "pods"
   resources: ["pods"]
   verbs: ["get", "list", "watch"]
-- apiGroups: ["batch", "extensions"]
+- apiGroups: ["batch"]
   #
   # at the HTTP level, the name of the resource for accessing Job
   # objects is "jobs"

@@ -517,23 +517,14 @@ subjects:
   namespace: kube-system
 ```

-For all service accounts in the "qa" group in any namespace:
+For all service accounts in the "qa" namespace:

 ```yaml
 subjects:
 - kind: Group
   name: system:serviceaccounts:qa
   apiGroup: rbac.authorization.k8s.io
 ```
-For all service accounts in the "dev" group in the "development" namespace:
-
-```yaml
-subjects:
-- kind: Group
-  name: system:serviceaccounts:dev
-  apiGroup: rbac.authorization.k8s.io
-  namespace: development
-```

 For all service accounts in any namespace:
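
As a usage sketch (not part of the commit): a hedged example of binding the `system:serviceaccounts:qa` group shown above to the built-in `view` ClusterRole; the binding name is illustrative:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: read-only-qa                  # illustrative name
subjects:
- kind: Group
  name: system:serviceaccounts:qa    # every service account in the "qa" namespace
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view                         # built-in read-only ClusterRole
  apiGroup: rbac.authorization.k8s.io
```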
