Commit ad150d9

Merge pull request #31758 from nate-double-u/merged-main-dev-1.24
Merged main into dev 1.24
2 parents c3c3b1a + 8b9e77d commit ad150d9

File tree: 82 files changed (+4931, −1701 lines)

assets/scss/_custom.scss

Lines changed: 5 additions & 0 deletions
```diff
@@ -402,6 +402,11 @@ body {
   color: #000;
 }
 
+.deprecation-warning.outdated-blog, .pageinfo.deprecation-warning.outdated-blog {
+  background-color: $blue;
+  color: $white;
+}
+
 body.td-home .deprecation-warning, body.td-blog .deprecation-warning, body.td-documentation .deprecation-warning {
   border-radius: 3px;
 }
```

content/en/docs/concepts/cluster-administration/logging.md

Lines changed: 3 additions & 1 deletion
```diff
@@ -12,7 +12,9 @@ weight: 60
 Application logs can help you understand what is happening inside your application. The logs are particularly useful for debugging problems and monitoring cluster activity. Most modern applications have some kind of logging mechanism. Likewise, container engines are designed to support logging. The easiest and most adopted logging method for containerized applications is writing to standard output and standard error streams.
 
 However, the native functionality provided by a container engine or runtime is usually not enough for a complete logging solution.
+
 For example, you may want to access your application's logs if a container crashes, a pod gets evicted, or a node dies.
+
 In a cluster, logs should have a separate storage and lifecycle independent of nodes, pods, or containers. This concept is called _cluster-level logging_.
 
 <!-- body -->
@@ -141,7 +143,7 @@ as a `DaemonSet`.
 
 Node-level logging creates only one agent per node and doesn't require any changes to the applications running on the node.
 
-Containers write stdout and stderr, but with no agreed format. A node-level agent collects these logs and forwards them for aggregation.
+Containers write to stdout and stderr, but with no agreed format. A node-level agent collects these logs and forwards them for aggregation.
 
 ### Using a sidecar container with the logging agent {#sidecar-container-with-logging-agent}
 
```

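The cluster-level logging pattern this hunk touches on, a container writing to stdout for a node-level agent to collect, looks roughly like the pod below. This is a minimal sketch: the image and the echo loop are illustrative, not part of this commit.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: busybox
    # Writes one line per second to stdout; the container runtime
    # captures the stream, and a node-level agent (often deployed as
    # a DaemonSet) can collect and forward it for aggregation.
    args: [/bin/sh, -c, 'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done']
```
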
content/en/docs/concepts/configuration/manage-resources-containers.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -801,6 +801,6 @@ memory limit (and possibly request) for that container.
 * Get hands-on experience [assigning CPU resources to containers and Pods](/docs/tasks/configure-pod-container/assign-cpu-resource/).
 * Read how the API reference defines a [container](/docs/reference/kubernetes-api/workload-resources/pod-v1/#Container)
   and its [resource requirements](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#resources)
-* Read about [project quotas](https://xfs.org/docs/xfsdocs-xml-dev/XFS_User_Guide/tmp/en-US/html/xfs-quotas.html) in XFS
+* Read about [project quotas](https://xfs.org/index.php/XFS_FAQ#Q:_Quota:_Do_quotas_work_on_XFS.3F) in XFS
 * Read more about the [kube-scheduler configuration reference (v1beta3)](/docs/reference/config-api/kube-scheduler-config.v1beta3/)
 
```

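The hunk lands in this page's reading list, just after its discussion of memory limits and requests. For orientation, that configuration has the following general shape; this is an illustrative sketch, not content from the commit.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:        # what the scheduler reserves when placing the Pod
        memory: "64Mi"
        cpu: "250m"
      limits:          # what the kubelet and runtime enforce at runtime
        memory: "128Mi"
        cpu: "500m"
```
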
content/en/docs/concepts/containers/container-lifecycle-hooks.md

Lines changed: 13 additions & 13 deletions
````diff
@@ -105,22 +105,22 @@ The logs for a Hook handler are not exposed in Pod events.
 If a handler fails for some reason, it broadcasts an event.
 For `PostStart`, this is the `FailedPostStartHook` event,
 and for `PreStop`, this is the `FailedPreStopHook` event.
-You can see these events by running `kubectl describe pod <pod_name>`.
-Here is some example output of events from running this command:
+To generate a failed `FailedPostStartHook` event yourself, modify the [lifecycle-events.yaml](https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/pods/lifecycle-events.yaml) file to change the postStart command to "badcommand" and apply it.
+Here is some example output of the resulting events you see from running `kubectl describe pod lifecycle-demo`:
 
 ```
 Events:
-  FirstSeen  LastSeen  Count  From                                                   SubObjectPath          Type     Reason               Message
-  ---------  --------  -----  ----                                                   -------------          ----     ------               -------
-  1m         1m        1      {default-scheduler }                                                          Normal   Scheduled            Successfully assigned test-1730497541-cq1d2 to gke-test-cluster-default-pool-a07e5d30-siqd
-  1m         1m        1      {kubelet gke-test-cluster-default-pool-a07e5d30-siqd}  spec.containers{main}  Normal   Pulling              pulling image "test:1.0"
-  1m         1m        1      {kubelet gke-test-cluster-default-pool-a07e5d30-siqd}  spec.containers{main}  Normal   Created              Created container with docker id 5c6a256a2567; Security:[seccomp=unconfined]
-  1m         1m        1      {kubelet gke-test-cluster-default-pool-a07e5d30-siqd}  spec.containers{main}  Normal   Pulled               Successfully pulled image "test:1.0"
-  1m         1m        1      {kubelet gke-test-cluster-default-pool-a07e5d30-siqd}  spec.containers{main}  Normal   Started              Started container with docker id 5c6a256a2567
-  38s        38s       1      {kubelet gke-test-cluster-default-pool-a07e5d30-siqd}  spec.containers{main}  Normal   Killing              Killing container with docker id 5c6a256a2567: PostStart handler: Error executing in Docker Container: 1
-  37s        37s       1      {kubelet gke-test-cluster-default-pool-a07e5d30-siqd}  spec.containers{main}  Normal   Killing              Killing container with docker id 8df9fdfd7054: PostStart handler: Error executing in Docker Container: 1
-  38s        37s       2      {kubelet gke-test-cluster-default-pool-a07e5d30-siqd}                         Warning  FailedSync           Error syncing pod, skipping: failed to "StartContainer" for "main" with RunContainerError: "PostStart handler: Error executing in Docker Container: 1"
-  1m         22s       2      {kubelet gke-test-cluster-default-pool-a07e5d30-siqd}  spec.containers{main}  Warning  FailedPostStartHook
+  Type     Reason               Age              From               Message
+  ----     ------               ----             ----               -------
+  Normal   Scheduled            7s               default-scheduler  Successfully assigned default/lifecycle-demo to ip-XXX-XXX-XX-XX.us-east-2...
+  Normal   Pulled               6s               kubelet            Successfully pulled image "nginx" in 229.604315ms
+  Normal   Pulling              4s (x2 over 6s)  kubelet            Pulling image "nginx"
+  Normal   Created              4s (x2 over 5s)  kubelet            Created container lifecycle-demo-container
+  Normal   Started              4s (x2 over 5s)  kubelet            Started container lifecycle-demo-container
+  Warning  FailedPostStartHook  4s (x2 over 5s)  kubelet            Exec lifecycle hook ([badcommand]) for Container "lifecycle-demo-container" in Pod "lifecycle-demo_default(30229739-9651-4e5a-9a32-a8f1688862db)" failed - error: command 'badcommand' exited with 126: , message: "OCI runtime exec failed: exec failed: container_linux.go:380: starting container process caused: exec: \"badcommand\": executable file not found in $PATH: unknown\r\n"
+  Normal   Killing              4s (x2 over 5s)  kubelet            FailedPostStartHook
+  Normal   Pulled               4s               kubelet            Successfully pulled image "nginx" in 215.66395ms
+  Warning  BackOff              2s (x2 over 3s)  kubelet            Back-off restarting failed container
 ```
 
 
````

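For reference, the modified manifest the new text describes would look roughly like this. It is a sketch based on the linked lifecycle-events.yaml with only the postStart command swapped; the preStop handler is assumed unchanged.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo
spec:
  containers:
  - name: lifecycle-demo-container
    image: nginx
    lifecycle:
      postStart:
        exec:
          # Deliberately broken: "badcommand" is not on the image's PATH,
          # so the kubelet reports FailedPostStartHook and kills the container.
          command: ["badcommand"]
      preStop:
        exec:
          command: ["/bin/sh", "-c", "nginx -s quit; while killall -0 nginx; do sleep 1; done"]
```

Applying it with `kubectl apply -f lifecycle-events.yaml` and then running `kubectl describe pod lifecycle-demo` should yield events like those in the new example output.
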
content/en/docs/concepts/overview/working-with-objects/finalizers.md

Lines changed: 13 additions & 8 deletions
```diff
@@ -21,18 +21,21 @@ your own.
 
 When you create a resource using a manifest file, you can specify finalizers in
 the `metadata.finalizers` field. When you attempt to delete the resource, the
-controller that manages it notices the values in the `finalizers` field and does
-the following:
+API server handling the delete request notices the values in the `finalizers` field
+and does the following:
 
 * Modifies the object to add a `metadata.deletionTimestamp` field with the
   time you started the deletion.
-* Marks the object as read-only until its `metadata.finalizers` field is empty.
+* Prevents the object from being removed until its `metadata.finalizers` field is empty.
+* Returns a `202` status code (HTTP "Accepted")
 
+The controller managing that finalizer notices the update to the object setting the
+`metadata.deletionTimestamp`, indicating deletion of the object has been requested.
 The controller then attempts to satisfy the requirements of the finalizers
 specified for that resource. Each time a finalizer condition is satisfied, the
 controller removes that key from the resource's `finalizers` field. When the
-field is empty, garbage collection continues. You can also use finalizers to
-prevent deletion of unmanaged resources.
+`finalizers` field is emptied, an object with a `deletionTimestamp` field set
+is automatically deleted. You can also use finalizers to prevent deletion of unmanaged resources.
 
 A common example of a finalizer is `kubernetes.io/pv-protection`, which prevents
 accidental deletion of `PersistentVolume` objects. When a `PersistentVolume`
@@ -63,16 +66,18 @@ Kubernetes also processes finalizers when it identifies owner references on a
 resource targeted for deletion.
 
 In some situations, finalizers can block the deletion of dependent objects,
-which can cause the targeted owner object to remain in a read-only state for
+which can cause the targeted owner object to remain for
 longer than expected without being fully deleted. In these situations, you
 should check finalizers and owner references on the target owner and dependent
 objects to troubleshoot the cause.
 
 {{<note>}}
-In cases where objects are stuck in a deleting state, try to avoid manually
+In cases where objects are stuck in a deleting state, avoid manually
 removing finalizers to allow deletion to continue. Finalizers are usually added
 to resources for a reason, so forcefully removing them can lead to issues in
-your cluster.
+your cluster. This should only be done when the purpose of the finalizer is
+understood and is accomplished in another way (for example, manually cleaning
+up some dependent object).
 {{</note>}}
 
 ## {{% heading "whatsnext" %}}
```

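To make the deletion flow in the first hunk concrete, here is a sketch of a finalizer in a manifest; the `example.com/cleanup` key is a hypothetical placeholder, not something from this commit.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: finalizer-demo
  finalizers:
  - example.com/cleanup   # hypothetical custom finalizer key
data:
  key: value
```

After `kubectl delete configmap finalizer-demo --wait=false`, the API server only stamps `metadata.deletionTimestamp` and answers 202; `kubectl get configmap finalizer-demo -o yaml` keeps returning the object until the finalizer entry is removed, at which point it is deleted automatically.
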
content/en/docs/concepts/services-networking/ingress-controllers.md

Lines changed: 1 addition & 0 deletions
```diff
@@ -48,6 +48,7 @@ Kubernetes as a project supports and maintains [AWS](https://github.com/kubernet
   is an ingress controller driving [Kong Gateway](https://konghq.com/kong/).
 * The [NGINX Ingress Controller for Kubernetes](https://www.nginx.com/products/nginx-ingress-controller/)
   works with the [NGINX](https://www.nginx.com/resources/glossary/nginx/) webserver (as a proxy).
+* The [Pomerium Ingress Controller](https://www.pomerium.com/docs/k8s/ingress.html) is based on [Pomerium](https://pomerium.com/), which offers context-aware access policy.
 * [Skipper](https://opensource.zalando.com/skipper/kubernetes/ingress-controller/) HTTP router and reverse proxy for service composition, including use cases like Kubernetes Ingress, designed as a library to build your custom proxy.
 * The [Traefik Kubernetes Ingress provider](https://doc.traefik.io/traefik/providers/kubernetes-ingress/) is an
   ingress controller for the [Traefik](https://traefik.io/traefik/) proxy.
```

content/en/docs/concepts/services-networking/topology-aware-hints.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -30,7 +30,7 @@ Routing". When calculating the endpoints for a {{< glossary_tooltip term_id="Ser
 the EndpointSlice controller considers the topology (region and zone) of each endpoint
 and populates the hints field to allocate it to a zone.
 Cluster components such as the {{< glossary_tooltip term_id="kube-proxy" text="kube-proxy" >}}
-can then consume those hints, and use them to influence how traffic to is routed
+can then consume those hints, and use them to influence how the traffic is routed
 (favoring topologically closer endpoints).
 
 ## Using Topology Aware Hints
```

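The page this hunk edits covers Services opting in to hinted routing. In the docs of this release window, that opt-in is an annotation, roughly as sketched below; the Service name, selector, and ports are assumed.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-svc
  annotations:
    # Asks the EndpointSlice controller to populate per-zone hints;
    # kube-proxy can then favor endpoints in the client's zone.
    service.kubernetes.io/topology-aware-hints: auto
spec:
  selector:
    app: demo
  ports:
  - port: 80
    targetPort: 8080
```
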
content/en/docs/concepts/workloads/controllers/job.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -308,7 +308,7 @@ cleaned up by CronJobs based on the specified capacity-based cleanup policy.
 
 ### TTL mechanism for finished Jobs
 
-{{< feature-state for_k8s_version="v1.21" state="beta" >}}
+{{< feature-state for_k8s_version="v1.23" state="stable" >}}
 
 Another way to clean up finished Jobs (either `Complete` or `Failed`)
 automatically is to use a TTL mechanism provided by a
```

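The TTL mechanism this hunk marks stable is driven by the Job's `spec.ttlSecondsAfterFinished` field; a minimal sketch (Job name and workload are assumed):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ttl-demo
spec:
  # The TTL-after-finished controller deletes the Job (and its Pods)
  # 100 seconds after it completes or fails.
  ttlSecondsAfterFinished: 100
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
```
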
content/en/docs/contribute/review/reviewing-prs.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -36,7 +36,7 @@ Before you start a review:
 
 ## Review process
 
-In general, review pull requests for content and style in English. The figure below outlines the steps for the review process. The details for each step follow.
+In general, review pull requests for content and style in English. Figure 1 outlines the steps for the review process. The details for each step follow.
 
 <!-- See https://github.com/kubernetes/website/issues/28808 for live-editor URL to this figure -->
 <!-- You can also cut/paste the mermaid code into the live editor at https://mermaid-js.github.io/mermaid-live-editor to play around with it -->
@@ -67,7 +67,7 @@ class S,T spacewhite
 class third,fourth white
 {{</ mermaid >}}
 
-***Figure - Review process steps***
+Figure 1. Review process steps.
 
 1. Go to
    [https://github.com/kubernetes/website/pulls](https://github.com/kubernetes/website/pulls).
```
