Commit b284960 (2 parents: 918877e + 0568df9)

Merge pull request #46964 from chanieljdan/merged-main-dev-1.31
Merged main dev 1.31

65 files changed: +20923 additions, -254 deletions

OWNERS_ALIASES

Lines changed: 2 additions & 0 deletions
@@ -63,6 +63,7 @@ aliases:
     - salaxander
     - sftim
     - tengqm
+    - Princesso # RT 1.31 Docs Lead
   sig-docs-en-reviews: # PR reviews for English content
     - dipesh-rawat
     - divya-mohan0209
@@ -78,6 +79,7 @@ aliases:
     - shannonxtreme
     - tengqm
     - windsonsea
+    - Princesso # RT 1.31 Docs Lead
   sig-docs-es-owners: # Admins for Spanish content
     - electrocucaracha
     - krol3

content/en/blog/_posts/2023-10-12-bootstrap-an-air-gapped-cluster-with-kubeadm/index.md

Lines changed: 1 addition & 1 deletion
@@ -658,7 +658,7 @@ From the air gapped VM, switch into the ~/tmp directory where all of the artifac
 ```bash
 cd ~/tmp
 ```
-Set `$KUBECONFIG` to a file with credentials for the local cluster; also set the the Zarf version:
+Set `$KUBECONFIG` to a file with credentials for the local cluster; also set the Zarf version:
 ```bash
 export KUBECONFIG=/etc/kubernetes/admin.conf
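The step in the hunk above sets two shell variables; a runnable sketch is below. The kubeconfig path matches the post, while the Zarf version value here is an illustrative assumption, not the one the blog post pins:

```bash
# Sketch of the step above. The kubeconfig path comes from the post;
# the ZARF_VERSION value is a hypothetical pin — use the version you mirrored.
export KUBECONFIG=/etc/kubernetes/admin.conf
export ZARF_VERSION=v0.32.6
echo "Zarf ${ZARF_VERSION}, kubeconfig ${KUBECONFIG}"
```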

content/en/docs/concepts/services-networking/topology-aware-routing.md

Lines changed: 1 addition & 1 deletion
@@ -199,7 +199,7 @@ yet cover some relevant and plausible situations.
 
 * Follow the [Connecting Applications with Services](/docs/tutorials/services/connect-applications-service/) tutorial
 * Learn about the
-  [trafficDistribution](/docs/concepts/services-networking/service/#trafic-distribution)
+  [trafficDistribution](/docs/concepts/services-networking/service/#traffic-distribution)
   field, which is closely related to the `service.kubernetes.io/topology-mode`
   annotation and provides flexible options for traffic routing within
   Kubernetes.
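As a hedged sketch of how the `trafficDistribution` field mentioned in this hunk is set: the Service name `my-service` is a placeholder, and the live `kubectl` call is commented out because it needs cluster access; only the patch payload is validated here.

```bash
# Build a merge patch that sets the trafficDistribution field discussed above.
# "my-service" is a hypothetical Service name.
PATCH='{"spec": {"trafficDistribution": "PreferClose"}}'
echo "$PATCH" | python3 -m json.tool > /dev/null && echo "patch is valid JSON"
# Apply against a real cluster:
# kubectl patch service my-service --type merge -p "$PATCH"
```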

content/en/docs/concepts/storage/storage-classes.md

Lines changed: 1 addition & 1 deletion
@@ -54,7 +54,7 @@ When a PVC does not specify a `storageClassName`, the default StorageClass is
 used.
 
 If you set the
-[`storageclass.kubernetes.io/is-default-class`](/docs/reference/labels-annotations-taints/#ingressclass-kubernetes-io-is-default-class)
+[`storageclass.kubernetes.io/is-default-class`](/docs/reference/labels-annotations-taints/#storageclass-kubernetes-io-is-default-class)
 annotation to true on more than one StorageClass in your cluster, and you then
 create a PersistentVolumeClaim with no `storageClassName` set, Kubernetes
 uses the most recently created default StorageClass.
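The annotation this hunk corrects is typically set with `kubectl patch`; a sketch follows. The StorageClass name `standard` is a placeholder, and the live command is commented out because it needs a cluster; the patch payload itself is validated locally.

```bash
# Patch that marks a StorageClass as the cluster default, using the
# annotation named above. "standard" is a hypothetical StorageClass name.
PATCH='{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
echo "$PATCH" | python3 -m json.tool > /dev/null && echo "patch is valid JSON"
# kubectl patch storageclass standard -p "$PATCH"
```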

content/en/docs/concepts/workloads/pods/pod-lifecycle.md

Lines changed: 41 additions & 43 deletions
@@ -511,7 +511,7 @@ processes, and the Pod is then deleted from the
 container runtime's management service is restarted while waiting for processes to terminate, the
 cluster retries from the start including the full original grace period.
 
-An example flow:
+Pod termination flow, illustrated with an example:
 
 1. You use the `kubectl` tool to manually delete a specific Pod, with the default grace period
    (30 seconds).
@@ -530,18 +530,19 @@ An example flow:
 
    If the `preStop` hook is still running after the grace period expires, the kubelet requests
    a small, one-off grace period extension of 2 seconds.
-
-   {{< note >}}
-   If the `preStop` hook needs longer to complete than the default grace period allows,
-   you must modify `terminationGracePeriodSeconds` to suit this.
-   {{< /note >}}
+   {{% note %}}
+   If the `preStop` hook needs longer to complete than the default grace period allows,
+   you must modify `terminationGracePeriodSeconds` to suit this.
+   {{% /note %}}
 
 1. The kubelet triggers the container runtime to send a TERM signal to process 1 inside each
    container.
-   {{< note >}}
-   The containers in the Pod receive the TERM signal at different times and in an arbitrary
-   order. If the order of shutdowns matters, consider using a `preStop` hook to synchronize.
-   {{< /note >}}
+
+   There is [special ordering](#termination-with-sidecars) if the Pod has any
+   {{< glossary_tooltip text="sidecar containers" term_id="sidecar-container" >}} defined.
+   Otherwise, the containers in the Pod receive the TERM signal at different times and in
+   an arbitrary order. If the order of shutdowns matters, consider using a `preStop` hook
+   to synchronize (or switch to using sidecar containers).
 
 1. At the same time as the kubelet is starting graceful shutdown of the Pod, the control plane
    evaluates whether to remove that shutting-down Pod from EndpointSlice (and Endpoints) objects,
@@ -565,38 +566,19 @@ An example flow:
    condition `serving`. You can find more details on how to implement connections draining in the
    tutorial [Pods And Endpoints Termination Flow](/docs/tutorials/services/pods-and-endpoint-termination-flow/)
 
-   {{<note>}}
-   If you don't have the `EndpointSliceTerminatingCondition` feature gate enabled
-   in your cluster (the gate is on by default from Kubernetes 1.22, and locked to default in 1.26),
-   then the Kubernetes control plane removes a Pod from any relevant EndpointSlices as soon as the Pod's
-   termination grace period _begins_. The behavior above is described when the
-   feature gate `EndpointSliceTerminatingCondition` is enabled.
-   {{</note>}}
-
-   {{<note>}}
-   Beginning with Kubernetes 1.29, if your Pod includes one or more sidecar containers
-   (init containers with an Always restart policy), the kubelet will delay sending
-   the TERM signal to these sidecar containers until the last main container has fully terminated.
-   The sidecar containers will be terminated in the reverse order they are defined in the Pod spec.
-   This ensures that sidecar containers continue serving the other containers in the Pod until they are no longer needed.
-
-   Note that slow termination of a main container will also delay the termination of the sidecar containers.
-   If the grace period expires before the termination process is complete, the Pod may enter emergency termination.
-   In this case, all remaining containers in the Pod will be terminated simultaneously with a short grace period.
+   <a id="pod-termination-beyond-grace-period" />
 
-   Similarly, if the Pod has a preStop hook that exceeds the termination grace period, emergency termination may occur.
-   In general, if you have used preStop hooks to control the termination order without sidecar containers, you can now
-   remove them and allow the kubelet to manage sidecar termination automatically.
-   {{</note>}}
+1. The kubelet ensures the Pod is shut down and terminated
+   1. When the grace period expires, if there is still any container running in the Pod, the
+      kubelet triggers forcible shutdown.
+      The container runtime sends `SIGKILL` to any processes still running in any container in the Pod.
+      The kubelet also cleans up a hidden `pause` container if that container runtime uses one.
+   1. The kubelet transitions the Pod into a terminal phase (`Failed` or `Succeeded` depending on
+      the end state of its containers).
+   1. The kubelet triggers forcible removal of the Pod object from the API server, by setting grace period
+      to 0 (immediate deletion).
+   1. The API server deletes the Pod's API object, which is then no longer visible from any client.
 
-1. When the grace period expires, the kubelet triggers forcible shutdown. The container runtime sends
-   `SIGKILL` to any processes still running in any container in the Pod.
-   The kubelet also cleans up a hidden `pause` container if that container runtime uses one.
-1. The kubelet transitions the Pod into a terminal phase (`Failed` or `Succeeded` depending on
-   the end state of its containers). This step is guaranteed since version 1.27.
-1. The kubelet triggers forcible removal of Pod object from the API server, by setting grace period
-   to 0 (immediate deletion).
-1. The API server deletes the Pod's API object, which is then no longer visible from any client.
 
 ### Forced Pod termination {#pod-termination-forced}
 
@@ -612,10 +594,8 @@ Setting the grace period to `0` forcibly and immediately deletes the Pod from th
 server. If the Pod was still running on a node, that forcible deletion triggers the kubelet to
 begin immediate cleanup.
 
-{{< note >}}
-You must specify an additional flag `--force` along with `--grace-period=0`
+Using kubectl, You must specify an additional flag `--force` along with `--grace-period=0`
 in order to perform force deletions.
-{{< /note >}}
 
 When a force deletion is performed, the API server does not wait for confirmation
 from the kubelet that the Pod has been terminated on the node it was running on. It
@@ -632,6 +612,24 @@ If you need to force-delete Pods that are part of a StatefulSet, refer to the ta
 documentation for
 [deleting Pods from a StatefulSet](/docs/tasks/run-application/force-delete-stateful-set-pod/).
 
+### Pod shutdown and sidecar containers {#termination-with-sidecars}
+
+If your Pod includes one or more
+[sidecar containers](/docs/concepts/workloads/pods/sidecar-containers/)
+(init containers with an Always restart policy), the kubelet will delay sending
+the TERM signal to these sidecar containers until the last main container has fully terminated.
+The sidecar containers will be terminated in the reverse order they are defined in the Pod spec.
+This ensures that sidecar containers continue serving the other containers in the Pod until they
+are no longer needed.
+
+This means that slow termination of a main container will also delay the termination of the sidecar containers.
+If the grace period expires before the termination process is complete, the Pod may enter [forced termination](#pod-termination-beyond-grace-period).
+In this case, all remaining containers in the Pod will be terminated simultaneously with a short grace period.
+
+Similarly, if the Pod has a `preStop` hook that exceeds the termination grace period, emergency termination may occur.
+In general, if you have used `preStop` hooks to control the termination order without sidecar containers, you can now
+remove them and allow the kubelet to manage sidecar termination automatically.
+
 ### Garbage collection of Pods {#pod-garbage-collection}
 
 For failed Pods, the API objects remain in the cluster's API until a human or
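The forced termination this diff documents corresponds to an explicit `kubectl` invocation combining both flags; a sketch is below. The Pod name is a placeholder, and the command is only echoed rather than run, since deleting a Pod requires cluster access.

```bash
# Force deletion as the section above describes: --grace-period=0 alone is
# rejected by kubectl; --force must accompany it. "test-pod" is hypothetical.
POD=test-pod
CMD="kubectl delete pod $POD --grace-period=0 --force"
echo "$CMD"
# eval "$CMD"   # run only against a real cluster, with care
```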

content/en/docs/tasks/administer-cluster/configure-upgrade-etcd.md

Lines changed: 21 additions & 21 deletions
@@ -385,6 +385,20 @@ for information on how to add members into an existing cluster.
 
 ## Restoring an etcd cluster
 
+{{< caution >}}
+If any API servers are running in your cluster, you should not attempt to
+restore instances of etcd. Instead, follow these steps to restore etcd:
+
+- stop *all* API server instances
+- restore state in all etcd instances
+- restart all API server instances
+
+The Kubernetes project also recommends restarting Kubernetes components (`kube-scheduler`,
+`kube-controller-manager`, `kubelet`) to ensure that they don't rely on some
+stale data. In practice the restore takes a bit of time. During the
+restoration, critical components will lose leader lock and restart themselves.
+{{< /caution >}}
+
 etcd supports restoring from snapshots that are taken from an etcd process of
 the [major.minor](http://semver.org/) version. Restoring a version from a
 different patch version of etcd is also supported. A restore operation is
@@ -443,42 +457,28 @@ current state. Although the scheduled pods might continue to run, no new pods
 can be scheduled. In such cases, recover the etcd cluster and potentially
 reconfigure Kubernetes API servers to fix the issue.
 
-{{< note >}}
-If any API servers are running in your cluster, you should not attempt to
-restore instances of etcd. Instead, follow these steps to restore etcd:
-
-- stop *all* API server instances
-- restore state in all etcd instances
-- restart all API server instances
-
-We also recommend restarting any components (e.g. `kube-scheduler`,
-`kube-controller-manager`, `kubelet`) to ensure that they don't rely on some
-stale data. Note that in practice, the restore takes a bit of time. During the
-restoration, critical components will lose leader lock and restart themselves.
-{{< /note >}}
 
 ## Upgrading etcd clusters
 
+{{< caution >}}
+Before you start an upgrade, back up your etcd cluster first.
+{{< /caution >}}
 
-For more details on etcd upgrade, please refer to the [etcd upgrades](https://etcd.io/docs/latest/upgrades/) documentation.
-
-{{< note >}}
-Before you start an upgrade, please back up your etcd cluster first.
-{{< /note >}}
+For details on etcd upgrade, refer to the [etcd upgrades](https://etcd.io/docs/latest/upgrades/) documentation.
 
 ## Maintaining etcd clusters
 
 For more details on etcd maintenance, please refer to the [etcd maintenance](https://etcd.io/docs/latest/op-guide/maintenance/) documentation.
 
+### Cluster defragmentation
+
 {{% thirdparty-content single="true" %}}
 
-{{< note >}}
 Defragmentation is an expensive operation, so it should be executed as infrequently
 as possible. On the other hand, it's also necessary to make sure any etcd member
 will not exceed the storage quota. The Kubernetes project recommends that when
 you perform defragmentation, you use a tool such as [etcd-defrag](https://github.com/ahrtr/etcd-defrag).
 
 You can also run the defragmentation tool as a Kubernetes CronJob, to make sure that
 defragmentation happens regularly. See [`etcd-defrag-cronjob.yaml`](https://github.com/ahrtr/etcd-defrag/blob/main/doc/etcd-defrag-cronjob.yaml)
-for details.
-{{< /note >}}
+for details.
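The back-up-then-restore workflow this file documents can be sketched as a pair of commands. The endpoint, certificate paths, and directories below are placeholders, and the live commands are commented out because they need a running etcd member and its credentials; only the variable setup executes here.

```bash
# Placeholder paths for the snapshot workflow described above.
SNAPSHOT=/backup/snapshot.db
RESTORE_DIR=/var/lib/etcd-from-backup

# 1. Back up (before an upgrade, or routinely):
# ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
#   --cacert=/etc/kubernetes/pki/etcd/ca.crt \
#   --cert=/etc/kubernetes/pki/etcd/server.crt \
#   --key=/etc/kubernetes/pki/etcd/server.key \
#   snapshot save "$SNAPSHOT"

# 2. To restore: stop *all* API server instances, then on each etcd member:
# etcdutl snapshot restore "$SNAPSHOT" --data-dir "$RESTORE_DIR"

# 3. Restart etcd members pointing at the restored data dir, then restart the
#    API servers (and kube-scheduler, kube-controller-manager, kubelet).
echo "snapshot: $SNAPSHOT -> restore dir: $RESTORE_DIR"
```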

content/en/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning.md

Lines changed: 1 addition & 1 deletion
@@ -873,7 +873,7 @@ with the `response` stanza populated, serialized to JSON.
 If conversion succeeds, a webhook should return a `response` stanza containing the following fields:
 * `uid`, copied from the `request.uid` sent to the webhook
 * `result`, set to `{"status":"Success"}`
-* `convertedObjects`, containing all of the objects from `request.objects`, converted to `request.desiredVersion`
+* `convertedObjects`, containing all of the objects from `request.objects`, converted to `request.desiredAPIVersion`
 
 Example of a minimal successful response from a webhook:
content/es/docs/concepts/architecture/leases.md

Lines changed: 1 addition & 1 deletion
@@ -24,7 +24,7 @@ Looking at it in detail, each heartbeat is an **update** request to this `L
 the `spec.renewTime` field of the Lease object. The Kubernetes control plane uses this field's timestamp
 to determine the availability of this "Node".
 
-See [Node Lease objects](/docs/concepts/architecture/nodes/#heartbeats) for more details.
+See [Node Lease objects](/docs/concepts/architecture/nodes/#node-heartbeats) for more details.
 
 ## Leader election
