Commit f8d1ef6

Author: pranav-pandey0804 (committed)
Commit message: updated button_path
Parents: 5219863 + 991529e

26 files changed: +109 / -90 lines

OWNERS_ALIASES

Lines changed: 3 additions & 2 deletions
@@ -49,14 +49,15 @@ aliases:
  - windsonsea
  sig-docs-es-owners: # Admins for Spanish content
  - 92nqb
- - krol3
  - electrocucaracha
+ - krol3
  - raelga
  - ramrodo
  sig-docs-es-reviews: # PR reviews for Spanish content
  - 92nqb
- - krol3
  - electrocucaracha
+ - jossemarGT
+ - krol3
  - raelga
  - ramrodo
  sig-docs-fr-owners: # Admins for French content

content/en/blog/_posts/2021-11-12-are-you-ready-for-dockershim-removal/index.md

Lines changed: 1 addition & 1 deletion
@@ -64,7 +64,7 @@ time to review the [dockershim migration documentation](/docs/tasks/administer-c
  and consult your Kubernetes hosting vendor (if you have one) what container runtime options are available for you.
  Read up [container runtime documentation with instructions on how to use containerd and CRI-O](/docs/setup/production-environment/container-runtimes/#container-runtimes)
  to help prepare you when you're ready to upgrade to 1.24. CRI-O, containerd, and
- Docker with [Mirantis cri-dockerd](https://github.com/Mirantis/cri-dockerd) are
+ Docker with [Mirantis cri-dockerd](https://mirantis.github.io/cri-dockerd/) are
  not the only container runtime options, we encourage you to explore the [CNCF landscape on container runtimes](https://landscape.cncf.io/?group=projects-and-products&view-mode=card#runtime--container-runtime)
  in case another suits you better.

content/en/blog/_posts/2022-02-17-updated-dockershim-faq.md

Lines changed: 1 addition & 1 deletion
@@ -109,7 +109,7 @@ Kubernetes clusters. Containers make this kind of interoperability possible.

  Mirantis and Docker have [committed][mirantis] to maintaining a replacement adapter for
  Docker Engine, and to maintain that adapter even after the in-tree dockershim is removed
- from Kubernetes. The replacement adapter is named [`cri-dockerd`](https://github.com/Mirantis/cri-dockerd).
+ from Kubernetes. The replacement adapter is named [`cri-dockerd`](https://mirantis.github.io/cri-dockerd/).

  You can install `cri-dockerd` and use it to connect the kubelet to Docker Engine. Read [Migrate Docker Engine nodes from dockershim to cri-dockerd](/docs/tasks/administer-cluster/migrating-from-dockershim/migrate-dockershim-dockerd/) to learn more.

content/en/blog/_posts/2023-08-15-pkgs-k8s-io-introduction.md

Lines changed: 2 additions & 0 deletions
@@ -173,6 +173,8 @@ publishing packages to the Google-hosted repository in the future.
  curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
  ```

+ _Update: In releases older than Debian 12 and Ubuntu 22.04, the folder `/etc/apt/keyrings` does not exist by default, and it should be created before the curl command._
+
  3. Update the `apt` package index:

  ```shell
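For readers following this change: the added note corresponds to creating the keyring directory before running the quoted `curl` command. A minimal sketch (the `-m 755` mode is an assumption, not part of this diff):

```shell
# On releases older than Debian 12 / Ubuntu 22.04, create the keyring directory
# first, then fetch the package signing key exactly as in the step above.
sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
```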

content/en/docs/concepts/scheduling-eviction/api-eviction.md

Lines changed: 7 additions & 7 deletions
@@ -11,11 +11,11 @@ using a client of the {{<glossary_tooltip term_id="kube-apiserver" text="API ser
  creates an `Eviction` object, which causes the API server to terminate the Pod.

  API-initiated evictions respect your configured [`PodDisruptionBudgets`](/docs/tasks/run-application/configure-pdb/)
- and [`terminationGracePeriodSeconds`](/docs/concepts/workloads/pods/pod-lifecycle#pod-termination).
+ and [`terminationGracePeriodSeconds`](/docs/concepts/workloads/pods/pod-lifecycle#pod-termination).

  Using the API to create an Eviction object for a Pod is like performing a
  policy-controlled [`DELETE` operation](/docs/reference/kubernetes-api/workload-resources/pod-v1/#delete-delete-a-pod)
- on the Pod.
+ on the Pod.

  ## Calling the Eviction API

@@ -75,13 +75,13 @@ checks and responds in one of the following ways:
  * `429 Too Many Requests`: the eviction is not currently allowed because of the
  configured {{<glossary_tooltip term_id="pod-disruption-budget" text="PodDisruptionBudget">}}.
  You may be able to attempt the eviction again later. You might also see this
- response because of API rate limiting.
+ response because of API rate limiting.
  * `500 Internal Server Error`: the eviction is not allowed because there is a
  misconfiguration, like if multiple PodDisruptionBudgets reference the same Pod.

  If the Pod you want to evict isn't part of a workload that has a
  PodDisruptionBudget, the API server always returns `200 OK` and allows the
- eviction.
+ eviction.

  If the API server allows the eviction, the Pod is deleted as follows:

@@ -103,12 +103,12 @@ If the API server allows the eviction, the Pod is deleted as follows:
  ## Troubleshooting stuck evictions

  In some cases, your applications may enter a broken state, where the Eviction
- API will only return `429` or `500` responses until you intervene. This can
- happen if, for example, a ReplicaSet creates pods for your application but new
+ API will only return `429` or `500` responses until you intervene. This can
+ happen if, for example, a ReplicaSet creates pods for your application but new
  pods do not enter a `Ready` state. You may also notice this behavior in cases
  where the last evicted Pod had a long termination grace period.

- If you notice stuck evictions, try one of the following solutions:
+ If you notice stuck evictions, try one of the following solutions:

  * Abort or pause the automated operation causing the issue. Investigate the stuck
  application before you restart the operation.
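For context on the page being edited (not part of this commit): calling the Eviction API means POSTing an `Eviction` object to the Pod's `eviction` subresource. A rough sketch against a hypothetical Pod named `quux` in the `default` namespace:

```shell
# Sketch only: request eviction of the (hypothetical) Pod "quux" via the API,
# going through kubectl's authenticating proxy.
kubectl proxy --port=8080 &
curl -v -H 'Content-Type: application/json' \
  http://localhost:8080/api/v1/namespaces/default/pods/quux/eviction \
  -d '{
    "apiVersion": "policy/v1",
    "kind": "Eviction",
    "metadata": {
      "name": "quux",
      "namespace": "default"
    }
  }'
```

The `429` and `500` responses discussed in the hunk above are what this call returns when a PodDisruptionBudget or a misconfiguration blocks the eviction.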

content/en/docs/concepts/scheduling-eviction/assign-pod-node.md

Lines changed: 31 additions & 31 deletions
@@ -96,7 +96,7 @@ define. Some of the benefits of affinity and anti-affinity include:
  The affinity feature consists of two types of affinity:

  - *Node affinity* functions like the `nodeSelector` field but is more expressive and
- allows you to specify soft rules.
+ allows you to specify soft rules.
  - *Inter-pod affinity/anti-affinity* allows you to constrain Pods against labels
  on other Pods.
@@ -254,13 +254,13 @@ the node label that the system uses to denote the domain. For examples, see
  [Well-Known Labels, Annotations and Taints](/docs/reference/labels-annotations-taints/).

  {{< note >}}
- Inter-pod affinity and anti-affinity require substantial amount of
+ Inter-pod affinity and anti-affinity require substantial amounts of
  processing which can slow down scheduling in large clusters significantly. We do
  not recommend using them in clusters larger than several hundred nodes.
  {{< /note >}}

  {{< note >}}
- Pod anti-affinity requires nodes to be consistently labelled, in other words,
+ Pod anti-affinity requires nodes to be consistently labeled, in other words,
  every node in the cluster must have an appropriate label matching `topologyKey`.
  If some or all nodes are missing the specified `topologyKey` label, it can lead
  to unintended behavior.
@@ -305,22 +305,22 @@ Pod affinity rule uses the "hard"
  `requiredDuringSchedulingIgnoredDuringExecution`, while the anti-affinity rule
  uses the "soft" `preferredDuringSchedulingIgnoredDuringExecution`.

- The affinity rule specifies that the scheduler is allowed to place the example Pod
+ The affinity rule specifies that the scheduler is allowed to place the example Pod
  on a node only if that node belongs to a specific [zone](/docs/concepts/scheduling-eviction/topology-spread-constraints/)
- where other Pods have been labeled with `security=S1`.
- For instance, if we have a cluster with a designated zone, let's call it "Zone V,"
- consisting of nodes labeled with `topology.kubernetes.io/zone=V`, the scheduler can
- assign the Pod to any node within Zone V, as long as there is at least one Pod within
- Zone V already labeled with `security=S1`. Conversely, if there are no Pods with `security=S1`
+ where other Pods have been labeled with `security=S1`.
+ For instance, if we have a cluster with a designated zone, let's call it "Zone V,"
+ consisting of nodes labeled with `topology.kubernetes.io/zone=V`, the scheduler can
+ assign the Pod to any node within Zone V, as long as there is at least one Pod within
+ Zone V already labeled with `security=S1`. Conversely, if there are no Pods with `security=S1`
  labels in Zone V, the scheduler will not assign the example Pod to any node in that zone.

- The anti-affinity rule specifies that the scheduler should try to avoid scheduling the Pod
+ The anti-affinity rule specifies that the scheduler should try to avoid scheduling the Pod
  on a node if that node belongs to a specific [zone](/docs/concepts/scheduling-eviction/topology-spread-constraints/)
- where other Pods have been labeled with `security=S2`.
- For instance, if we have a cluster with a designated zone, let's call it "Zone R,"
- consisting of nodes labeled with `topology.kubernetes.io/zone=R`, the scheduler should avoid
- assigning the Pod to any node within Zone R, as long as there is at least one Pod within
- Zone R already labeled with `security=S2`. Conversely, the anti-affinity rule does not impact
+ where other Pods have been labeled with `security=S2`.
+ For instance, if we have a cluster with a designated zone, let's call it "Zone R,"
+ consisting of nodes labeled with `topology.kubernetes.io/zone=R`, the scheduler should avoid
+ assigning the Pod to any node within Zone R, as long as there is at least one Pod within
+ Zone R already labeled with `security=S2`. Conversely, the anti-affinity rule does not impact
  scheduling into Zone R if there are no Pods with `security=S2` labels.

  To get yourself more familiar with the examples of Pod affinity and anti-affinity,
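For reference (not included in this hunk), a manifest matching the prose above might look roughly like the following sketch; the Pod name and image are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-pod-affinity   # placeholder name
spec:
  affinity:
    podAffinity:
      # "hard" rule: only schedule into a zone that already runs a Pod labeled security=S1
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: security
            operator: In
            values:
            - S1
        topologyKey: topology.kubernetes.io/zone
    podAntiAffinity:
      # "soft" rule: prefer zones that do not already run Pods labeled security=S2
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: security
              operator: In
              values:
              - S2
          topologyKey: topology.kubernetes.io/zone
  containers:
  - name: app
    image: registry.k8s.io/pause:3.8   # placeholder image
```

The `topology.kubernetes.io/zone` topology key is what scopes both rules to zones, as in the Zone V / Zone R examples.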
@@ -364,19 +364,19 @@ null `namespaceSelector` matches the namespace of the Pod where the rule is defi

  {{< note >}}
  <!-- UPDATE THIS WHEN PROMOTING TO BETA -->
- The `matchLabelKeys` field is a alpha-level field and is disabled by default in
+ The `matchLabelKeys` field is an alpha-level field and is disabled by default in
  Kubernetes {{< skew currentVersion >}}.
  When you want to use it, you have to enable it via the
  `MatchLabelKeysInPodAffinity` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/).
  {{< /note >}}

  Kubernetes includes an optional `matchLabelKeys` field for Pod affinity
- or anti-affinity. The field specifies keys for the labels that should match with the incoming Pod's labels,
+ or anti-affinity. The field specifies keys for the labels that should match with the incoming Pod's labels,
  when satisfying the Pod (anti)affinity.

  The keys are used to look up values from the pod labels; those key-value labels are combined
  (using `AND`) with the match restrictions defined using the `labelSelector` field. The combined
- filtering selects the set of existing pods that will be taken into Pod (anti)affinity calculation.
+ filtering selects the set of existing pods that will be taken into Pod (anti)affinity calculation.

  A common use case is to use `matchLabelKeys` with `pod-template-hash` (set on Pods
  managed as part of a Deployment, where the value is unique for each revision).
@@ -405,7 +405,7 @@ spec:
  # Only Pods from a given rollout are taken into consideration when calculating pod affinity.
  # If you update the Deployment, the replacement Pods follow their own affinity rules
  # (if there are any defined in the new Pod template)
- matchLabelKeys:
+ matchLabelKeys:
  - pod-template-hash
  ```
@@ -415,14 +415,14 @@ spec:

  {{< note >}}
  <!-- UPDATE THIS WHEN PROMOTING TO BETA -->
- The `mismatchLabelKeys` field is a alpha-level field and is disabled by default in
+ The `mismatchLabelKeys` field is an alpha-level field and is disabled by default in
  Kubernetes {{< skew currentVersion >}}.
  When you want to use it, you have to enable it via the
  `MatchLabelKeysInPodAffinity` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/).
  {{< /note >}}

  Kubernetes includes an optional `mismatchLabelKeys` field for Pod affinity
- or anti-affinity. The field specifies keys for the labels that should **not** match with the incoming Pod's labels,
+ or anti-affinity. The field specifies keys for the labels that should **not** match with the incoming Pod's labels,
  when satisfying the Pod (anti)affinity.

  One example use case is to ensure Pods go to the topology domain (node, zone, etc) where only Pods from the same tenant or team are scheduled in.
@@ -438,22 +438,22 @@ metadata:
  ...
  spec:
  affinity:
- podAffinity:
+ podAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
  # ensure that pods associated with this tenant land on the correct node pool
  - matchLabelKeys:
  - tenant
  topologyKey: node-pool
- podAntiAffinity:
+ podAntiAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
  # ensure that pods associated with this tenant can't schedule to nodes used for another tenant
  - mismatchLabelKeys:
- - tenant # whatever the value of the "tenant" label for this Pod, prevent
+ - tenant # whatever the value of the "tenant" label for this Pod, prevent
  # scheduling to nodes in any pool where any Pod from a different
  # tenant is running.
  labelSelector:
  # We have to have the labelSelector which selects only Pods with the tenant label,
- # otherwise this Pod would hate Pods from daemonsets as well, for example,
+ # otherwise this Pod would hate Pods from daemonsets as well, for example,
  # which aren't supposed to have the tenant label.
  matchExpressions:
  - key: tenant
@@ -561,7 +561,7 @@ where each web server is co-located with a cache, on three separate nodes.
  | *webserver-1* | *webserver-2* | *webserver-3* |
  | *cache-1* | *cache-2* | *cache-3* |

- The overall effect is that each cache instance is likely to be accessed by a single client, that
+ The overall effect is that each cache instance is likely to be accessed by a single client that
  is running on the same node. This approach aims to minimize both skew (imbalanced load) and latency.

  You might have other reasons to use Pod anti-affinity.
@@ -589,7 +589,7 @@ Some of the limitations of using `nodeName` to select nodes are:
  {{< note >}}
  `nodeName` is intended for use by custom schedulers or advanced use cases where
  you need to bypass any configured schedulers. Bypassing the schedulers might lead to
- failed Pods if the assigned Nodes get oversubscribed. You can use [node affinity](#node-affinity) or a the [`nodeselector` field](#nodeselector) to assign a Pod to a specific Node without bypassing the schedulers.
+ failed Pods if the assigned Nodes get oversubscribed. You can use the [node affinity](#node-affinity) or the [`nodeselector` field](#nodeselector) to assign a Pod to a specific Node without bypassing the schedulers.
  {{</ note >}}

  Here is an example of a Pod spec using the `nodeName` field:
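The example that the last context line refers to sits outside this hunk; a minimal sketch of a Pod spec using `nodeName` (the node name is a placeholder) would be:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  nodeName: kube-01   # schedule directly onto this node, bypassing the scheduler
```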
@@ -633,13 +633,13 @@ The following operators can only be used with `nodeAffinity`.

  | Operator | Behaviour |
  | :------------: | :-------------: |
- | `Gt` | The supplied value will be parsed as an integer, and that integer is less than the integer that results from parsing the value of a label named by this selector |
- | `Lt` | The supplied value will be parsed as an integer, and that integer is greater than the integer that results from parsing the value of a label named by this selector |
+ | `Gt` | The supplied value will be parsed as an integer, and that integer is less than the integer that results from parsing the value of a label named by this selector |
+ | `Lt` | The supplied value will be parsed as an integer, and that integer is greater than the integer that results from parsing the value of a label named by this selector |


  {{<note>}}
- `Gt` and `Lt` operators will not work with non-integer values. If the given value
- doesn't parse as an integer, the pod will fail to get scheduled. Also, `Gt` and `Lt`
+ `Gt` and `Lt` operators will not work with non-integer values. If the given value
+ doesn't parse as an integer, the pod will fail to get scheduled. Also, `Gt` and `Lt`
  are not available for `podAffinity`.
  {{</note>}}
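As an illustration of the `Gt`/`Lt` operators in the table above (not part of this commit), a node affinity term might look like this sketch; the label key and threshold are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gt-operator-example   # placeholder name
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          # only nodes whose (hypothetical) example.com/cpu-count label value
          # parses to an integer greater than 4 are eligible
          - key: example.com/cpu-count
            operator: Gt
            values:
            - "4"
  containers:
  - name: app
    image: registry.k8s.io/pause:3.8   # placeholder image
```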

content/en/docs/concepts/scheduling-eviction/dynamic-resource-allocation.md

Lines changed: 2 additions & 2 deletions
@@ -41,14 +41,14 @@ ResourceClass
  driver.

  ResourceClaim
- : Defines a particular resource instances that is required by a
+ : Defines a particular resource instance that is required by a
  workload. Created by a user (lifecycle managed manually, can be shared
  between different Pods) or for individual Pods by the control plane based on
  a ResourceClaimTemplate (automatic lifecycle, typically used by just one
  Pod).

  ResourceClaimTemplate
- : Defines the spec and some meta data for creating
+ : Defines the spec and some metadata for creating
  ResourceClaims. Created by a user when deploying a workload.

  PodSchedulingContext

content/en/docs/concepts/scheduling-eviction/kube-scheduler.md

Lines changed: 1 addition & 1 deletion
@@ -62,7 +62,7 @@ kube-scheduler selects a node for the pod in a 2-step operation:

  The _filtering_ step finds the set of Nodes where it's feasible to
  schedule the Pod. For example, the PodFitsResources filter checks whether a
- candidate Node has enough available resource to meet a Pod's specific
+ candidate Node has enough available resources to meet a Pod's specific
  resource requests. After this step, the node list contains any suitable
  Nodes; often, there will be more than one. If the list is empty, that
  Pod isn't (yet) schedulable.
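For context (not part of this commit), the "resource requests" that the PodFitsResources filter evaluates are the ones declared in the Pod spec, roughly like this sketch with placeholder values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: requests-example   # placeholder name
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.8   # placeholder image
    resources:
      requests:
        cpu: "500m"      # filtering keeps only nodes with at least this much
        memory: "256Mi"  # unallocated CPU and memory available
```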

content/en/docs/concepts/scheduling-eviction/node-pressure-eviction.md

Lines changed: 1 addition & 1 deletion
@@ -171,7 +171,7 @@ The kubelet has the following default hard eviction thresholds:
  - `nodefs.inodesFree<5%` (Linux nodes)

  These default values of hard eviction thresholds will only be set if none
- of the parameters is changed. If you changed the value of any parameter,
+ of the parameters is changed. If you change the value of any parameter,
  then the values of other parameters will not be inherited as the default
  values and will be set to zero. In order to provide custom values, you
  should provide all the thresholds respectively.
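As an illustration of the behavior described above (not part of this commit), overriding any hard eviction threshold in a kubelet configuration means spelling out every signal you still want enforced; a sketch assuming the `KubeletConfiguration` `evictionHard` map, with illustrative quantities:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
  # When any threshold is customized, unspecified signals are not inherited
  # from the defaults, so list every signal explicitly.
  memory.available: "200Mi"   # illustrative values
  nodefs.available: "10%"
  nodefs.inodesFree: "5%"
  imagefs.available: "15%"
```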
