
Commit e5b5f45

Merge pull request #28408 from tengqm/fix-links-1
Fix some links in the tasks section
2 parents (3d35f77 + 8282ae3), commit e5b5f45

4 files changed: +63 / -76 lines

content/en/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md

Lines changed: 21 additions & 22 deletions
@@ -7,7 +7,6 @@ card:
   weight: 40
 ---
 
-
 <!-- overview -->
 
 This page shows how to configure access to multiple clusters by using
@@ -21,20 +20,15 @@ a *kubeconfig file*. This is a generic way of referring to configuration files.
 It does not mean that there is a file named `kubeconfig`.
 {{< /note >}}
 
-
-
 ## {{% heading "prerequisites" %}}
 
-
 {{< include "task-tutorial-prereqs.md" >}}
 
 To check that {{< glossary_tooltip text="kubectl" term_id="kubectl" >}} is installed,
 run `kubectl version --client`. The kubectl version should be
-[within one minor version](/docs/setup/release/version-skew-policy/#kubectl) of your
+[within one minor version](/releases/version-skew-policy/#kubectl) of your
 cluster's API server.
 
-
-
 <!-- steps -->
 
 ## Define clusters, users, and contexts
@@ -186,7 +180,7 @@ kubectl config --kubeconfig=config-demo view --minify
 
 The output shows configuration information associated with the `dev-frontend` context:
 
-```shell
+```yaml
 apiVersion: v1
 clusters:
 - cluster:
@@ -238,7 +232,6 @@ kubectl config --kubeconfig=config-demo use-context dev-storage
 
 View configuration associated with the new current context, `dev-storage`.
 
-
 ```shell
 kubectl config --kubeconfig=config-demo view --minify
 ```
@@ -247,7 +240,7 @@ kubectl config --kubeconfig=config-demo view --minify
 
 In your `config-exercise` directory, create a file named `config-demo-2` with this content:
 
-```shell
+```yaml
 apiVersion: v1
 kind: Config
 preferences: {}
@@ -269,13 +262,17 @@ current value of your `KUBECONFIG` environment variable, so you can restore it l
 For example:
 
 ### Linux
+
 ```shell
 export KUBECONFIG_SAVED=$KUBECONFIG
 ```
+
 ### Windows PowerShell
-```shell
+
+```powershell
 $Env:KUBECONFIG_SAVED=$ENV:KUBECONFIG
 ```
+
 The `KUBECONFIG` environment variable is a list of paths to configuration files. The list is
 colon-delimited for Linux and Mac, and semicolon-delimited for Windows. If you have
 a `KUBECONFIG` environment variable, familiarize yourself with the configuration files
@@ -284,11 +281,14 @@ in the list.
 Temporarily append two paths to your `KUBECONFIG` environment variable. For example:
 
 ### Linux
+
 ```shell
 export KUBECONFIG=$KUBECONFIG:config-demo:config-demo-2
 ```
+
 ### Windows PowerShell
-```shell
+
+```powershell
 $Env:KUBECONFIG=("config-demo;config-demo-2")
 ```
 
@@ -303,7 +303,7 @@ environment variable. In particular, notice that the merged information has the
 `dev-ramp-up` context from the `config-demo-2` file and the three contexts from
 the `config-demo` file:
 
-```shell
+```yaml
 contexts:
 - context:
     cluster: development
@@ -347,11 +347,14 @@ If you have a `$HOME/.kube/config` file, and it's not already listed in your
 For example:
 
 ### Linux
+
 ```shell
 export KUBECONFIG=$KUBECONFIG:$HOME/.kube/config
 ```
+
 ### Windows Powershell
-```shell
+
+```powershell
 $Env:KUBECONFIG="$Env:KUBECONFIG;$HOME\.kube\config"
 ```
 
@@ -367,23 +370,19 @@ kubectl config view
 Return your `KUBECONFIG` environment variable to its original value. For example:<br>
 
 ### Linux
+
 ```shell
 export KUBECONFIG=$KUBECONFIG_SAVED
 ```
+
 ### Windows PowerShell
-```shell
+
+```powershell
 $Env:KUBECONFIG=$ENV:KUBECONFIG_SAVED
 ```
 
-
-
 ## {{% heading "whatsnext" %}}
 
-
 * [Organizing Cluster Access Using kubeconfig Files](/docs/concepts/configuration/organize-cluster-access-kubeconfig/)
 * [kubectl config](/docs/reference/generated/kubectl/kubectl-commands#config)
 
-
-
-
-
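For readers following this change, a minimal sketch of how the merged `KUBECONFIG` list behaves; it is not part of the diff, and it assumes the `config-demo` and `config-demo-2` files created earlier on the page:

```shell
# Merge both demo files for the current shell session
# (the list is colon-delimited on Linux and Mac).
export KUBECONFIG=config-demo:config-demo-2

# List every context kubectl can see across the merged files;
# this should include dev-frontend, dev-storage, and dev-ramp-up.
kubectl config get-contexts

# Switch to the context that only exists in config-demo-2.
kubectl config use-context dev-ramp-up
```

Read-only commands such as `kubectl config view` and `kubectl config get-contexts` merge the listed files in memory and leave the files themselves unchanged.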

content/en/docs/tasks/administer-cluster/migrating-from-dockershim/migrating-telemetry-and-security-agents.md

Lines changed: 1 addition & 0 deletions
@@ -78,3 +78,4 @@ telemetry agents on the node, make sure to check with the vendor of the agent wh
 We keep the work in progress version of migration instructions for various telemetry and security agent vendors
 in [Google doc](https://docs.google.com/document/d/1ZFi4uKit63ga5sxEiZblfb-c23lFhvy6RXVPikS8wf0/edit#).
 Please contact the vendor to get up to date instructions for migrating from dockershim.
+

content/en/docs/tasks/administer-cluster/reserve-compute-resources.md

Lines changed: 29 additions & 37 deletions
@@ -17,33 +17,27 @@ itself. Unless resources are set aside for these system daemons, pods and system
 daemons compete for resources and lead to resource starvation issues on the
 node.
 
-The `kubelet` exposes a feature named `Node Allocatable` that helps to reserve
+The `kubelet` exposes a feature named 'Node Allocatable' that helps to reserve
 compute resources for system daemons. Kubernetes recommends cluster
-administrators to configure `Node Allocatable` based on their workload density
+administrators to configure 'Node Allocatable' based on their workload density
 on each node.
 
-
-
-
 ## {{% heading "prerequisites" %}}
 
-
 {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
 Your Kubernetes server must be at or later than version 1.17 to use
 the kubelet command line option `--reserved-cpus` to set an
 [explicitly reserved CPU list](#explicitly-reserved-cpu-list).
 
-
-
 <!-- steps -->
 
 ## Node Allocatable
 
 ![node capacity](/images/docs/node-capacity.svg)
 
-`Allocatable` on a Kubernetes node is defined as the amount of compute resources
+'Allocatable' on a Kubernetes node is defined as the amount of compute resources
 that are available for pods. The scheduler does not over-subscribe
-`Allocatable`. `CPU`, `memory` and `ephemeral-storage` are supported as of now.
+'Allocatable'. 'CPU', 'memory' and 'ephemeral-storage' are supported as of now.
 
 Node Allocatable is exposed as part of `v1.Node` object in the API and as part
 of `kubectl describe node` in the CLI.
@@ -97,8 +91,7 @@ flag.
 It is recommended that the kubernetes system daemons are placed under a top
 level control group (`runtime.slice` on systemd machines for example). Each
 system daemon should ideally run within its own child control group. Refer to
-[this
-doc](https://git.k8s.io/community/contributors/design-proposals/node/node-allocatable.md#recommended-cgroups-setup)
+[the design proposal](https://git.k8s.io/community/contributors/design-proposals/node/node-allocatable.md#recommended-cgroups-setup)
 for more details on recommended control group hierarchy.
 
 Note that Kubelet **does not** create `--kube-reserved-cgroup` if it doesn't
@@ -109,7 +102,6 @@ exist. Kubelet will fail if an invalid cgroup is specified.
 - **Kubelet Flag**: `--system-reserved=[cpu=100m][,][memory=100Mi][,][ephemeral-storage=1Gi][,][pid=1000]`
 - **Kubelet Flag**: `--system-reserved-cgroup=`
 
-
 `system-reserved` is meant to capture resource reservation for OS system daemons
 like `sshd`, `udev`, etc. `system-reserved` should reserve `memory` for the
 `kernel` too since `kernel` memory is not accounted to pods in Kubernetes at this time.
@@ -127,13 +119,14 @@ kubelet flag.
 It is recommended that the OS system daemons are placed under a top level
 control group (`system.slice` on systemd machines for example).
 
-Note that Kubelet **does not** create `--system-reserved-cgroup` if it doesn't
-exist. Kubelet will fail if an invalid cgroup is specified.
+Note that `kubelet` **does not** create `--system-reserved-cgroup` if it doesn't
+exist. `kubelet` will fail if an invalid cgroup is specified.
 
 ### Explicitly Reserved CPU List
+
 {{< feature-state for_k8s_version="v1.17" state="stable" >}}
 
-- **Kubelet Flag**: `--reserved-cpus=0-3`
+**Kubelet Flag**: `--reserved-cpus=0-3`
 
 `reserved-cpus` is meant to define an explicit CPU set for OS system daemons and
 kubernetes system daemons. `reserved-cpus` is for systems that do not intend to
@@ -154,32 +147,33 @@ For example: in Centos, you can do this using the tuned toolset.
 
 ### Eviction Thresholds
 
-- **Kubelet Flag**: `--eviction-hard=[memory.available<500Mi]`
+**Kubelet Flag**: `--eviction-hard=[memory.available<500Mi]`
 
 Memory pressure at the node level leads to System OOMs which affects the entire
 node and all pods running on it. Nodes can go offline temporarily until memory
 has been reclaimed. To avoid (or reduce the probability of) system OOMs kubelet
-provides [`Out of Resource`](/docs/tasks/administer-cluster/out-of-resource/) management. Evictions are
+provides [out of resource](/docs/concepts/scheduling-eviction/node-pressure-eviction/)
+management. Evictions are
 supported for `memory` and `ephemeral-storage` only. By reserving some memory via
-`--eviction-hard` flag, the `kubelet` attempts to `evict` pods whenever memory
+`--eviction-hard` flag, the `kubelet` attempts to evict pods whenever memory
 availability on the node drops below the reserved value. Hypothetically, if
 system daemons did not exist on a node, pods cannot use more than `capacity -
 eviction-hard`. For this reason, resources reserved for evictions are not
 available for pods.
 
 ### Enforcing Node Allocatable
 
-- **Kubelet Flag**: `--enforce-node-allocatable=pods[,][system-reserved][,][kube-reserved]`
+**Kubelet Flag**: `--enforce-node-allocatable=pods[,][system-reserved][,][kube-reserved]`
 
-The scheduler treats `Allocatable` as the available `capacity` for pods.
+The scheduler treats 'Allocatable' as the available `capacity` for pods.
 
-`kubelet` enforce `Allocatable` across pods by default. Enforcement is performed
+`kubelet` enforce 'Allocatable' across pods by default. Enforcement is performed
 by evicting pods whenever the overall usage across all pods exceeds
-`Allocatable`. More details on eviction policy can be found
-[here](/docs/tasks/administer-cluster/out-of-resource/#eviction-policy). This enforcement is controlled by
+'Allocatable'. More details on eviction policy can be found
+on the [node pressure eviction](/docs/concepts/scheduling-eviction/node-pressure-eviction/)
+page. This enforcement is controlled by
 specifying `pods` value to the kubelet flag `--enforce-node-allocatable`.
 
-
 Optionally, `kubelet` can be made to enforce `kube-reserved` and
 `system-reserved` by specifying `kube-reserved` & `system-reserved` values in
 the same flag. Note that to enforce `kube-reserved` or `system-reserved`,
@@ -188,10 +182,10 @@ respectively.
 
 ## General Guidelines
 
-System daemons are expected to be treated similar to `Guaranteed` pods. System
+System daemons are expected to be treated similar to 'Guaranteed' pods. System
 daemons can burst within their bounding control groups and this behavior needs
 to be managed as part of kubernetes deployments. For example, `kubelet` should
-have its own control group and share `Kube-reserved` resources with the
+have its own control group and share `kube-reserved` resources with the
 container runtime. However, Kubelet cannot burst and use up all available Node
 resources if `kube-reserved` is enforced.
 
@@ -200,9 +194,9 @@ to critical system services being CPU starved, OOM killed, or unable
 to fork on the node. The
 recommendation is to enforce `system-reserved` only if a user has profiled their
 nodes exhaustively to come up with precise estimates and is confident in their
-ability to recover if any process in that group is oom_killed.
+ability to recover if any process in that group is oom-killed.
 
-* To begin with enforce `Allocatable` on `pods`.
+* To begin with enforce 'Allocatable' on `pods`.
 * Once adequate monitoring and alerting is in place to track kube system
   daemons, attempt to enforce `kube-reserved` based on usage heuristics.
 * If absolutely necessary, enforce `system-reserved` over time.
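To make the guidance above concrete, a hedged sketch of a kubelet invocation combining the flags discussed in this file; it is not part of the diff, and every value is illustrative rather than a recommendation:

```shell
# Start by enforcing Allocatable only on pods; the reservations below are
# placeholder values and should come from profiling the node.
kubelet \
  --kube-reserved=cpu=500m,memory=1Gi,ephemeral-storage=1Gi \
  --system-reserved=cpu=500m,memory=1Gi,ephemeral-storage=1Gi \
  --eviction-hard='memory.available<500Mi' \
  --enforce-node-allocatable=pods
# To enforce kube-reserved or system-reserved later, append them to
# --enforce-node-allocatable and point --kube-reserved-cgroup /
# --system-reserved-cgroup at existing cgroups.
```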
@@ -212,8 +206,6 @@ more features are added. Over time, kubernetes project will attempt to bring
 down utilization of node system daemons, but that is not a priority as of now.
 So expect a drop in `Allocatable` capacity in future releases.
 
-
-
 <!-- discussion -->
 
 ## Example Scenario
@@ -225,15 +217,15 @@ Here is an example to illustrate Node Allocatable computation:
 * `--system-reserved` is set to `cpu=500m,memory=1Gi,ephemeral-storage=1Gi`
 * `--eviction-hard` is set to `memory.available<500Mi,nodefs.available<10%`
 
-Under this scenario, `Allocatable` will be `14.5 CPUs`, `28.5Gi` of memory and
+Under this scenario, 'Allocatable' will be 14.5 CPUs, 28.5Gi of memory and
 `88Gi` of local storage.
 Scheduler ensures that the total memory `requests` across all pods on this node does
-not exceed `28.5Gi` and storage doesn't exceed `88Gi`.
-Kubelet evicts pods whenever the overall memory usage across pods exceeds `28.5Gi`,
-or if overall disk usage exceeds `88Gi` If all processes on the node consume as
-much CPU as they can, pods together cannot consume more than `14.5 CPUs`.
+not exceed 28.5Gi and storage doesn't exceed 88Gi.
+Kubelet evicts pods whenever the overall memory usage across pods exceeds 28.5Gi,
+or if overall disk usage exceeds 88Gi If all processes on the node consume as
+much CPU as they can, pods together cannot consume more than 14.5 CPUs.
 
 If `kube-reserved` and/or `system-reserved` is not enforced and system daemons
 exceed their reservation, `kubelet` evicts pods whenever the overall node memory
-usage is higher than `31.5Gi` or `storage` is greater than `90Gi`
+usage is higher than 31.5Gi or `storage` is greater than 90Gi.
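To make the quoted totals easy to check, a small arithmetic sketch; it is not part of the diff, and the node capacity and `--kube-reserved` values are assumptions taken from the surrounding page rather than from this hunk:

```shell
# Assumed example node (not shown in this hunk): 16 CPUs, 32Gi memory,
# 100Gi ephemeral storage, --kube-reserved=cpu=1,memory=2Gi,ephemeral-storage=1Gi.
# Allocatable = capacity - kube-reserved - system-reserved - hard eviction threshold
echo "cpu:     $(echo '16 - 1 - 0.5' | bc)"        # 14.5 CPUs (no CPU eviction threshold)
echo "memory:  $(echo '32 - 2 - 1 - 0.5' | bc)Gi"  # 28.5Gi
echo "storage: $(echo '100 - 1 - 1 - 10' | bc)Gi"  # 88Gi (nodefs.available<10% of 100Gi)
```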

content/en/docs/tasks/manage-daemon/update-daemon-set.md

Lines changed: 12 additions & 17 deletions
@@ -7,35 +7,36 @@ weight: 10
 ---
 
 <!-- overview -->
-
 This page shows how to perform a rolling update on a DaemonSet.
 
 ## {{% heading "prerequisites" %}}
 
-* The DaemonSet rolling update feature is only supported in Kubernetes version 1.6 or later.
-
 <!-- steps -->
 
 ## DaemonSet Update Strategy
 
 DaemonSet has two update strategy types:
 
-* OnDelete: With `OnDelete` update strategy, after you update a DaemonSet template, new
+* `OnDelete`: With `OnDelete` update strategy, after you update a DaemonSet template, new
   DaemonSet pods will *only* be created when you manually delete old DaemonSet
   pods. This is the same behavior of DaemonSet in Kubernetes version 1.5 or
   before.
-* RollingUpdate: This is the default update strategy.
+* `RollingUpdate`: This is the default update strategy.
   With `RollingUpdate` update strategy, after you update a
   DaemonSet template, old DaemonSet pods will be killed, and new DaemonSet pods
-  will be created automatically, in a controlled fashion. At most one pod of the DaemonSet will be running on each node during the whole update process.
+  will be created automatically, in a controlled fashion. At most one pod of
+  the DaemonSet will be running on each node during the whole update process.
 
 ## Performing a Rolling Update
 
 To enable the rolling update feature of a DaemonSet, you must set its
 `.spec.updateStrategy.type` to `RollingUpdate`.
 
-You may want to set [`.spec.updateStrategy.rollingUpdate.maxUnavailable`](/docs/concepts/workloads/controllers/deployment/#max-unavailable) (default
-to 1) and [`.spec.minReadySeconds`](/docs/concepts/workloads/controllers/deployment/#min-ready-seconds) (default to 0) as well.
+You may want to set
+[`.spec.updateStrategy.rollingUpdate.maxUnavailable`](/docs/concepts/workloads/controllers/deployment/#max-unavailable)
+(default to 1) and
+[`.spec.minReadySeconds`](/docs/concepts/workloads/controllers/deployment/#min-ready-seconds)
+(default to 0) as well.
 
 ### Creating a DaemonSet with `RollingUpdate` update strategy
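For reference alongside the strategy descriptions above, a short sketch of inspecting and setting these fields on the `fluentd-elasticsearch` DaemonSet used later in this page; it is not part of the diff:

```shell
# Print the DaemonSet's current update strategy type.
kubectl get ds fluentd-elasticsearch -n kube-system \
  -o jsonpath='{.spec.updateStrategy.type}{"\n"}'

# Switch to RollingUpdate, allowing at most one unavailable pod,
# and keep minReadySeconds at its default of 0.
kubectl patch ds fluentd-elasticsearch -n kube-system --type merge -p \
  '{"spec":{"minReadySeconds":0,"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"maxUnavailable":1}}}}'
```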

@@ -143,7 +144,7 @@ causes:
 
 The rollout is stuck because new DaemonSet pods can't be scheduled on at least one
 node. This is possible when the node is
-[running out of resources](/docs/tasks/administer-cluster/out-of-resource/).
+[running out of resources](/docs/concepts/scheduling-eviction/node-pressure-eviction/).
 
 When this happens, find the nodes that don't have the DaemonSet pods scheduled on
 by comparing the output of `kubectl get nodes` and the output of:
@@ -184,14 +185,8 @@ Delete DaemonSet from a namespace :
 kubectl delete ds fluentd-elasticsearch -n kube-system
 ```
 
-
-
-
 ## {{% heading "whatsnext" %}}
 
-
-* See [Task: Performing a rollback on a
-  DaemonSet](/docs/tasks/manage-daemon/rollback-daemon-set/)
-* See [Concepts: Creating a DaemonSet to adopt existing DaemonSet pods](/docs/concepts/workloads/controllers/daemonset/)
-
+* See [Performing a rollback on a DaemonSet](/docs/tasks/manage-daemon/rollback-daemon-set/)
+* See [Creating a DaemonSet to adopt existing DaemonSet pods](/docs/concepts/workloads/controllers/daemonset/)
