Commit e0ae133 (parent: dfeed45)

Correct links

Signed-off-by: Celeste Horgan <[email protected]>

15 files changed: 53 additions, 58 deletions

content/en/docs/concepts/cluster-administration/flow-control.md

9 additions, 9 deletions

@@ -131,14 +131,14 @@ classes:
   namespace). These are important to isolate from other traffic because failures
   in leader election cause their controllers to fail and restart, which in turn
   causes more expensive traffic as the new controllers sync their informers.
-
+
 * The `workload-high` priority level is for other requests from built-in
   controllers.
-
+
 * The `workload-low` priority level is for requests from any other service
   account, which will typically include all requests from controllers runing in
   Pods.
-
+
 * The `global-default` priority level handles all other traffic, e.g.
   interactive `kubectl` commands run by nonprivileged users.
 
@@ -150,7 +150,7 @@ are built in and may not be overwritten:
   special `exempt` FlowSchema classifies all requests from the `system:masters`
   group into this priority level. You may define other FlowSchemas that direct
   other requests to this priority level, if appropriate.
-
+
 * The special `catch-all` priority level is used in combination with the special
   `catch-all` FlowSchema to make sure that every request gets some kind of
   classification. Typically you should not rely on this catch-all configuration,

@@ -164,7 +164,7 @@ are built in and may not be overwritten:
 
 ## Resources
 The flow control API involves two kinds of resources.
-[PriorityLevelConfigurations](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#prioritylevelconfiguration-v1alpha1-flowcontrol-apiserver-k8s-io)
+[PriorityLevelConfigurations](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#prioritylevelconfiguration-v1alpha1-flowcontrol-apiserver-k8s-io)
 define the available isolation classes, the share of the available concurrency
 budget that each can handle, and allow for fine-tuning queuing behavior.
 [FlowSchemas](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#flowschema-v1alpha1-flowcontrol-apiserver-k8s-io)

@@ -204,7 +204,7 @@ to balance progress between request flows.
 
 The queuing configuration allows tuning the fair queuing algorithm for a
 priority level. Details of the algorithm can be read in the [enhancement
-proposal](#what-s-next), but in short:
+proposal](#whats-next), but in short:
 
 * Increasing `queues` reduces the rate of collisions between different flows, at
   the cost of increased memory usage. A value of 1 here effectively disables the

@@ -291,7 +291,7 @@ enabled has two extra headers: `X-Kubernetes-PF-FlowSchema-UID` and
 `X-Kubernetes-PF-PriorityLevel-UID`, noting the flow schema that matched the request
 and the priority level to which it was assigned, respectively. The API objects'
 names are not included in these headers in case the requesting user does not
-have permission to view them, so when debugging you can use a command like
+have permission to view them, so when debugging you can use a command like
 
 ```shell
 kubectl get flowschemas -o custom-columns="uid:{metadata.uid},name:{metadata.name}"

@@ -363,7 +363,7 @@ poorly-behaved workloads that may be harming system health.
 * `apiserver_flowcontrol_request_execution_seconds` gives a histogram of how
   long requests took to actually execute, grouped by the FlowSchema that matched the
   request and the PriorityLevel to which it was assigned.
-
+
 
 {{% /capture %}}

@@ -374,4 +374,4 @@ the [enhancement proposal](https://github.com/kubernetes/enhancements/blob/maste
 You can make suggestions and feature requests via [SIG API
 Machinery](https://github.com/kubernetes/community/tree/master/sig-api-machinery).
 
-{{% /capture %}}
+{{% /capture %}}
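Most of the anchor corrections in this commit replace `#what-s-next` with `#whats-next`, matching how Hugo (the generator behind kubernetes.io) derives heading IDs: apostrophes are dropped rather than turned into hyphens. A rough shell sketch of that rule, as an approximation for illustration only (not Hugo's actual sanitizer):

```shell
# Approximate the anchor Hugo generates for the "What's next" heading:
# lowercase the text, drop apostrophes, turn spaces into hyphens.
heading="What's next"
anchor="$(printf '%s' "$heading" | tr '[:upper:]' '[:lower:]' | tr -d "'" | tr ' ' '-')"
echo "#$anchor"   # prints "#whats-next"
```

This is why the old `#what-s-next` fragments were dead links: no heading on the rendered pages produces that ID.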

content/en/docs/concepts/extend-kubernetes/operator.md

2 additions, 2 deletions

@@ -106,7 +106,7 @@ as well as keeping the existing service in good shape.
 ## Writing your own Operator {#writing-operator}
 
 If there isn't an Operator in the ecosystem that implements the behavior you
-want, you can code your own. In [What's next](#what-s-next) you'll find a few
+want, you can code your own. In [What's next](#whats-next) you'll find a few
 links to libraries and tools you can use to write your own cloud native
 Operator.
 

@@ -129,4 +129,4 @@ that can act as a [client for the Kubernetes API](/docs/reference/using-api/clie
 * Read [CoreOS' original article](https://coreos.com/blog/introducing-operators.html) that introduced the Operator pattern
 * Read an [article](https://cloud.google.com/blog/products/containers-kubernetes/best-practices-for-building-kubernetes-operators-and-stateful-apps) from Google Cloud about best practices for building Operators
 
-{{% /capture %}}
+{{% /capture %}}

content/en/docs/concepts/scheduling-eviction/scheduling-framework.md

3 additions, 3 deletions

@@ -157,13 +157,13 @@ the three things:
 1. **wait** (with a timeout) \
    If a Permit plugin returns "wait", then the Pod is kept in an internal "waiting"
    Pods list, and the binding cycle of this Pod starts but directly blocks until it
-   gets [approved](#frameworkhandle). If a timeout occurs, **wait** becomes **deny**
+   gets approved. If a timeout occurs, **wait** becomes **deny**
    and the Pod is returned to the scheduling queue, triggering [Unreserve](#unreserve)
    plugins.
 
 {{< note >}}
 While any plugin can access the list of "waiting" Pods and approve them
-(see [`FrameworkHandle`](#frameworkhandle)), we expect only the permit
+(see [`FrameworkHandle`](https://github.com/kubernetes/enhancements/blob/master/keps/sig-scheduling/20180409-scheduling-framework.md#frameworkhandle)), we expect only the permit
 plugins to approve binding of reserved Pods that are in "waiting" state. Once a Pod
 is approved, it is sent to the [PreBind](#pre-bind) phase.
 {{< /note >}}

@@ -239,4 +239,4 @@ If you are using Kubernetes v1.18 or later, you can configure a set of plugins as
 a scheduler profile and then define multiple profiles to fit various kinds of workload.
 Learn more at [multiple profiles](/docs/reference/scheduling/profiles/#multiple-profiles).
 
-{{% /capture %}}
+{{% /capture %}}

content/en/docs/concepts/services-networking/dns-pod-service.md

3 additions, 5 deletions

@@ -171,7 +171,7 @@ following pod-specific DNS policies. These policies are specified in the
 - "`None`": It allows a Pod to ignore DNS settings from the Kubernetes
   environment. All DNS settings are supposed to be provided using the
   `dnsConfig` field in the Pod Spec.
-  See [Pod's DNS config](#pod-s-dns-config) subsection below.
+  See [Pod's DNS config](#pod-dns-config) subsection below.
 
 {{< note >}}
 "Default" is not the default DNS policy. If `dnsPolicy` is not

@@ -201,7 +201,7 @@ spec:
   dnsPolicy: ClusterFirstWithHostNet
 ```
 
-### Pod's DNS Config
+### Pod's DNS Config {#pod-dns-config}
 
 Pod's DNS Config allows users more control on the DNS settings for a Pod.
 

@@ -269,6 +269,4 @@ The availability of Pod DNS Config and DNS Policy "`None`"" is shown as below.
 For guidance on administering DNS configurations, check
 [Configure DNS Service](/docs/tasks/administer-cluster/dns-custom-nameservers/)
 
-{{% /capture %}}
-
-
+{{% /capture %}}

content/en/docs/concepts/storage/persistent-volumes.md

4 additions, 4 deletions

@@ -736,9 +736,9 @@ and need persistent storage, it is recommended that you use the following pattern:
   `persistentVolumeClaim.storageClassName` field.
   This will cause the PVC to match the right storage
   class if the cluster has StorageClasses enabled by the admin.
-- If the user does not provide a storage class name, leave the
-  `persistentVolumeClaim.storageClassName` field as nil. This will cause a
-  PV to be automatically provisioned for the user with the default StorageClass
+- If the user does not provide a storage class name, leave the
+  `persistentVolumeClaim.storageClassName` field as nil. This will cause a
+  PV to be automatically provisioned for the user with the default StorageClass
   in the cluster. Many cluster environments have a default StorageClass installed,
   or administrators can create their own default StorageClass.
 - In your tooling, watch for PVCs that are not getting bound after some time

@@ -759,4 +759,4 @@ and need persistent storage, it is recommended that you use the following pattern:
 * [PersistentVolumeSpec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#persistentvolumespec-v1-core)
 * [PersistentVolumeClaim](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#persistentvolumeclaim-v1-core)
 * [PersistentVolumeClaimSpec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#persistentvolumeclaimspec-v1-core)
-{{% /capture %}}
+{{% /capture %}}

content/en/docs/concepts/workloads/pods/init-containers.md

2 additions, 2 deletions

@@ -241,7 +241,7 @@ myapp-pod 1/1 Running 0 9m
 ```
 
 This simple example should provide some inspiration for you to create your own
-init containers. [What's next](#what-s-next) contains a link to a more detailed example.
+init containers. [What's next](#whats-next) contains a link to a more detailed example.
 
 ## Detailed behavior
 

@@ -325,4 +325,4 @@ reasons:
 * Read about [creating a Pod that has an init container](/docs/tasks/configure-pod-container/configure-pod-initialization/#creating-a-pod-that-has-an-init-container)
 * Learn how to [debug init containers](/docs/tasks/debug-application-cluster/debug-init-containers/)
 
-{{% /capture %}}
+{{% /capture %}}

content/en/docs/contribute/review/for-approvers.md

1 addition, 2 deletions

@@ -73,8 +73,7 @@ true:
 [Prow](https://github.com/kubernetes/test-infra/blob/master/prow/README.md) is
 the Kubernetes-based CI/CD system that runs jobs against pull requests (PRs). Prow
 enables chatbot-style commands to handle GitHub actions across the Kubernetes
-organization, like [adding and removing
-labels](#add-and-remove-labels), closing issues, and assigning an approver. Enter Prow commands as GitHub comments using the `/<command-name>` format.
+organization, like [adding and removing labels](#adding-and-removing-issue-labels), closing issues, and assigning an approver. Enter Prow commands as GitHub comments using the `/<command-name>` format.
 
 The most common prow commands reviewers and approvers use are:
 

content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md

3 additions, 3 deletions

@@ -28,15 +28,15 @@ For information how to create a cluster with kubeadm once you have performed this
 * 2 GB or more of RAM per machine (any less will leave little room for your apps)
 * 2 CPUs or more
 * Full network connectivity between all machines in the cluster (public or private network is fine)
-* Unique hostname, MAC address, and product_uuid for every node. See [here](#verify-the-mac-address-and-product-uuid-are-unique-for-every-node) for more details.
+* Unique hostname, MAC address, and product_uuid for every node. See [here](#verify-mac-address) for more details.
 * Certain ports are open on your machines. See [here](#check-required-ports) for more details.
 * Swap disabled. You **MUST** disable swap in order for the kubelet to work properly.
 
 {{% /capture %}}
 
 {{% capture steps %}}
 
-## Verify the MAC address and product_uuid are unique for every node
+## Verify the MAC address and product_uuid are unique for every node {#verify-mac-address}
 
 * You can get the MAC address of the network interfaces using the command `ip link` or `ifconfig -a`
 * The product_uuid can be checked by using the command `sudo cat /sys/class/dmi/id/product_uuid`

@@ -305,4 +305,4 @@ If you are running into difficulties with kubeadm, please consult our [troublesh
 
 * [Using kubeadm to Create a Cluster](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/)
 
-{{% /capture %}}
+{{% /capture %}}
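The hunk above re-anchors the section on checking MAC addresses and product_uuids for uniqueness. Once the values have been collected from each node (e.g. via the `sudo cat /sys/class/dmi/id/product_uuid` command quoted in the diff), spotting a clash is a one-liner; a sketch with hypothetical sample UUIDs (the file path and values are illustrative only):

```shell
# One product_uuid per node, gathered into a file (sample data here);
# print any value that appears more than once.
cat > /tmp/product_uuids.txt <<'EOF'
6B29FC40-CA47-1067-B31D-00DD010662DA
7C9E6679-7425-40DE-944B-E07FC1F90AE7
6B29FC40-CA47-1067-B31D-00DD010662DA
EOF
sort /tmp/product_uuids.txt | uniq -d   # non-empty output means two nodes share a UUID
```

The same pattern works for the hostnames and MAC addresses the prerequisite list mentions.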

content/en/docs/setup/production-environment/tools/kubespray.md

2 additions, 2 deletions

@@ -21,7 +21,7 @@ Kubespray is a composition of [Ansible](http://docs.ansible.com/) playbooks, [in
 * openSUSE Leap 15
 * continuous integration tests
 
-To choose a tool which best fits your use case, read [this comparison](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/comparisons.md) to [kubeadm](/docs/admin/kubeadm/) and [kops](../kops).
+To choose a tool which best fits your use case, read [this comparison](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/comparisons.md) to [kubeadm](/docs/admin/kubeadm/) and [kops](/docs/setup/production-environment/tools/kops/).
 
 {{% /capture %}}
 

@@ -119,4 +119,4 @@ When running the reset playbook, be sure not to accidentally target your production cluster!
 
 Check out planned work on Kubespray's [roadmap](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/roadmap.md).
 
-{{% /capture %}}
+{{% /capture %}}

content/en/docs/tasks/access-application-cluster/access-cluster.md

2 additions, 2 deletions

@@ -277,7 +277,7 @@ This shows the proxy-verb URL for accessing each service.
 For example, this cluster has cluster-level logging enabled (using Elasticsearch), which can be reached
 at `https://104.197.5.247/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/` if suitable credentials are passed. Logging can also be reached through a kubectl proxy, for example at:
 `http://localhost:8080/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/`.
-(See [above](#accessing-the-cluster-api) for how to pass credentials or use kubectl proxy.)
+(See [Access Clusters Using the Kubernetes API](/docs/tasks/administer-cluster/access-cluster-api/) for how to pass credentials or use kubectl proxy.)
 
 #### Manually constructing apiserver proxy URLs
 

@@ -376,4 +376,4 @@ There are several different proxies you may encounter when using Kubernetes:
 Kubernetes users will typically not need to worry about anything other than the first two types. The cluster admin
 will typically ensure that the latter types are setup correctly.
 
-{{% /capture %}}
+{{% /capture %}}
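The context quoted above shows the service-proxy URL pattern that the "Manually constructing apiserver proxy URLs" section describes. Assembling such a URL is purely mechanical; a sketch using the namespace and service from the quoted Elasticsearch example (the localhost endpoint assumes a `kubectl proxy` running on the port used in that example):

```shell
# Compose an apiserver service-proxy URL from its parts.
apiserver="http://localhost:8080"   # e.g. a local `kubectl proxy` endpoint, as in the example above
namespace="kube-system"
service="elasticsearch-logging"
url="${apiserver}/api/v1/namespaces/${namespace}/services/${service}/proxy/"
echo "$url"
# prints "http://localhost:8080/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/"
```

Anything after `/proxy/` is passed through to the service itself, which is why the same scheme reaches any HTTP endpoint the service exposes.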
