
Commit c4add10

Replace redirected links with the real targets
Links that are permanently invalid are dropped.
Parent commit: 1943aaa
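Most of the hunks below are mechanical `http://` → `https://` upgrades of k8s.io shortlink hosts. As an illustration only (no such script ships with this commit, and the host list is inferred from the diffs below; the hand-made changes are less uniform than this), the rewrite can be sketched as:

```shell
# Shortlink hosts whose plain http:// URLs redirect; rewrite them
# to the real https:// targets. Host list inferred from this commit.
hosts='releases\.k8s\.io|pr\.k8s\.io|issue\.k8s\.io|slack\.k8s\.io|slack\.kubernetes\.io'

upgrade_links() {
  # Reads markdown on stdin, writes the rewritten markdown to stdout.
  sed -E "s#http://($hosts)#https://\1#g"
}

# Example: one of the links touched in cluster-large.md
echo '[#10653](http://pr.k8s.io/10653/files)' | upgrade_links
# → [#10653](https://pr.k8s.io/10653/files)
```

Hosts outside the list (for example `stable.release.core-os.net`, upgraded by hand in `cloudstack.md`) are deliberately left alone by this sketch.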

13 files changed: +43 −70 lines

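The summary above can be reproduced from a local clone of the repository this commit belongs to (standard `git` commands; the clone itself is assumed):

```shell
# One-line summary: "13 files changed, 43 insertions(+), 70 deletions(-)"
git show --shortstat c4add10
# Per-file additions/deletions
git show --stat c4add10
```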

content/en/docs/setup/best-practices/cluster-large.md

Lines changed: 12 additions & 13 deletions

````diff
@@ -20,7 +20,7 @@ At {{< param "version" >}}, Kubernetes supports clusters with up to 5000 nodes.
 
 A cluster is a set of nodes (physical or virtual machines) running Kubernetes agents, managed by a "master" (the cluster-level control plane).
 
-Normally the number of nodes in a cluster is controlled by the value `NUM_NODES` in the platform-specific `config-default.sh` file (for example, see [GCE's `config-default.sh`](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/gce/config-default.sh)).
+Normally the number of nodes in a cluster is controlled by the value `NUM_NODES` in the platform-specific `config-default.sh` file (for example, see [GCE's `config-default.sh`](https://releases.k8s.io/{{< param "githubbranch" >}}/cluster/gce/config-default.sh)).
 
 Simply changing that value to something very large, however, may cause the setup script to fail for many cloud providers. A GCE deployment, for example, will run in to quota issues and fail to bring the cluster up.
 
@@ -80,7 +80,7 @@ On AWS, master node sizes are currently set at cluster startup time and do not c
 
 ### Addon Resources
 
-To prevent memory leaks or other resource issues in [cluster addons](https://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons) from consuming all the resources available on a node, Kubernetes sets resource limits on addon containers to limit the CPU and Memory resources they can consume (See PR [#10653](http://pr.k8s.io/10653/files) and [#10778](http://pr.k8s.io/10778/files)).
+To prevent memory leaks or other resource issues in [cluster addons](https://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons) from consuming all the resources available on a node, Kubernetes sets resource limits on addon containers to limit the CPU and Memory resources they can consume (See PR [#10653](https://pr.k8s.io/10653/files) and [#10778](https://pr.k8s.io/10778/files)).
 
 For example:
 
@@ -94,28 +94,26 @@ For example:
   memory: 200Mi
 ```
 
-Except for Heapster, these limits are static and are based on data we collected from addons running on 4-node clusters (see [#10335](http://issue.k8s.io/10335#issuecomment-117861225)). The addons consume a lot more resources when running on large deployment clusters (see [#5880](http://issue.k8s.io/5880#issuecomment-113984085)). So, if a large cluster is deployed without adjusting these values, the addons may continuously get killed because they keep hitting the limits.
+Except for Heapster, these limits are static and are based on data we collected from addons running on 4-node clusters (see [#10335](https://issue.k8s.io/10335#issuecomment-117861225)). The addons consume a lot more resources when running on large deployment clusters (see [#5880](http://issue.k8s.io/5880#issuecomment-113984085)). So, if a large cluster is deployed without adjusting these values, the addons may continuously get killed because they keep hitting the limits.
 
 To avoid running into cluster addon resource issues, when creating a cluster with many nodes, consider the following:
 
 * Scale memory and CPU limits for each of the following addons, if used, as you scale up the size of cluster (there is one replica of each handling the entire cluster so memory and CPU usage tends to grow proportionally with size/load on cluster):
-  * [InfluxDB and Grafana](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/cluster-monitoring/influxdb/influxdb-grafana-controller.yaml)
-  * [kubedns, dnsmasq, and sidecar](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/dns/kube-dns/kube-dns.yaml.in)
-  * [Kibana](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/fluentd-elasticsearch/kibana-deployment.yaml)
+  * [InfluxDB and Grafana](https://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/cluster-monitoring/influxdb/influxdb-grafana-controller.yaml)
+  * [kubedns, dnsmasq, and sidecar](https://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/dns/kube-dns/kube-dns.yaml.in)
+  * [Kibana](https://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/fluentd-elasticsearch/kibana-deployment.yaml)
 * Scale number of replicas for the following addons, if used, along with the size of cluster (there are multiple replicas of each so increasing replicas should help handle increased load, but, since load per replica also increases slightly, also consider increasing CPU/memory limits):
-  * [elasticsearch](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/fluentd-elasticsearch/es-statefulset.yaml)
+  * [elasticsearch](https://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/fluentd-elasticsearch/es-statefulset.yaml)
 * Increase memory and CPU limits slightly for each of the following addons, if used, along with the size of cluster (there is one replica per node but CPU/memory usage increases slightly along with cluster load/size as well):
-  * [FluentD with ElasticSearch Plugin](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/fluentd-elasticsearch/fluentd-es-ds.yaml)
-  * [FluentD with GCP Plugin](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/fluentd-gcp/fluentd-gcp-ds.yaml)
+  * [FluentD with ElasticSearch Plugin](https://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/fluentd-elasticsearch/fluentd-es-ds.yaml)
+  * [FluentD with GCP Plugin](https://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/fluentd-gcp/fluentd-gcp-ds.yaml)
 
 Heapster's resource limits are set dynamically based on the initial size of your cluster (see [#16185](http://issue.k8s.io/16185)
 and [#22940](http://issue.k8s.io/22940)). If you find that Heapster is running
 out of resources, you should adjust the formulas that compute heapster memory request (see those PRs for details).
 
-For directions on how to detect if addon containers are hitting resource limits, see the [Troubleshooting section of Compute Resources](/docs/concepts/configuration/manage-compute-resources-container/#troubleshooting).
-
-In the [future](http://issue.k8s.io/13048), we anticipate to set all cluster addon resource limits based on cluster size, and to dynamically adjust them if you grow or shrink your cluster.
-We welcome PRs that implement those features.
+For directions on how to detect if addon containers are hitting resource limits, see the
+[Troubleshooting section of Compute Resources](/docs/concepts/configuration/manage-resources-containers/#troubleshooting).
 
 ### Allowing minor node failure at startup
 
@@ -126,3 +124,4 @@ running `kube-up.sh` set the environment variable `ALLOWED_NOTREADY_NODES` to wh
 with. This will allow `kube-up.sh` to succeed with fewer than `NUM_NODES` coming up. Depending on the
 reason for the failure, those additional nodes may join later or the cluster may remain at a size of
 `NUM_NODES - ALLOWED_NOTREADY_NODES`.
+
````
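The last hunk above documents `ALLOWED_NOTREADY_NODES`; together with the `NUM_NODES` variable from the first hunk, a large-cluster bring-up can be sketched as (values are illustrative, and the script path varies by provider):

```shell
# Ask for a large cluster but tolerate a handful of nodes
# failing to become Ready during bring-up.
export NUM_NODES=500
export ALLOWED_NOTREADY_NODES=5   # kube-up.sh succeeds with >= 495 nodes
./cluster/kube-up.sh
```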

content/en/docs/setup/best-practices/multiple-zones.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -78,7 +78,7 @@ federation support).
 a single master node by default. While services are highly
 available and can tolerate the loss of a zone, the control plane is
 located in a single zone. Users that want a highly available control
-plane should follow the [high availability](/docs/admin/high-availability) instructions.
+plane should follow the [high availability](/docs/setup/production-environment/tools/kubeadm/high-availability/) instructions.
 
 ### Volume limitations
 The following limitations are addressed with [topology-aware volume binding](/docs/concepts/storage/storage-classes/#volume-binding-mode).
```

content/en/docs/setup/learning-environment/minikube.md

Lines changed: 2 additions & 2 deletions

```diff
@@ -198,7 +198,7 @@ This brief demo guides you on how to start, use, and delete Minikube locally. Fo
 
 The `minikube start` command can be used to start your cluster.
 This command creates and configures a Virtual Machine that runs a single-node Kubernetes cluster.
-This command also configures your [kubectl](/docs/user-guide/kubectl-overview/) installation to communicate with this cluster.
+This command also configures your [kubectl](/docs/reference/kubectl/overview/) installation to communicate with this cluster.
 
 {{< note >}}
 If you are behind a web proxy, you need to pass this information to the `minikube start` command:
@@ -514,6 +514,6 @@ For more information about Minikube, see the [proposal](https://git.k8s.io/commu
 
 ## Community
 
-Contributions, questions, and comments are all welcomed and encouraged! Minikube developers hang out on [Slack](https://kubernetes.slack.com) in the #minikube channel (get an invitation [here](http://slack.kubernetes.io/)). We also have the [kubernetes-dev Google Groups mailing list](https://groups.google.com/forum/#!forum/kubernetes-dev). If you are posting to the list please prefix your subject with "minikube: ".
+Contributions, questions, and comments are all welcomed and encouraged! Minikube developers hang out on [Slack](https://kubernetes.slack.com) in the `#minikube` channel (get an invitation [here](https://slack.kubernetes.io/)). We also have the [kubernetes-dev Google Groups mailing list](https://groups.google.com/forum/#!forum/kubernetes-dev). If you are posting to the list please prefix your subject with "minikube: ".
 
 
```
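The first minikube hunk ends on the note about web proxies. A sketch of what passing that information looks like (the proxy address is hypothetical; check `minikube start --help` for the flags your version supports):

```shell
# Export proxy settings for minikube itself, and forward them
# to the Docker daemon inside the minikube VM via --docker-env.
export HTTP_PROXY=http://proxy.example.com:3128
export HTTPS_PROXY=http://proxy.example.com:3128
minikube start --docker-env HTTP_PROXY=$HTTP_PROXY \
               --docker-env HTTPS_PROXY=$HTTPS_PROXY
```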

content/en/docs/setup/production-environment/on-premises-vm/cloudstack.md

Lines changed: 1 addition & 6 deletions

```diff
@@ -9,12 +9,10 @@ content_type: concept
 
 [CloudStack](https://cloudstack.apache.org/) is a software to build public and private clouds based on hardware virtualization principles (traditional IaaS). To deploy Kubernetes on CloudStack there are several possibilities depending on the Cloud being used and what images are made available. CloudStack also has a vagrant plugin available, hence Vagrant could be used to deploy Kubernetes either using the existing shell provisioner or using new Salt based recipes.
 
-[CoreOS](http://coreos.com) templates for CloudStack are built [nightly](http://stable.release.core-os.net/amd64-usr/current/). CloudStack operators need to [register](http://docs.cloudstack.apache.org/projects/cloudstack-administration/en/latest/templates.html) this template in their cloud before proceeding with these Kubernetes deployment instructions.
+[CoreOS](https://coreos.com) templates for CloudStack are built [nightly](https://stable.release.core-os.net/amd64-usr/current/). CloudStack operators need to [register](https://docs.cloudstack.apache.org/projects/cloudstack-administration/en/latest/templates.html) this template in their cloud before proceeding with these Kubernetes deployment instructions.
 
 This guide uses a single [Ansible playbook](https://github.com/apachecloudstack/k8s), which is completely automated and can deploy Kubernetes on a CloudStack based Cloud using CoreOS images. The playbook, creates an ssh key pair, creates a security group and associated rules and finally starts coreOS instances configured via cloud-init.
 
-
-
 <!-- body -->
 
 ## Prerequisites
@@ -112,10 +110,7 @@ e9af8293... <node #2 IP> role=node
 
 ## Support Level
 
-
 IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
 -------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
 CloudStack | Ansible | CoreOS | flannel | [docs](/docs/setup/production-environment/on-premises-vm/cloudstack/) | | Community ([@Guiques](https://github.com/ltupin/))
 
-
-
```

content/en/docs/setup/production-environment/tools/kops.md

Lines changed: 2 additions & 3 deletions

```diff
@@ -140,7 +140,7 @@ you choose for organization reasons (e.g. you are allowed to create records unde
 but not under `example.com`).
 
 Let's assume you're using `dev.example.com` as your hosted zone. You create that hosted zone using
-the [normal process](http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/CreatingNewSubdomain.html), or
+the [normal process](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/CreatingNewSubdomain.html), or
 with a command such as `aws route53 create-hosted-zone --name dev.example.com --caller-reference 1`.
 
 You must then set up your NS records in the parent domain, so that records in the domain will resolve. Here,
@@ -231,9 +231,8 @@ See the [list of add-ons](/docs/concepts/cluster-administration/addons/) to expl
 ## {{% heading "whatsnext" %}}
 
 
-* Learn more about Kubernetes [concepts](/docs/concepts/) and [`kubectl`](/docs/user-guide/kubectl-overview/).
+* Learn more about Kubernetes [concepts](/docs/concepts/) and [`kubectl`](/docs/reference/kubectl/overview/).
 * Learn more about `kops` [advanced usage](https://kops.sigs.k8s.io/) for tutorials, best practices and advanced configuration options.
 * Follow `kops` community discussions on Slack: [community discussions](https://github.com/kubernetes/kops#other-ways-to-communicate-with-the-contributors)
 * Contribute to `kops` by addressing or raising an issue [GitHub Issues](https://github.com/kubernetes/kops/issues)
 
-
```
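The kops hunk quotes `aws route53 create-hosted-zone`. To finish the delegation step it describes, you also need the new zone's nameservers for the parent domain's NS records; a sketch (the zone-ID lookup via `--query` is my own glue, not from the kops docs):

```shell
# Create the hosted zone for the subdomain
aws route53 create-hosted-zone --name dev.example.com --caller-reference 1

# Fetch the nameservers to copy into the parent zone's NS record set
ZONE_ID=$(aws route53 list-hosted-zones-by-name --dns-name dev.example.com \
          --query 'HostedZones[0].Id' --output text)
aws route53 get-hosted-zone --id "$ZONE_ID" \
  --query 'DelegationSet.NameServers' --output text
```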

content/en/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md

Lines changed: 3 additions & 3 deletions

```diff
@@ -284,7 +284,7 @@ tracker instead of the kubeadm or kubernetes issue trackers.
 {{< /note >}}
 
 Several external projects provide Kubernetes Pod networks using CNI, some of which also
-support [Network Policy](/docs/concepts/services-networking/networkpolicies/).
+support [Network Policy](/docs/concepts/services-networking/network-policies/).
 
 See the list of available
 [networking and network policy add-ons](/docs/concepts/cluster-administration/addons/#networking-and-network-policy).
@@ -578,9 +578,9 @@ options.
 * <a id="lifecycle" />See [Upgrading kubeadm clusters](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)
 for details about upgrading your cluster using `kubeadm`.
 * Learn about advanced `kubeadm` usage in the [kubeadm reference documentation](/docs/reference/setup-tools/kubeadm/kubeadm)
-* Learn more about Kubernetes [concepts](/docs/concepts/) and [`kubectl`](/docs/user-guide/kubectl-overview/).
+* Learn more about Kubernetes [concepts](/docs/concepts/) and [`kubectl`](/docs/reference/kubectl/overview/).
 * See the [Cluster Networking](/docs/concepts/cluster-administration/networking/) page for a bigger list
-of Pod network add-ons.
+of Pod network add-ons.
 * <a id="other-addons" />See the [list of add-ons](/docs/concepts/cluster-administration/addons/) to
 explore other add-ons, including tools for logging, monitoring, network policy, visualization &
 control of your Kubernetes cluster.
```

content/en/docs/setup/production-environment/tools/kubeadm/high-availability.md

Lines changed: 1 addition & 5 deletions

```diff
@@ -22,16 +22,14 @@ and environment. [This comparison topic](/docs/setup/production-environment/tool
 If you encounter issues with setting up the HA cluster, please provide us with feedback
 in the kubeadm [issue tracker](https://github.com/kubernetes/kubeadm/issues/new).
 
-See also [The upgrade documentation](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-15).
+See also [The upgrade documentation](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/).
 
 {{< caution >}}
 This page does not address running your cluster on a cloud provider. In a cloud
 environment, neither approach documented here works with Service objects of type
 LoadBalancer, or with dynamic PersistentVolumes.
 {{< /caution >}}
 
-
-
 ## {{% heading "prerequisites" %}}
 
 
@@ -51,8 +49,6 @@ For the external etcd cluster only, you also need:
 
 - Three additional machines for etcd members
 
-
-
 <!-- steps -->
 
 ## First steps for both methods
```

content/en/docs/setup/production-environment/tools/kubeadm/self-hosting.md

Lines changed: 1 addition & 3 deletions

```diff
@@ -13,14 +13,12 @@ weight: 100
 kubeadm allows you to experimentally create a _self-hosted_ Kubernetes control
 plane. This means that key components such as the API server, controller
 manager, and scheduler run as [DaemonSet pods](/docs/concepts/workloads/controllers/daemonset/)
-configured via the Kubernetes API instead of [static pods](/docs/tasks/administer-cluster/static-pod/)
+configured via the Kubernetes API instead of [static pods](/docs/tasks/configure-pod-container/static-pod/)
 configured in the kubelet via static files.
 
 To create a self-hosted cluster see the
 [kubeadm alpha selfhosting pivot](/docs/reference/setup-tools/kubeadm/kubeadm-alpha/#cmd-selfhosting) command.
 
-
-
 <!-- body -->
 
 #### Caveats
```

content/en/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm.md

Lines changed: 2 additions & 3 deletions

```diff
@@ -15,11 +15,10 @@ If your problem is not listed below, please follow the following steps:
 - Go to [github.com/kubernetes/kubeadm](https://github.com/kubernetes/kubeadm/issues) and search for existing issues.
 - If no issue exists, please [open one](https://github.com/kubernetes/kubeadm/issues/new) and follow the issue template.
 
-- If you are unsure about how kubeadm works, you can ask on [Slack](http://slack.k8s.io/) in #kubeadm, or open a question on [StackOverflow](https://stackoverflow.com/questions/tagged/kubernetes). Please include
+- If you are unsure about how kubeadm works, you can ask on [Slack](https://slack.k8s.io/) in `#kubeadm`,
+or open a question on [StackOverflow](https://stackoverflow.com/questions/tagged/kubernetes). Please include
 relevant tags like `#kubernetes` and `#kubeadm` so folks can help you.
 
-
-
 <!-- body -->
 
 ## Not possible to join a v1.18 Node to a v1.17 cluster due to missing RBAC
```
