
Commit b96fe92

Fix markdown lint errors

Signed-off-by: Anja Strunk <[email protected]>

1 parent 9efbaeb · commit b96fe92

2 files changed: +18 additions, -7 deletions

Standards/scs-0214-v1-k8s-node-distribution.md (1 addition, 2 deletions)

```diff
@@ -83,5 +83,4 @@ If the standard is used by a provider, the following decisions are binding and v
 
 [k8s-ha]: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/
 [k8s-large-clusters]: https://kubernetes.io/docs/setup/best-practices/cluster-large/
-[scs-0213-v1]: https://github.com/SovereignCloudStack/standards/blob/main/Standards/scs-0213-v1-k8s-nodes-anti-affinity.md
-[k8s-labels-docs]: https://kubernetes.io/docs/reference/labels-annotations-taints/#topologykubernetesiozone
+[scs-0213-v1]: https://github.com/SovereignCloudStack/standards/blob/main/Standards/scs-0213-v1-k8s-nodes-anti-affinity.md
```

Standards/scs-0214-v2-k8s-node-distribution.md (17 additions, 5 deletions)

```diff
@@ -41,15 +41,15 @@ of the whole cluster.
 ## Design Considerations
 
 Most design considerations of this standard follow the previously written Decision Record
-[Kubernetes Nodes Anti Affinity](https://github.com/SovereignCloudStack/standards/blob/main/Standards/scs-0213-v1-k8s-nodes-anti-affinity.md) as well as the Kubernetes documents about
-[High Availability](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/) and [Best practices for large clusters](https://kubernetes.io/docs/setup/best-practices/cluster-large/).
+[Kubernetes Nodes Anti Affinity][scs-0213-v1] as well as the Kubernetes documents about
+[High Availability][k8s-ha] and [Best practices for large clusters][k8s-large-clusters].
 
 SCS wishes to prefer distributed, highly-available systems due to their obvious advantages
 like fault-tolerance and data redundancy. But it also understands the costs and overhead
 for the providers associated with this effort, since the infrastructure needs to have
 hardware which will just be used to provide fail-over safety or duplication.
 
-The document [Best practices for large clusters](https://kubernetes.io/docs/setup/best-practices/cluster-large/) describes the concept of a failure zone.
+The document [Best practices for large clusters][k8s-large-clusters] describes the concept of a failure zone.
 This term isn't defined any further, but can in this context be described as a number of
 physical (computing) machines in such a vicinity to each other (either through physical
 or logical interconnection in some way), that specific problems inside this zone would put
@@ -67,7 +67,7 @@ This standard formulates the requirement for the distribution of Kubernetes node
 to provide a fault-tolerant and available Kubernetes cluster infrastructure.
 
 The control plane nodes MUST be distributed over multiple physical machines.
-Kubernetes provides [best-practices](https://kubernetes.io/docs/setup/best-practices/multiple-zones/) on this topic, which are also RECOMMENDED by SCS.
+Kubernetes provides [best-practices][k8s-zones] on this topic, which are also RECOMMENDED by SCS.
 
 At least one control plane instance MUST be run in each "failure zone" used for the cluster,
 more instances per "failure zone" are possible to provide fault-tolerance inside a zone.
@@ -83,10 +83,16 @@ These labels MUST be kept up to date with the current state of the deployment.
 
 - `topology.kubernetes.io/zone`
 
+  Corresponds with the label described in [K8s labels documentation][k8s-labels-docs].
+  It provides a logical zone of failure on the side of the provider, e.g. a server rack
+  in the same electrical circuit or multiple machines bound to the internet through a
+  singular network structure. How this is defined exactly is up to the plans of the provider.
+  The field gets autopopulated most of the time by either the kubelet or external mechanisms
+  like the cloud controller.
 
 - `topology.kubernetes.io/region`
 
-  Corresponds with the label described in [K8s labels documentation](https://kubernetes.io/docs/reference/labels-annotations-taints/#topologykubernetesiozone).
+  Corresponds with the label described in [K8s labels documentation][k8s-labels-docs].
   It describes the combination of one or more failure zones into a region or domain, therefore
   showing a larger entity of logical failure zone. An example for this could be a building
   containing racks that are put into such a zone, since they're all prone to failure, if e.g.
@@ -115,3 +121,9 @@ It also produces warnings and informational outputs, e.g., if labels don't seem
 
 This is version 2 of the standard; it extends [version 1](scs-0214-v1-k8s-node-distribution.md) with the
 requirements regarding node labeling.
+
+[k8s-ha]: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/
+[k8s-large-clusters]: https://kubernetes.io/docs/setup/best-practices/cluster-large/
+[scs-0213-v1]: https://github.com/SovereignCloudStack/standards/blob/main/Standards/scs-0213-v1-k8s-nodes-anti-affinity.md
+[k8s-labels-docs]: https://kubernetes.io/docs/reference/labels-annotations-taints/#topologykubernetesiozone
+[k8s-zones]: https://kubernetes.io/docs/setup/best-practices/multiple-zones/
```
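The `topology.kubernetes.io/zone` and `topology.kubernetes.io/region` labels documented in the second file are what the standard's conformance test inspects. As a minimal sketch of how such labels could be checked (this is not the SCS conformance script, and it assumes the official `kubernetes` Python client plus a reachable kubeconfig):

```python
"""Minimal sketch: inspect the topology labels scs-0214-v2 requires.

Not the SCS conformance script; assumes `pip install kubernetes` and a
working kubeconfig for the cluster under test.
"""
from collections import defaultdict

from kubernetes import client, config


def main() -> None:
    config.load_kube_config()  # inside a pod: config.load_incluster_config()
    nodes = client.CoreV1Api().list_node().items

    zones_per_region = defaultdict(set)
    for node in nodes:
        labels = node.metadata.labels or {}
        zone = labels.get("topology.kubernetes.io/zone")
        region = labels.get("topology.kubernetes.io/region")
        if zone is None or region is None:
            # Mirrors the standard's warning when labels don't seem to be set.
            print(f"WARNING: node {node.metadata.name} is missing topology labels")
            continue
        zones_per_region[region].add(zone)

    for region, zones in sorted(zones_per_region.items()):
        print(f"region {region!r}: {len(zones)} failure zone(s): {sorted(zones)}")


if __name__ == "__main__":
    main()
```

On a cluster following this standard, every node should carry both labels (usually auto-populated by the kubelet or the cloud controller), so the warning branch stays silent and each region reports its distinct failure zones.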
