
Commit f2d6e3c

Revert any non-editorial changes to scs-0214-v1 that happened after stabilization (#834)
Signed-off-by: Matthias Büchse <[email protected]>
1 parent 3274fff commit f2d6e3c

1 file changed (+0, −36 lines)


Standards/scs-0214-v1-k8s-node-distribution.md

Lines changed: 0 additions & 36 deletions
```diff
@@ -80,42 +80,6 @@ If the standard is used by a provider, the following decisions are binding and v
   can also be scaled vertically first before scaling horizontally.
 - Worker node distribution MUST be indicated to the user through some kind of labeling
   in order to enable (anti)-affinity for workloads over "failure zones".
-- To provide metadata about the node distribution, which also enables testing of this standard,
-  providers MUST label their K8s nodes with the labels listed below.
-  - `topology.kubernetes.io/zone`
-
-    Corresponds with the label described in [K8s labels documentation][k8s-labels-docs].
-    It provides a logical zone of failure on the side of the provider, e.g. a server rack
-    in the same electrical circuit or multiple machines bound to the internet through a
-    singular network structure. How this is defined exactly is up to the plans of the provider.
-    The field gets autopopulated most of the time by either the kubelet or external mechanisms
-    like the cloud controller.
-
-  - `topology.kubernetes.io/region`
-
-    Corresponds with the label described in [K8s labels documentation][k8s-labels-docs].
-    It describes the combination of one or more failure zones into a region or domain, therefore
-    showing a larger entity of logical failure zone. An example for this could be a building
-    containing racks that are put into such a zone, since they're all prone to failure, if e.g.
-    the power for the building is cut. How this is defined exactly is also up to the provider.
-    The field gets autopopulated most of the time by either the kubelet or external mechanisms
-    like the cloud controller.
-
-  - `topology.scs.community/host-id`
-
-    This is an SCS-specific label; it MUST contain the hostID of the physical machine running
-    the hypervisor (NOT: the hostID of a virtual machine). Here, the hostID is an arbitrary identifier,
-    which need not contain the actual hostname, but it should nonetheless be unique to the host.
-    This helps identify the distribution over underlying physical machines,
-    which would be masked if VM hostIDs were used.
-
-## Conformance Tests
-
-The script `k8s-node-distribution-check.py` checks the nodes available with a user-provided
-kubeconfig file. It then determines based on the labels `kubernetes.io/hostname`, `topology.kubernetes.io/zone`,
-`topology.kubernetes.io/region` and `node-role.kubernetes.io/control-plane`, if a distribution
-of the available nodes is present. If this isn't the case, the script produces an error.
-If also produces warnings and informational outputs, if e.g. labels don't seem to be set.
 
 [k8s-ha]: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/
 [k8s-large-clusters]: https://kubernetes.io/docs/setup/best-practices/cluster-large/
```
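
For context on the reverted "Conformance Tests" paragraph, the following is a minimal, hypothetical sketch of a label-based distribution check. It is not the actual `k8s-node-distribution-check.py` from the SCS repository; it assumes the official `kubernetes` Python client and reads "distribution" as the control plane spanning at least two distinct zones, which is only one possible interpretation of the removed text.

```python
# Hypothetical sketch only -- NOT the SCS k8s-node-distribution-check.py.
# Assumes the official `kubernetes` Python client (pip install kubernetes).
import sys
from collections import defaultdict

from kubernetes import client, config

ZONE = "topology.kubernetes.io/zone"
REGION = "topology.kubernetes.io/region"
CONTROL_PLANE = "node-role.kubernetes.io/control-plane"


def check_distribution(kubeconfig: str | None) -> int:
    """Return 0 if nodes appear distributed over failure zones, 1 otherwise."""
    config.load_kube_config(config_file=kubeconfig)
    nodes = client.CoreV1Api().list_node().items

    seen = defaultdict(set)  # (role, label) -> set of observed label values
    for node in nodes:
        labels = node.metadata.labels or {}
        role = "control-plane" if CONTROL_PLANE in labels else "worker"
        for key in (ZONE, REGION):
            if key in labels:
                seen[(role, key)].add(labels[key])
            else:
                print(f"WARNING: node {node.metadata.name} has no label {key}")

    # Assumption: "distribution" means the control plane spans >= 2 zones.
    if len(seen[("control-plane", ZONE)]) < 2:
        print("ERROR: control-plane nodes are not distributed over failure zones")
        return 1
    print(f"INFO: control plane spans {len(seen[('control-plane', ZONE)])} zones, "
          f"workers span {len(seen[('worker', ZONE)])} zones")
    return 0


if __name__ == "__main__":
    sys.exit(check_distribution(sys.argv[1] if len(sys.argv) > 1 else None))
```

Invoked, for example, as `python sketch.py path/to/kubeconfig` (file name and pass/fail criterion are illustrative assumptions, not part of the standard).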
