Commit f0e8f98

[OSDOCS-13450]: Isolation details for HCP
1 parent 262ae25 commit f0e8f98

2 files changed: +52 −6 lines changed

hosted_control_planes/hcp-prepare/hcp-distribute-workloads.adoc

Lines changed: 9 additions & 6 deletions

@@ -8,13 +8,15 @@ toc::[]

 Before you get started with {hcp} for {product-title}, you must properly label nodes so that the pods of hosted clusters can be scheduled into infrastructure nodes. Node labeling is also important for the following reasons:

-* To ensure high availability and proper workload deployment. For example, you can set the `node-role.kubernetes.io/infra` label to avoid having the control-plane workload count toward your {product-title} subscription.
+* To ensure high availability and proper workload deployment. For example, to avoid having the control plane workload count toward your {product-title} subscription, you can set the `node-role.kubernetes.io/infra` label.

 * To ensure that control plane workloads are separate from other workloads in the management cluster.

-//lahinson - sept. 2023 - commenting out the following lines until those levels are supported for self-managed hypershift
-//* To ensure that control plane workloads are configured at one of the following multi-tenancy distribution levels:
-//** Everything shared: Control planes for hosted clusters can run on any node that is designated for control planes.
-//** Request serving isolation: Serving pods are requested in their own dedicated nodes.
-//** Nothing shared: Every control plane has its own dedicated nodes.
+* To ensure that control plane workloads are configured at the correct multi-tenancy distribution level for your deployment. The distribution levels are as follows:
+** Everything shared: Control planes for hosted clusters can run on any node that is designated for control planes.
+** Request serving isolation: Serving pods are requested in their own dedicated nodes.
+** Nothing shared: Every control plane has its own dedicated nodes.
+
+For more information about dedicating a node to a single hosted cluster, see "Labeling management cluster nodes".

 [IMPORTANT]
 ====
@@ -24,3 +26,4 @@ Do not use the management cluster for your workload. Workloads must not run on n

 include::modules/hcp-labels-taints.adoc[leveloffset=+1]
 include::modules/hcp-priority-classes.adoc[leveloffset=+1]
 include::modules/hcp-virt-taints-tolerations.adoc[leveloffset=+1]
+include::modules/hcp-isolation.adoc[leveloffset=+1]
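For illustration, the `node-role.kubernetes.io/infra` label discussed in the diff above appears on a node object roughly as follows. This is a hedged sketch: the node name is hypothetical, and in practice you typically apply the label with `oc label node` rather than editing the object.

```yaml
# Hypothetical node metadata showing the infra role label discussed above.
apiVersion: v1
kind: Node
metadata:
  name: worker-infra-1                  # hypothetical node name
  labels:
    node-role.kubernetes.io/infra: ""   # marks the node as infrastructure
```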

modules/hcp-isolation.adoc

Lines changed: 43 additions & 0 deletions

@@ -0,0 +1,43 @@
// Module included in the following assemblies:
//
// * hosted_control_planes/hcp-prepare/hcp-distribute-workloads.adoc

:_mod-docs-content-type: CONCEPT
[id="hcp-isolation_{context}"]
= Control plane isolation

You can configure {hcp} to isolate network traffic or control plane pods.

== Network policy isolation

Each hosted control plane is assigned to run in a dedicated Kubernetes namespace. By default, the Kubernetes namespace denies all network traffic.

The following network traffic is allowed through the network policy that is enforced by the Kubernetes Container Network Interface (CNI):

* Ingress pod-to-pod communication in the same namespace (intra-tenant)
* Ingress on port 6443 to the hosted `kube-apiserver` pod for the tenant
* Metrics scraping for monitoring from the management cluster namespace that has the `network.openshift.io/policy-group: monitoring` label
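As a minimal sketch of the kinds of rules listed above, the allowed traffic could be expressed with `NetworkPolicy` objects like the following. All names and the `app: kube-apiserver` label are assumptions for illustration; the real policies are generated per hosted control plane namespace, not written by hand.

```yaml
# Hypothetical sketch of the rules described above; not the generated policies.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: example-allow-same-namespace   # hypothetical name
  namespace: clusters-example          # hypothetical tenant namespace
spec:
  podSelector: {}            # applies to all pods in the namespace
  ingress:
  - from:
    - podSelector: {}        # intra-tenant: only pods in the same namespace
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: example-allow-kas              # hypothetical name
  namespace: clusters-example
spec:
  podSelector:
    matchLabels:
      app: kube-apiserver    # assumed label on the hosted kube-apiserver pods
  ingress:
  - ports:
    - protocol: TCP
      port: 6443             # ingress to the hosted API server
```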
== Control plane pod isolation

In addition to network policies, each hosted control plane pod runs with the `restricted` security context constraint (SCC). This policy denies access to all host features and requires pods to run with a UID and an SELinux context that are allocated uniquely to each namespace that hosts a customer control plane.

The policy enforces the following constraints:

* Pods cannot run as privileged.
* Pods cannot mount host directory volumes.
* Pods must run as a user in a pre-allocated range of UIDs.
* Pods must run with a pre-allocated MCS label.
* Pods cannot access the host network namespace.
* Pods cannot expose host network ports.
* Pods cannot access the host PID namespace.
* By default, pods drop the following Linux capabilities: `KILL`, `MKNOD`, `SETUID`, and `SETGID`.
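A pod spec fragment that satisfies the constraints above might look like the following sketch. The UID and MCS values shown are illustrative assumptions; the real values come from the namespace's pre-allocated ranges and are injected by SCC admission, not set manually.

```yaml
# Illustrative values only; the restricted SCC allocates the real
# UID range and MCS label for each namespace.
spec:
  hostNetwork: false     # no host network namespace access
  hostPID: false         # no host PID namespace access
  containers:
  - name: example
    securityContext:
      privileged: false
      runAsUser: 1000700000        # from the namespace's pre-allocated UID range
      seLinuxOptions:
        level: s0:c26,c15          # pre-allocated MCS label (example value)
      capabilities:
        drop: ["KILL", "MKNOD", "SETUID", "SETGID"]
```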
The management components, such as `kubelet` and `crio`, on each management cluster worker node are protected by an SELinux label that is not accessible to the SELinux context for pods that support {hcp}.

The following SELinux labels are used for key processes and sockets:

* *kubelet*: `system_u:system_r:unconfined_service_t:s0`
* *crio*: `system_u:system_r:container_runtime_t:s0`
* *crio.sock*: `system_u:object_r:container_var_run_t:s0`
* *<example user container processes>*: `system_u:system_r:container_t:s0:c14,c24`