Commit 1eb3208

Merge pull request #49447 from rohennes/TELCODOCS-652-metalLB-params
TELCODOCS#652: adding new params for MetalLB deployment
2 parents 1c7ec9f + b5a2bfe commit 1eb3208

7 files changed: +243 -4 lines changed
modules/nw-metallb-operator-deployment-specifications-for-metallb.adoc

Lines changed: 14 additions & 0 deletions
@@ -0,0 +1,14 @@
// Module included in the following assemblies:
//
// * networking/metallb/metallb-operator-install.adoc

[id="nw-metallb-operator-deployment-specifications-for-metallb_{context}"]
= Deployment specifications for MetalLB

When you start an instance of MetalLB by using the `MetalLB` custom resource, you can configure deployment specifications in that resource to manage how the `controller` and `speaker` pods deploy and run in your cluster. Use these deployment specifications to manage the following tasks:

* Select nodes for MetalLB pod deployment, as illustrated in the sketch after this list.
* Manage scheduling by using pod priority and pod affinity.
* Assign CPU limits for MetalLB pods.
* Assign a container RuntimeClass for MetalLB pods.
* Assign metadata for MetalLB pods.
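
For example, the following is a minimal sketch of a `MetalLB` custom resource that restricts where MetalLB pods run. The `nodeSelector` field and the `node-role.kubernetes.io/worker` label are illustrative assumptions, not part of this commit; see the node-selection module included later in the assembly for the documented procedure.

[source,yaml]
----
apiVersion: metallb.io/v1beta1
kind: MetalLB
metadata:
  name: metallb
  namespace: metallb-system
spec:
  # Illustrative assumption: schedule MetalLB speaker pods only on nodes
  # that carry the worker role label.
  nodeSelector:
    node-role.kubernetes.io/worker: ""
----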
modules/nw-metallb-operator-setting-pod-CPU-limits.adoc

Lines changed: 51 additions & 0 deletions
@@ -0,0 +1,51 @@
// Module included in the following assemblies:
//
// * networking/metallb/metallb-operator-install.adoc

[id="nw-metallb-operator-setting-pod-CPU-limits_{context}"]
= Configuring pod CPU limits in a MetalLB deployment

You can optionally assign pod CPU limits to `controller` and `speaker` pods by configuring the `MetalLB` custom resource. Defining CPU limits for the `controller` or `speaker` pods helps you to manage compute resources on the node. This ensures that all pods on the node have the compute resources they need to manage workloads and cluster housekeeping.

.Prerequisites

* You are logged in as a user with `cluster-admin` privileges.

* You have installed the MetalLB Operator.

.Procedure
. Create a `MetalLB` custom resource file, such as `CPULimits.yaml`, to specify the `cpu` value for the `controller` and `speaker` pods:
+
[source,yaml]
----
apiVersion: metallb.io/v1beta1
kind: MetalLB
metadata:
  name: metallb
  namespace: metallb-system
spec:
  logLevel: debug
  controllerConfig:
    resources:
      limits:
        cpu: "200m"
  speakerConfig:
    resources:
      limits:
        cpu: "300m"
----

. Apply the `MetalLB` custom resource configuration:
+
[source,bash]
----
$ oc apply -f CPULimits.yaml
----

.Verification
* To view compute resources for a pod, run the following command, replacing `<pod_name>` with your target pod:
+
[source,bash]
----
$ oc describe pod <pod_name>
----
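
As an optional, machine-readable alternative (an illustrative sketch, not part of the documented procedure), you can print only the CPU limits of the MetalLB containers with a custom-columns query. The `metallb-system` namespace and the `app=metallb` label selector are assumptions based on the examples in this commit.

[source,bash]
----
$ oc get pods -n metallb-system -l app=metallb \
    -o custom-columns=NAME:.metadata.name,CPU_LIMITS:'.spec.containers[*].resources.limits.cpu'
----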
modules/nw-metallb-operator-setting-pod-priority-affinity.adoc

Lines changed: 83 additions & 0 deletions
@@ -0,0 +1,83 @@
// Module included in the following assemblies:
//
// * networking/metallb/metallb-operator-install.adoc

[id="nw-metallb-operator-setting-pod-priority-affinity_{context}"]
= Configuring pod priority and pod affinity in a MetalLB deployment

You can optionally assign pod priority and pod affinity rules to `controller` and `speaker` pods by configuring the `MetalLB` custom resource. The pod priority indicates the relative importance of a pod compared with other pods on a node, and the scheduler uses this priority when it places pods. Set a high priority on your `controller` or `speaker` pods to ensure that they are scheduled ahead of other pods on the node.

Pod affinity manages relationships among pods. Assign pod affinity rules to the `controller` or `speaker` pods to control the node on which the scheduler places a pod relative to other pods. For example, you can use pod affinity to co-locate pods with logically related workloads on the same node, or use pod anti-affinity to keep pods with conflicting workloads on separate nodes. A pod anti-affinity sketch follows the procedure.

.Prerequisites

* You are logged in as a user with `cluster-admin` privileges.

* You have installed the MetalLB Operator.

.Procedure
. Create a `PriorityClass` custom resource, such as `myPriorityClass.yaml`, to configure the priority level. This example uses a high-priority class:
+
[source,yaml]
----
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority
value: 1000000
----

. Apply the `PriorityClass` custom resource configuration:
+
[source,bash]
----
$ oc apply -f myPriorityClass.yaml
----

. Create a `MetalLB` custom resource, such as `MetalLBPodConfig.yaml`, to specify the `priorityClassName` and `podAffinity` values:
+
[source,yaml]
----
apiVersion: metallb.io/v1beta1
kind: MetalLB
metadata:
  name: metallb
  namespace: metallb-system
spec:
  logLevel: debug
  controllerConfig:
    priorityClassName: high-priority
    runtimeClassName: myclass
  speakerConfig:
    priorityClassName: high-priority
    runtimeClassName: myclass
    affinity:
      podAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: metallb
          topologyKey: kubernetes.io/hostname
----

. Apply the `MetalLB` custom resource configuration:
+
[source,bash]
----
$ oc apply -f MetalLBPodConfig.yaml
----

.Verification
* To view the priority class that you assigned to pods in a namespace, run the following command, replacing `<namespace>` with your target namespace:
+
[source,bash]
----
$ oc get pods -n <namespace> -o custom-columns=NAME:.metadata.name,PRIORITY:.spec.priorityClassName
----

* To verify that the scheduler placed pods according to pod affinity rules, view the metadata for the pod's node by running the following command, replacing `<namespace>` with your target namespace:
+
[source,bash]
----
$ oc get pod -o=custom-columns=NODE:.spec.nodeName,NAME:.metadata.name -n <namespace>
----
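
To keep MetalLB pods away from conflicting workloads instead, as mentioned in the introduction, you can use a pod anti-affinity rule of the same shape. The following is a minimal sketch, not part of the documented procedure; the `app: heavy-workload` label is a hypothetical label carried by the conflicting pods, and the same stanza can be set under `controllerConfig`.

[source,yaml]
----
apiVersion: metallb.io/v1beta1
kind: MetalLB
metadata:
  name: metallb
  namespace: metallb-system
spec:
  speakerConfig:
    affinity:
      # Keep speaker pods off nodes that already run pods carrying the
      # hypothetical app: heavy-workload label.
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: heavy-workload
          topologyKey: kubernetes.io/hostname
----

Keep in mind that the `speaker` pods run as a daemon set, so a required anti-affinity rule does not move a `speaker` pod to another node; the pod for an excluded node simply remains unscheduled.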
modules/nw-metallb-operator-setting-runtimeclass.adoc

Lines changed: 70 additions & 0 deletions
@@ -0,0 +1,70 @@
// Module included in the following assemblies:
//
// * networking/metallb/metallb-operator-install.adoc

[id="nw-metallb-operator-setting-runtimeclass_{context}"]
= Configuring a container runtime class in a MetalLB deployment

You can optionally assign a container runtime class to `controller` and `speaker` pods by configuring the `MetalLB` custom resource. For example, for Windows workloads, you can assign a Windows runtime class to the pod, which then uses this runtime class for all containers in the pod.

.Prerequisites

* You are logged in as a user with `cluster-admin` privileges.

* You have installed the MetalLB Operator.

.Procedure
. Create a `RuntimeClass` custom resource, such as `myRuntimeClass.yaml`, to define your runtime class:
+
[source,yaml,options="nowrap",role="white-space-pre"]
----
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: myclass
handler: myconfiguration
----

. Apply the `RuntimeClass` custom resource configuration:
+
[source,bash]
----
$ oc apply -f myRuntimeClass.yaml
----

. Create a `MetalLB` custom resource, such as `MetalLBRuntime.yaml`, to specify the `runtimeClassName` value:
+
[source,yaml]
----
apiVersion: metallb.io/v1beta1
kind: MetalLB
metadata:
  name: metallb
  namespace: metallb-system
spec:
  logLevel: debug
  controllerConfig:
    runtimeClassName: myclass
    annotations: <1>
      controller: demo
  speakerConfig:
    runtimeClassName: myclass
    annotations: <1>
      speaker: demo
----
<1> This example uses `annotations` to add metadata, such as build release information or GitHub pull request information. You can populate annotations with characters that are not permitted in labels. However, you cannot use annotations to identify or select objects.

. Apply the `MetalLB` custom resource configuration:
+
[source,bash,options="nowrap",role="white-space-pre"]
----
$ oc apply -f MetalLBRuntime.yaml
----

.Verification
* To view the container runtime for a pod, run the following command:
+
[source,bash,options="nowrap",role="white-space-pre"]
----
$ oc get pod -o custom-columns=NAME:.metadata.name,STATUS:.status.phase,RUNTIME_CLASS:.spec.runtimeClassName
----
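
As an additional, optional check (an illustrative sketch, not part of the documented procedure), you can confirm that the referenced `RuntimeClass` exists and view its handler. The `myclass` name matches the example above.

[source,bash]
----
$ oc get runtimeclass myclass -o custom-columns=NAME:.metadata.name,HANDLER:.handler
----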

modules/nw-metallb-software-components.adoc

Lines changed: 6 additions & 1 deletion
@@ -7,7 +7,12 @@

When you install the MetalLB Operator, the `metallb-operator-controller-manager` deployment starts a pod. The pod is the implementation of the Operator. The pod monitors for changes to all the relevant resources.

-When the Operator starts an instance of MetalLB, it starts a `controller` deployment and a `speaker` daemon set.
+When the Operator starts an instance of MetalLB, it starts a `controller` deployment and a `speaker` daemon set.
+
+[NOTE]
+====
+You can configure deployment specifications in the `MetalLB` custom resource to manage how `controller` and `speaker` pods deploy and run in your cluster. For more information about these deployment specifications, see the _Additional resources_ section.
+====

`controller`::
The Operator starts the deployment and a single pod. When you add a service of type `LoadBalancer`, Kubernetes uses the `controller` to allocate an IP address from an address pool.
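
As an aside (not part of this commit), you can list the workloads described in the hunk above after MetalLB starts; this sketch assumes the default `metallb-system` namespace.

[source,bash]
----
$ oc get deployment,daemonset -n metallb-system
----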

networking/metallb/about-metallb.adoc

Lines changed: 2 additions & 0 deletions
@@ -52,3 +52,5 @@ MetalLB is incompatible with the IP failover feature. Before you install the Met
* xref:../../networking/configuring_ingress_cluster_traffic/overview-traffic.adoc#overview-traffic-comparision_overview-traffic[Comparison: Fault tolerant access to external IP addresses]

* xref:../../networking/configuring-ipfailover.adoc#nw-ipfailover-remove_configuring-ipfailover[Removing IP failover]
+
+* xref:../../networking/metallb/metallb-operator-install.adoc#nw-metallb-operator-deployment-specifications-for-metallb_metallb-operator-install[Deployment specifications for MetalLB]

networking/metallb/metallb-operator-install.adoc

Lines changed: 17 additions & 3 deletions
@@ -20,15 +20,29 @@ include::modules/nw-metallb-installing-operator-cli.adoc[leveloffset=+1]
// Starting MetalLB on your cluster
include::modules/nw-metallb-operator-initial-config.adoc[leveloffset=+1]

-// Limit speaker pods to specific nodes
+// Deployment specifications for MetalLB CR
+include::modules/nw-metallb-operator-deployment-specifications-for-metallb.adoc[leveloffset=+1]
+
+// Deployment specs to limit speaker pods to specific nodes
include::modules/nw-metallb-operator-limit-speaker-to-nodes.adoc[leveloffset=+2]

+// Deployment specs to set pod priority and pod affinity
+include::modules/nw-metallb-operator-setting-pod-priority-affinity.adoc[leveloffset=+2]
+
+// Deployment specs to set pod CPU limits
+include::modules/nw-metallb-operator-setting-pod-CPU-limits.adoc[leveloffset=+2]
+
+// Deployment specs to set RuntimeClass
+include::modules/nw-metallb-operator-setting-runtimeclass.adoc[leveloffset=+2]
+
[role="_additional-resources"]
[id="additional-resources_metallb-operator-install"]
== Additional resources

-* xref:../../nodes/scheduling/nodes-scheduler-node-selectors.adoc#nodes-scheduler-node-selectors[Placing pods on specific nodes using node selectors].
-* xref:../../nodes/scheduling/nodes-scheduler-taints-tolerations.adoc#nodes-scheduler-taints-tolerations-about[Understanding taints and tolerations].
+* xref:../../nodes/scheduling/nodes-scheduler-node-selectors.adoc#nodes-scheduler-node-selectors[Placing pods on specific nodes using node selectors]
+* xref:../../nodes/scheduling/nodes-scheduler-taints-tolerations.adoc#nodes-scheduler-taints-tolerations-about[Understanding taints and tolerations]
+* xref:../../nodes/pods/nodes-pods-priority.adoc#nodes-pods-priority-about_nodes-pods-priority[Understanding pod priority]
+* xref:../../nodes/scheduling/nodes-scheduler-pod-affinity.adoc#nodes-scheduler-pod-affinity-about_nodes-scheduler-pod-affinity[Understanding pod affinity]

[id="next-steps_{context}"]
== Next steps
