By default, when you start MetalLB with the MetalLB Operator, the Operator starts an instance of a `speaker` pod on each node in the cluster.
Only the nodes with a `speaker` pod can advertise a load balancer IP address.
You can configure the `MetalLB` custom resource with a node selector to specify which nodes run the `speaker` pods.

The most common reason to limit the `speaker` pods to specific nodes is to ensure that only nodes with network interfaces on specific networks advertise load balancer IP addresses.
Only the nodes with a running `speaker` pod are advertised as destinations of the load balancer IP address.

If you limit the `speaker` pods to specific nodes and specify `local` for the external traffic policy of a service, then you must ensure that the application pods for the service are deployed to the same nodes.

.Example configuration to limit speaker pods to worker nodes
[source,yaml]
----
apiVersion: metallb.io/v1beta1
kind: MetalLB
metadata:
  name: metallb
  namespace: metallb-system
spec:
  nodeSelector: <.>
    node-role.kubernetes.io/worker: ""
----
<.> The example configuration assigns the speaker pods to worker nodes, but you can specify labels that you assigned to nodes or any other valid node selector.

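If you also set the external traffic policy of a service to `local`, the application pods for that service must run on the nodes that you selected for the `speaker` pods.
The following configuration is a minimal sketch of one way to meet that requirement; the namespace, resource names, image, and ports are hypothetical values, not requirements.

[source,yaml]
----
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app        # hypothetical name
  namespace: example-app   # hypothetical namespace
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      nodeSelector:
        node-role.kubernetes.io/worker: ""   # schedule the application pods on the same nodes as the speaker pods
      containers:
      - name: app
        image: registry.example.com/example-app:latest   # hypothetical image
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: example-app
  namespace: example-app
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # traffic is delivered only to nodes that have a local endpoint for the service
  selector:
    app: example-app
  ports:
  - port: 80
    targetPort: 8080
----

With the `local` external traffic policy, traffic that reaches a node without a local endpoint is not forwarded to another node, which is why the node selector for the application pods must match the nodes that run the `speaker` pods.
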
After you apply a manifest with the `spec.nodeSelector` field, you can check the number of pods that the Operator deployed with the `oc get daemonset -n metallb-system speaker` command.
Similarly, you can display the nodes that match your labels with a command like `oc get nodes -l node-role.kubernetes.io/worker=`.

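For example, the following commands perform those checks; the label in the second command matches the node selector from the example configuration:

[source,terminal]
----
$ oc get daemonset -n metallb-system speaker
----

[source,terminal]
----
$ oc get nodes -l node-role.kubernetes.io/worker=
----
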
modules/nw-metallb-software-components.adoc

// Module included in the following assemblies:
//
// * networking/metallb/about-metallb.adoc

[id="nw-metallb-software-components_{context}"]
= MetalLB software components

The Operator starts the deployment and a single pod.
When you add a service of type `LoadBalancer`, Kubernetes uses the `controller` to allocate an IP address from an address pool.

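For example, a minimal service of type `LoadBalancer` might look like the following sketch; the name, labels, and ports are hypothetical values:

[source,yaml]
----
apiVersion: v1
kind: Service
metadata:
  name: example-service   # hypothetical name
spec:
  type: LoadBalancer      # the controller assigns the external IP address from an address pool
  selector:
    app: example-service
  ports:
  - port: 80
    targetPort: 8080
----
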
`speaker`::
The Operator starts a daemon set for `speaker` pods.
By default, a pod is started on each node in your cluster.
You can limit the pods to specific nodes by specifying a node selector in the `MetalLB` custom resource when you start MetalLB.
+
For layer 2 mode, after the `controller` allocates an IP address for the service, the `speaker` pods use an algorithm to determine which `speaker` pod on which node will announce the load balancer IP address.
The algorithm involves hashing the node name and the load balancer IP address.
See the section about external traffic policy for more information.
// IETF treats protocol names as proper nouns.
The `speaker` uses Address Resolution Protocol (ARP) to announce IPv4 addresses and Neighbor Discovery Protocol (NDP) to announce IPv6 addresses.
+
Requests for the load balancer IP address are routed to the node with the `speaker` that announces the IP address.
After the node receives the packets, the service proxy routes the packets to an endpoint for the service.
The endpoint can be on the same node in the optimal case, or it can be on another node.
The service proxy chooses an endpoint each time a connection is established.

* For more information about node selectors, see xref:../../nodes/scheduling/nodes-scheduler-node-selectors.adoc#nodes-scheduler-node-selectors[Placing pods on specific nodes using node selectors].