Each client is dual-homed to the corresponding leaves; to achieve that, interfaces `eth1` and `eth2` are formed into a `bond0` interface.
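On a Linux client, that bonding could be set up roughly as follows. This is a minimal iproute2 sketch; the 802.3ad (LACP) mode and the address are assumptions based on the topology description, not the lab's actual provisioning:

```shell
# enslave the two client uplinks into an LACP (802.3ad) bond
ip link add bond0 type bond mode 802.3ad
ip link set eth1 down && ip link set eth1 master bond0
ip link set eth2 down && ip link set eth2 master bond0
ip link set bond0 up
ip addr add 192.168.100.1/24 dev bond0  # illustrative address for client1
```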
On the leaves side, the access interface `Ethernet-1/1` is part of a LAG interface that is "stretched" between a pair of leaves, forming a logical construct similar to MC-LAG.
eBGP peerings are formed between each leaf and spine pair.
## Fabric overlay
To support the BGP EVPN service, iBGP peerings with the EVPN address family are established in the overlay from each leaf to each spine, with the spines acting as route reflectors.
Ethernet segments are configured to be in an all-active mode to make sure that every access link is utilized in the fabric.
```
git clone https://github.com/srl-labs/opergroup-lab.git && cd opergroup-lab
```
The lab repository contains startup configuration files for the fabric nodes, as well as the files necessary for the telemetry stack to come up operational. To deploy the lab:
```
containerlab deploy
```
This will bring up a lab with an already pre-configured fabric using the startup configs contained within the [`configs`](https://github.com/srl-labs/opergroup-lab/tree/main/configs) directory.
The deployed lab starts up in a pre-provisioned state, where the underlay/overlay configuration has already been done. We proceed with the oper-group use case exploration in the next chapter of this tutorial.
Now that we are [aware of a potential traffic blackholing](problem-statement.md#traffic-loss-scenario) that may happen in all-active EVPN-based fabrics, it is time to meet one of the remediation tactics.
One of the most common use cases that can be covered with the Event Handler framework is known as "Operational group", or "Oper-group" for short. The oper-group feature covers several use cases, but in essence, it creates a relationship between logical elements of a network node so that they become aware of each other, forming a logical group.
In the data center space, the oper-group feature can tackle the problem of traffic black-holing when leaves lose all connectivity to the spine layer. Consider the following simplified Clos topology where clients are multi-homed to leaves:
With EVPN [all-active multihoming](https://documentation.nokia.com/srlinux/22-3/SR_Linux_Book_Files/Advanced_Solutions_Guide/evpn-l2-multihome.html#ariaid-title22) enabled in the fabric, traffic from `client1` is load-balanced over the links attached to the upstream leaves and propagates through the fabric to its destination.
Since all links of a client's bond interface are active, traffic is hashed to each of the constituent links and thus utilizes all available bandwidth. A problem occurs when a leaf loses connectivity to all upstream spines, as illustrated below:
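As a toy model of that hashing (the kernel actually hashes on packet header fields, not stream ids), spreading four streams over the two bond members could be pictured like this:

```shell
# toy illustration: spread stream ids over the two bond member links
for stream in 1 2 3 4; do
  link=$(( stream % 2 + 1 ))   # odd streams -> eth2, even streams -> eth1
  echo "stream $stream -> eth$link"
done
```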
When `leaf1` loses its uplinks, traffic from `client1` still gets sent to it since the client is not aware of any link loss problems happening on the leaf. This results in traffic blackholing on `leaf1`.
To remedy this particular failure scenario, an oper-group can be used. The idea here is to make a logical grouping between certain uplink and downlink interfaces on the leaves so that the downlinks share fate with the uplinks' status. In our example, the oper-group can be configured in such a way that leaves will shut down their downlink interfaces should they detect that the uplinks went down. This operational group's workflow is depicted below:
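Conceptually the rule is tiny; in shell pseudo-form it boils down to the following (the threshold value is an assumption; on SR Linux the equivalent logic is expressed as an Event Handler script):

```shell
# sketch of the oper-group decision: downlinks share fate with uplinks
oper_up_uplinks=0   # pretend both uplinks were just detected down
required_up=1       # minimum healthy uplinks to keep the downlink enabled

if [ "$oper_up_uplinks" -lt "$required_up" ]; then
  downlink_admin_state="disable"
else
  downlink_admin_state="enable"
fi
echo "downlink admin-state: $downlink_admin_state"
```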
When a leaf loses its uplinks, the oper-group gets notified about that fact and reacts accordingly by operationally disabling the access link towards the client. Once the leaf's downlink transitions to a `down` state, the client's bond interface stops using that particular interface for hashing, and traffic moves over to healthy links. In our example, the client stops sending to `leaf1` and everything gets sent over to `leaf2`.
In this tutorial, we will see how SR Linux's Event Handler framework enables oper-group capability.
[^1]: The following versions have been used to create this tutorial. Newer versions might work; please pin to the mentioned versions if they don't.
`docs/tutorials/programmability/event-handler/oper-group/problem-statement.md`
As was mentioned in the [introduction](oper-group-intro.md), without the oper-group feature traffic loss can occur should any leaf lose all its uplinks. Let's lab a couple of scenarios that highlight the problem that oper-group is set to remedy.
## Healthy fabric scenario
The startup configuration that our lab is equipped with brings our fabric to a state where traffic can be exchanged between clients. Users can verify that by running a simple iperf-based traffic test.
In our lab, `client2` runs an iperf3 server, while `client1` acts as a client. With the following command we can run a single stream of TCP data with a bitrate of 200 Kbps:
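The exact invocation is not preserved in this excerpt; assuming `client2`'s server address `192.168.100.2` seen in the connection output, it would be along these lines:

```shell
# on client1: a single TCP stream at 200 Kbps towards client2's iperf3 server
iperf3 -c 192.168.100.2 -b 200K
```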
```
Connecting to host 192.168.100.2, port 5201
```
This visualization tells us that `client1` hashed its single stream[^1] over the `client1:eth2` interface that connects to `leaf2:e1-1`. On the "Leaf2 e1-1 throughput" panel in the bottom right we see incoming traffic that indicates data is flowing in via this interface.
Next, we see that `leaf2` used its `e1-50` interface to send data over to the spine layer, through which it reaches the `client2` side[^2].
### Load balancing on the client side
Next, it is interesting to verify that the client can utilize both links in its `bond0` interface, since our L2 EVPN service uses an all-active multihoming mode for the ethernet segment. To test that we need to tell iperf to use eight parallel streams; that is what the `-P` flag is for.
With the following command we start eight parallel streams, 50 Kbps bitrate each, and this time for 20 seconds.
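That command is likewise not preserved here; given the parameters just described, a plausible form is (the server address is an assumption carried over from the single-stream test):

```shell
# on client1: eight parallel TCP streams, 50 Kbps each, for 20 seconds
iperf3 -c 192.168.100.2 -P 8 -b 50K -t 20
```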
Our telemetry visualization makes it clear that client-side load balancing is indeed happening, as both leaves receive traffic on their `e1-1` interface.
`leaf1` and `leaf2` both chose to use their `e1-49` interface to send the traffic to the spine layer.
/// details | Load balancing in the fabric?
You may have noticed that when we send a few streams (for example, two parallel streams), the client may hash the two streams over the two links in its bond interface, yet the leaves use a single uplink interface towards the fabric. This is due to the fact that each leaf got a single "stream" and thus a single uplink interface was utilized.
We can see ECMP in the fabric happening if we send more streams, for example, eight of them:
That way the leaves will have more streams to handle, and they will load balance the streams nicely, as shown in [this picture](https://gitlab.com/rdodin/pics/-/wikis/uploads/85bd945ff272db2da4d4cd1132c47803/image.png).
///
## Traffic loss scenario
Now to the interesting part. What happens if one of the leaves suddenly loses all its uplinks while traffic is mid-flight? Will traffic be re-routed to the healthy leaf? Will it be dropped? Let's lab it out.
We will send eight streams for 40 seconds, and somewhere in the middle we will execute the `set-uplinks.sh` script, which administratively disables the uplinks on a given leaf.
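The traffic command itself is not preserved in this excerpt; assuming the same parameters as in the previous test but with a 40-second duration, it could be:

```shell
# on client1: eight parallel streams for 40 seconds; we break leaf1 mid-test
iperf3 -c 192.168.100.2 -P 8 -b 50K -t 40
```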
* [00:00 - 00:15] We started eight streams. These streams were evenly distributed over the two links of the bond interface of our `client1`.
Both leaves report the same amount of traffic detected on their `e1-1` interface, so each leaf handles four streams.
The leaves then load balance these streams over their two uplinks. We see that both `e1-49` and `e1-50` report an outgoing bitrate, so every uplink on our leaves is utilized and handling a share of the streams.
* [00:34 - 01:00] At this very moment, we execute `bash set-uplinks.sh leaf1 disable` putting uplinks on `leaf1` administratively down. The bottom left panel immediately indicates that the operational status of both uplinks went down.
But pay close attention to what is happening with the traffic throughput. The traffic rate on `leaf1`'s access interface drops immediately, as the TCP sessions of the streams it was handling stopped receiving ACKs.
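For reference, the `set-uplinks.sh` script ships with the lab repository; a hypothetical sketch of what it does could look like this (the container naming, interface ids, and the `sr_cli` one-shot invocation are assumptions, not the repo's actual code):

```shell
#!/bin/bash
# usage: ./set-uplinks.sh <leaf> <enable|disable>
node=$1
state=$2
for iface in ethernet-1/49 ethernet-1/50; do
  # flip the admin state of an uplink via the node's CLI
  docker exec clab-opergroup-"$node" sr_cli \
    "enter candidate; set interface $iface admin-state $state; commit now"
done
```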
Let's see what exactly is happening there.
This scenario sets the stage for oper-group, as this feature provides the means to make sure that a client won't use a link that is connected to a leaf that has no means to forward traffic to the fabric.
[^1]: iperf3 sends data as a single stream unless the `-P` flag is set.
[^2]: when you start traffic for the first time, you might wonder why a leaf that is not used for traffic forwarding gets some traffic on its uplink interface for a brief moment, as shown [here](https://twitter.com/ntdvps/status/1522265449265864706). Check out that link to see why this is happening.