**File:** `content/learning-paths/servers-and-cloud-computing/microbenchmark-network-iperf3/simulating-network-conditions.md` (+28, -13 lines)

You can simulate latency and packet loss to test how your application performs under adverse network conditions. This is especially useful when evaluating the impact of congestion, jitter, or unreliable connections in distributed systems.
## Add delay to the TCP connection
The Linux `tc` (traffic control) utility lets you manipulate network interface behavior, such as adding delay, packet loss, or reordering.

First, on the client system, identify the name of your network interface:
```bash
ip addr show
```
The output below shows that `ens5` is the network interface device (NIC) to manipulate.
```output
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
...
```
Run the following command on the client system to add an emulated delay of 10ms on `ens5`:
```bash
sudo tc qdisc add dev ens5 root netem delay 10ms
```
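
To confirm that the rule is active, list the queueing disciplines on the interface. This is a quick sanity check, assuming the same `ens5` interface as above; you should see a `netem` entry with the configured delay:

```bash
# Show the qdiscs attached to ens5; expect a netem entry with "delay 10ms"
tc qdisc show dev ens5
```
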
Rerun the basic TCP test as before on the client:

```bash
iperf3 -c SERVER -V
```
```output
[ 5] local 10.248.213.97 port 43170 connected to 10.248.213.104 port 5201
Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 10 second test, tos 0
...
rcv_tcp_congestion cubic
iperf Done.
```
### Observations
* The `Cwnd` size has grown larger to compensate for the longer response time.
* The bitrate has dropped from ~4.9 to ~2.3 Gbit/sec, demonstrating how even modest latency impacts throughput; a back-of-the-envelope estimate follows this list.
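
The drop follows from the bandwidth-delay product: to sustain a given bitrate, the sender must keep roughly `bitrate * RTT` bytes of unacknowledged data in flight. Here is a rough sketch using the ~4.9 Gbit/sec baseline and the added 10ms of delay:

```bash
# Bandwidth-delay product: Gbit/sec * seconds * 1000 / 8 = MB in flight
# 4.9 Gbit/sec * 0.010 sec = 49 Mbit, or about 6.1 MB of unacknowledged data
echo "4.9 * 0.010 * 1000 / 8" | bc -l
```

If the congestion window cannot grow to cover that amount of in-flight data, throughput falls, which is exactly what the results above show.
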
### Simulate packet loss
To test the resiliency of a distributed application, you can add a simulated packet loss of 1%. Unlike a 10ms delay, this results in no acknowledgment being received for 1% of packets.

Because TCP is a reliable protocol, the lost segments must be retransmitted.

Run these commands on the client system. The first removes the delay configuration, and the second introduces a 1% packet loss:
```bash
sudo tc qdisc del dev ens5 root
sudo tc qdisc add dev ens5 root netem loss 1%
```
Now rerun the basic TCP test; you will see an increased number of retransmissions (`Retr`) and a corresponding drop in bitrate:
```bash
iperf3 -c SERVER -V
```

```output
...
Test Complete. Summary Results:
...
```

Refer to the `tc` [user documentation](https://man7.org/linux/man-pages/man8/tc.8.html) for the different ways to simulate perturbation and check resiliency.
## Explore further with tc
The `tc` tool can simulate the following conditions; example commands are shown after this list:
* Variable latency and jitter
* Packet duplication or reordering
* Bandwidth throttling
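
Here is a sketch of `netem` invocations for each of these conditions, assuming the same `ens5` interface used earlier; remove any existing root qdisc first:

```bash
# Start from a clean slate
sudo tc qdisc del dev ens5 root

# Variable latency: 100ms base delay with +/-20ms of jitter
sudo tc qdisc add dev ens5 root netem delay 100ms 20ms

# Packet duplication (1%) plus reordering (25% of packets sent early);
# netem requires a delay for reordering to take effect
sudo tc qdisc change dev ens5 root netem delay 10ms duplicate 1% reorder 25% 50%

# Bandwidth throttling to 100 Mbit/sec (netem rate control, available on recent kernels)
sudo tc qdisc change dev ens5 root netem rate 100mbit
```
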
For advanced options, refer to the [tc man page](https://man7.org/linux/man-pages/man8/tc.8.html).

**File:** `content/learning-paths/servers-and-cloud-computing/microbenchmark-network-iperf3/tuning.md` (+27, -12 lines)

You can further optimize network performance by adjusting Linux kernel parameters and testing across different environments, including local-to-cloud scenarios.
## Connect from a local machine
This section looks at ways to mitigate performance degradation caused by events such as packet loss.

In this example, you will connect to the server node from a local machine to demonstrate a longer response time. Check the iPerf3 [installation guide](https://iperf.fr/iperf-download.php) to install iPerf3 on other operating systems.

Before starting the test:
- Update your cloud server's **security group** to allow incoming TCP connections from your local machine's public IP (an example command follows this list).
- Use the **public IP address** of the cloud instance when connecting.
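
For an AWS-hosted server, the security group rule might look like the following AWS CLI call. This is a hypothetical sketch: the group ID and client address are placeholders, and TCP port 5201 is iPerf3's default listening port:

```bash
# Allow your local machine's public IP to reach the iperf3 port (5201 by default).
# sg-0123456789abcdef0 and 203.0.113.10 are placeholder values.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 5201 \
  --cidr 203.0.113.10/32
```
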
Running iPerf3 on the local machine and connecting to the cloud server shows a longer round trip time, in this example more than 40ms.

Run this command on your local computer:
```bash
iperf3 -c <server-public-IP> -V
```
Compared to over 2 Gbit/sec within AWS, this test shows a reduced bitrate (~157 Mbit/sec) due to longer round-trip times (for example, >40ms).
```output
Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 10 second test, tos 0
...
```
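
This is the bandwidth-delay product at work again: the achievable bitrate is roughly the TCP window divided by the round-trip time. A rough estimate with the numbers above:

```bash
# window ≈ bitrate * RTT
# 157 Mbit/sec * 0.040 sec = 6.28 Mbit, or about 0.79 MB of in-flight data
echo "157 * 0.040 / 8" | bc -l   # required window in MB
```

Growing the window beyond the kernel's default buffer limits is what the tuning below enables.
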
On the server, you can configure Linux kernel runtime parameters with the `sysctl` command.

There is a plethora of tunable values relating to performance and security. The following command lists all available TCP-related options. The [Linux kernel documentation](https://docs.kernel.org/networking/ip-sysctl.html#ip-sysctl) provides a more detailed description of each parameter.

```bash
sysctl -a | grep tcp
```
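
To read a single parameter rather than filtering the full list, pass its name directly. The congestion control keys referenced in the note below are a convenient example:

```bash
# Current algorithm and the algorithms available on this kernel
sysctl net.ipv4.tcp_congestion_control
sysctl net.ipv4.tcp_available_congestion_control
```
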
{{% notice Note %}}
Depending on your operating system, some parameters might not be available. For example, on AWS Ubuntu 22.04 LTS, only the `cubic` and `reno` congestion control algorithms are supported.
{{% /notice %}}
## Increase TCP buffer sizes
You can increase the kernel's read and write buffer sizes on the server to improve throughput on high-latency connections. This consumes more system memory but allows more in-flight data.

To try it, raise the kernel buffer limits with `sysctl` on the server.
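
A minimal sketch, using illustrative 128 MB maximums rather than any specific recommended values:

```bash
# Raise the maximum receive/send socket buffer sizes (illustrative: 128 MB)
sudo sysctl -w net.core.rmem_max=134217728
sudo sysctl -w net.core.wmem_max=134217728

# min / default / max buffer sizes used by TCP autotuning
sudo sysctl -w net.ipv4.tcp_rmem="4096 131072 134217728"
sudo sysctl -w net.ipv4.tcp_wmem="4096 16384 134217728"
```

These settings take effect immediately but do not persist across reboots; add them to `/etc/sysctl.conf` to make them permanent.
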
After rerunning the same test, and without changing anything on the client, the throughput improves by over 60%.
```output
Test Complete. Summary Results:
...
```
You’ve now completed a guided introduction to:
* Network performance microbenchmarking
* Simulating real-world network conditions
* Tuning kernel parameters for high-latency links

Explore further by testing other parameters, tuning for specific congestion control algorithms, or integrating these benchmarks into CI pipelines for continuous performance evaluation.