
Commit 12d851d

Updates
1 parent 9234b9e commit 12d851d

2 files changed: +55 -25 lines changed

content/learning-paths/servers-and-cloud-computing/microbenchmark-network-iperf3/simulating-network-conditions.md

Lines changed: 28 additions & 13 deletions
@@ -6,17 +6,19 @@ weight: 4
 layout: learningpathall
 ---
 
-## Add a delay to the TCP connection
+You can simulate latency and packet loss to test how your application performs under adverse network conditions. This is especially useful when evaluating the impact of congestion, jitter, or unreliable connections in distributed systems.
 
-The Linux `tc` utility can be used to manipulate traffic control settings.
+## Add delay to the TCP connection
 
-First, on the client system, find the name of network interface with the following command:
+The Linux `tc` (traffic control) utility lets you manipulate network interface behavior, such as delay, loss, or reordering.
+
+First, on the client system, identify the name of your network interface:
 
 ```bash
 ip addr show
 ```
 
-The output below shows the `ens5` network interface device (NIC) is the device we want to manipulate.
+The output below shows that the `ens5` network interface device (NIC) is the device that you want to manipulate.
 
 ```output
 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
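As a side note for readers following along, if the full `ip addr show` output is hard to scan, the iproute2 brief mode prints one line per interface, which makes the NIC name easy to spot:

```bash
# Brief mode: one line per interface with name, state, and addresses.
ip -br addr show
```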
@@ -34,7 +36,7 @@ The output below shows the `ens5` network interface device (NIC) is the device w
 
 ```
 
-Run the following command on the client system to add an emulated delay of 10ms on `ens5`.
+Run the following command on the client system to add an emulated delay of 10ms on `ens5`:
 
 ```bash
 sudo tc qdisc add dev ens5 root netem delay 10ms
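Once the netem qdisc is attached, you can confirm it is active before rerunning the benchmark, and remove it when you are finished. A minimal sketch, assuming the interface is named `ens5` as above:

```bash
# Show the queueing discipline attached to the interface; a netem entry
# reporting "delay 10ms" confirms the emulated latency is in place.
tc qdisc show dev ens5

# When you are done testing, remove the netem qdisc to restore
# the interface's default behavior.
sudo tc qdisc del dev ens5 root
```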
@@ -46,10 +48,6 @@ Rerun the basic TCP test as before on the client:
 iperf3 -c SERVER -V
 ```
 
-Observe that the `Cwnd` size has grew larger to compensate for the longer response time.
-
-Additionally, the bitrate has dropped from ~4.9 to ~2.3 `Gbit/sec`.
-
 ```output
 [ 5] local 10.248.213.97 port 43170 connected to 10.248.213.104 port 5201
 Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 10 second test, tos 0
@@ -75,19 +73,26 @@ rcv_tcp_congestion cubic
 
 iperf Done.
 ```
+### Observations
+
+* The `Cwnd` size has grown larger to compensate for the longer response time.
+
+* The bitrate has dropped from ~4.9 to ~2.3 `Gbit/sec`, demonstrating how even modest latency impacts throughput.
 
 ### Simulate packet loss
 
-To test the resiliency of a distributed application you can add a simulated packet loss of 1%. As opposed to a 10ms delay, this will result in no acknowledgment being received for 1% of packets. Given TCP is a lossless protocol a retry must be sent.
+To test the resiliency of a distributed application, you can add a simulated packet loss of 1%. As opposed to a 10ms delay, this will result in no acknowledgment being received for 1% of packets.
+
+Given that TCP is a lossless protocol, a retry must be sent.
 
-Run these commands on the client system:
+Run these commands on the client system. The first removes the delay configuration, and the second introduces a 1% packet loss:
 
 ```bash
 sudo tc qdisc del dev ens5 root
 sudo tc qdisc add dev ens5 root netem loss 1%
 ```
 
-Rerunning the basic TCP test you see an increased number of retries (`Retr`) and a corresponding drop in bitrate.
+Now rerun the basic TCP test, and you will see an increased number of retries (`Retr`) and a corresponding drop in bitrate:
 
 ```bash
 iperf3 -c SERVER -V
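While the lossy test runs, you can cross-check iPerf3's `Retr` column against the kernel's own TCP counters. A minimal sketch using `nstat` from iproute2 (counter names can vary slightly between kernel versions):

```bash
# Print the absolute value of the TCP retransmission counter;
# it should climb noticeably while the 1% loss test is running.
nstat -az TcpRetransSegs
```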
@@ -102,4 +107,14 @@ Test Complete. Summary Results:
 [ 5] 0.00-10.00 sec 4.40 GBytes 3.78 Gbits/sec receiver
 ```
 
-Refer to the `tc` [user documentation](https://man7.org/linux/man-pages/man8/tc.8.html) for the different ways to simulate perturbation and check resiliency.
+## Explore further with tc
+
+The `tc` tool can simulate:
+
+* Variable latency and jitter
+
+* Packet duplication or reordering
+
+* Bandwidth throttling
+
+For advanced options, refer to the [tc man page](https://man7.org/linux/man-pages/man8/tc.8.html).
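For the scenarios in the new "Explore further with tc" list, the netem options below give a flavor of what is possible. These are illustrative sketches, again assuming the `ens5` interface; check the man page for the exact semantics on your kernel:

```bash
# Delay of 10ms with +/-2ms of random jitter.
sudo tc qdisc add dev ens5 root netem delay 10ms 2ms

# Duplicate 1% of packets (use "change" when a netem qdisc already exists).
sudo tc qdisc change dev ens5 root netem duplicate 1%

# Reorder packets: 25% are sent immediately, the rest are delayed by 10ms.
sudo tc qdisc change dev ens5 root netem delay 10ms reorder 25% 50%

# Throttle bandwidth to 100 Mbit/sec (netem rate requires a newer kernel).
sudo tc qdisc change dev ens5 root netem rate 100mbit
```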

content/learning-paths/servers-and-cloud-computing/microbenchmark-network-iperf3/tuning.md

Lines changed: 27 additions & 12 deletions
@@ -6,23 +6,28 @@ weight: 5
 layout: learningpathall
 ---
 
-### Connect from a local machine
+You can further optimize network performance by adjusting Linux kernel parameters and testing across different environments, including local-to-cloud scenarios.
+
+## Connect from a local machine
 
 You can look at ways to mitigate performance degradation due to events such as packet loss.
 
 In this example, you will connect to the server node from a local machine to demonstrate a longer response time. Check the iPerf3 [installation guide](https://iperf.fr/iperf-download.php) to install iPerf3 on other operating systems.
 
-Make sure to set the server security group to accept the TCP connection from your local computer IP address. You will also need to use the public IP for the cloud instance.
+Before starting the test:
+
+- Update your cloud server’s **security group** to allow incoming TCP connections from your local machine’s public IP.
+- Use the **public IP address** of the cloud instance when connecting.
 
 Running iPerf3 on the local machine and connecting to the cloud server shows a longer round trip time, in this example more than 40ms.
 
-On your local computer run:
+Run this command on your local computer:
 
 ```bash
 iperf3 -c <server-public-IP> -V
 ```
 
-Running a standard TCP client connection with iPerf3 shows an average bitrate of 157 Mbps compared to over 2 Gbps when the client and server are both in AWS.
+Compared to over 2 Gbit/sec within AWS, this test shows a reduced bitrate (~157 Mbit/sec) due to longer round-trip times (for example, >40ms).
 
 ```output
 Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 10 second test, tos 0
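On a path with this much latency, a single TCP stream is limited by how fast the congestion window can grow rather than by the link itself. One way to see this from the client side, using iPerf3's standard `-P` flag for parallel streams, is sketched below; no server changes are needed:

```bash
# Open four parallel TCP streams; the [SUM] line reports aggregate bitrate,
# which is typically well above the single-stream result on high-RTT links.
iperf3 -c <server-public-IP> -P 4 -V
```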
@@ -33,9 +38,9 @@ Test Complete. Summary Results:
 [ 8] 0.00-10.03 sec 187 MBytes 156 Mbits/sec receiver
 ```
 
-### Modify kernel parameters
+### Modify kernel parameters on the server
 
-On the server, your can configure Linux kernel runtime parameters with the `sysctl` command.
+On the server, you can configure Linux kernel runtime parameters with the `sysctl` command.
 
 There are a plethora of values to tune that relate to performance and security. The following command can be used to list all available options. The [Linux kernel documentation](https://docs.kernel.org/networking/ip-sysctl.html#ip-sysctl) provides a more detailed description of each parameter.
 
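As a concrete example of working with these parameters, you can query and change the TCP congestion control algorithm at runtime. A minimal sketch, assuming a server where `cubic` is the default:

```bash
# Show the algorithm currently in use (cubic on most distributions).
sysctl net.ipv4.tcp_congestion_control

# Switch to reno for the next test run, then rerun iperf3 to compare.
sudo sysctl -w net.ipv4.tcp_congestion_control=reno
```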
@@ -44,13 +49,15 @@ sysctl -a | grep tcp
 ```
 
 {{% notice Note %}}
-Depending on your operating system, some parameters may not be available. For example on AWS Ubuntu 22.04 LTS only the `cubic` and `reno` congestion control algorithms are available.
+Depending on your operating system, some parameters might not be available. For example, on AWS Ubuntu 22.04 LTS, only the `cubic` and `reno` congestion control algorithms are supported:
 ```bash
 net.ipv4.tcp_available_congestion_control = reno cubic
 ```
 {{% /notice %}}
 
-You can increase the read and write max buffer sizes of the kernel on the server to enable more data to be held. This tradeoff results in increased memory utilization.
+## Increase TCP buffer sizes
+
+You can increase the kernel's read and write buffer sizes on the server to improve throughput on high-latency connections. This consumes more system memory but allows more in-flight data.
 
 To try it, run the following commands on the server:
 
@@ -59,19 +66,19 @@ sudo sysctl net.core.rmem_max=134217728 # default = 212992
 sudo sysctl net.core.wmem_max=134217728 # default = 212992
 ```
 
-Restart the iPerf3 server:
+Then, restart the iPerf3 server:
 
 ```bash
 iperf3 -s
 ```
 
-Run `iperf3` again on the local machine.
+Now rerun iPerf3 on your local machine:
 
 ```bash
 iperf3 -c <server-public-IP> -V
 ```
 
-You see a significantly improved bitrate with no modification on the client side.
+Without changing anything on the client, the throughput improves by over 60%.
 
 ```output
 Test Complete. Summary Results:
@@ -81,4 +88,12 @@ Test Complete. Summary Results:
 
 ```
 
-You now have an introduction to networking microbenchmarking and performance tuning.
+You’ve now completed a guided introduction to:
+
+* Network performance microbenchmarking
+
+* Simulating real-world network conditions
+
+* Tuning kernel parameters for high-latency links
+
+Explore further by testing other parameters, tuning for specific congestion control algorithms, or integrating these benchmarks into CI pipelines for continuous performance evaluation.
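One practical caveat about the buffer-size tuning above: values set with `sysctl` at the command line are lost on reboot. If the larger buffers help, a common pattern is to persist them in a drop-in configuration file (the filename below is illustrative):

```bash
# Persist the larger socket buffer limits across reboots.
sudo tee /etc/sysctl.d/99-network-tuning.conf <<'EOF'
net.core.rmem_max = 134217728
net.core.wmem_max = 134217728
EOF

# Reload all sysctl configuration files to apply the settings.
sudo sysctl --system
```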
