Commit 2125fe5

Updates
1 parent 0188350 commit 2125fe5

File tree: 5 files changed, +65 −45 lines

content/learning-paths/servers-and-cloud-computing/microbenchmark-network-iperf3/_index.md

Lines changed: 6 additions & 6 deletions
@@ -1,17 +1,17 @@
 ---
 title: Microbenchmark and tune network performance with iPerf3 and Linux traffic control

-whminutes_to_complete: 30
+minutes_to_complete: 30

-who_is_this_for: This is an introductory topic for performance engineers, Linux system administrators, or application developers who want to microbenchmark, simulate, or tune the networking performance of distributed systems.
+who_is_this_for: This is an introductory topic for performance engineers, Linux system administrators, and application developers who want to microbenchmark, simulate, or tune the networking performance of distributed systems.

 learning_objectives:
-- Understand how to use iPerf3 for network microbenchmarking.
-- Use Linux Traffic Control (TC) to simulate different network conditions.
-- Identify and apply basic runtime parameters to tune performance.
+- Run accurate network microbenchmark tests using iPerf3.
+- Simulate real-world network conditions using Linux Traffic Control (tc).
+- Tune basic Linux kernel parameters to improve network performance.

 prerequisites:
-- Foundational understanding of networking principles such as TCP/IP and UDP.
+- Basic understanding of networking principles such as Transmission Control Protocol/Internet Protocol (TCP/IP) and User Datagram Protocol (UDP).
 - Access to two [Arm-based cloud instances](https://learn.arm.com/learning-paths/servers-and-cloud-computing/csp/).

 author: Kieran Hejmadi

content/learning-paths/servers-and-cloud-computing/microbenchmark-network-iperf3/basic-microbenchmarking.md

Lines changed: 36 additions & 14 deletions
@@ -6,17 +6,19 @@ weight: 3
 layout: learningpathall
 ---

-With your systems configured and reachable, you can now use iPerf3 to microbenchmark TCP and UDP performance between your Arm-based systems
+With your systems configured and reachable, you can now use iPerf3 to microbenchmark TCP and UDP performance between your Arm-based systems.

 ## Microbenchmark the TCP connection

-First, start by running `iperf` in server mode on the `SERVER` system with the following command:
+Start by running `iperf` in server mode on the `SERVER` system:

 ```bash
 iperf3 -s
 ```

-This starts the server on the default TCP port 5201. You should see:
+This starts the server on the default TCP port 5201.
+
+You should see:

 ```output
 -----------------------------------------------------------
@@ -25,23 +27,23 @@ Server listening on 5201 (test #1)
 ```

-The default server port is 5201. Use the `-p` flag to specify another port if it is in use.
+The default server port is 5201. If it is already in use, use the `-p` flag to specify another.

 {{% notice Tip %}}
-If you already have an `iperf3` server running, you can kill the process with the following command:
+If you already have an `iperf3` server running, terminate it with:
 ```bash
 sudo kill $(pgrep iperf3)
 ```
 {{% /notice %}}

 ## Run a TCP test from the client

-Next, on the client node, run the following command to run a simple 10-second microbenchmark using the TCP protocol:
+On the client node, run the following command to run a simple 10-second microbenchmark using the TCP protocol:

 ```bash
-iperf3 -c SERVER -V
+iperf3 -c SERVER -v
 ```
-Replace `SERVER` with your server’s IP address or hostname. The -V flag enables verbose output.
+Replace `SERVER` with your server’s hostname or private IP address. The `-v` flag enables verbose output.

 The output is similar to:

@@ -85,15 +87,33 @@ iperf Done.

 ## UDP result highlights

-You can also microbenchmark the `UDP` protocol with the `-u` flag. As a reminder, UDP does not guarantee packet delivery with some packets being lost. As such you need to observe the statistics on the server side to see the percent of packets lost and the variation in packet arrival time (jitter). The UDP protocol is widely used in applications that need timely packet delivery, such as online gaming and video calls.
+You can also microbenchmark the `UDP` protocol using the `-u` flag with iPerf3. Unlike TCP, UDP does not guarantee packet delivery, which means some packets might be lost in transit.
+
+To evaluate UDP performance, focus on the server-side statistics, particularly:
+
+* Packet loss percentage
+* Jitter (variation in packet arrival time)
+
+These metrics help assess reliability and responsiveness under real-time conditions.
+
+UDP is commonly used in latency-sensitive applications such as:
+
+* Online gaming
+* Voice over IP (VoIP)
+* Video conferencing and streaming
+
+Because it avoids the overhead of retransmission and ordering, UDP is ideal for scenarios where timely delivery matters more than perfect accuracy.

-Run the following command from the client to send 2 parallel UDP streams with the `-P 2` option.
+Run the following command from the client to send two parallel UDP streams with the `-P 2` option:

 ```bash
-iperf3 -c SERVER -V -u -P 2
+iperf3 -c SERVER -v -u -P 2
 ```

-Looking at the server output you observe 0% of packets where lost for the short test.
+Look at the server output and you can see that none (0%) of the packets were lost during the short test:

 ```output
 [ ID] Interval Transfer Bitrate Jitter Lost/Total Datagrams
@@ -102,8 +122,10 @@ Looking at the server output you observe 0% of packets where lost for the short
 [SUM] 0.00-10.00 sec 2.51 MBytes 2.10 Mbits/sec 0.015 ms 0/294 (0%) receiver
 ```

-Additionally on the client side, the 2 streams saturated 2 of the 4 cores in the system.
+Additionally, on the client side, the two streams saturated two of the four cores in the system:

 ```output
 CPU Utilization: local/sender 200.3% (200.3%u/0.0%s), remote/receiver 0.2% (0.0%u/0.2%s)
 ```
+
+This demonstrates that UDP throughput is CPU-bound when pushing multiple streams.
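To make the server-side UDP metrics concrete, the sketch below computes a loss percentage and a simple jitter estimate of the kind you read off the iPerf3 summary. The packet counts and timestamps are invented for demonstration only; real figures come from the iperf3 output, and iperf3 itself uses a smoothed jitter estimator rather than this plain mean.

```bash
#!/bin/sh
# Illustrative only: packet counts and arrival timestamps are made up
# for demonstration; in practice these come from the iperf3 report.

# Loss percentage from sent vs received datagram counts.
sent=294
received=294
loss_pct=$(awk -v s="$sent" -v r="$received" 'BEGIN { printf "%.1f", (s - r) * 100 / s }')
echo "loss: ${loss_pct}%"

# A simple jitter estimate: mean absolute change between consecutive
# inter-arrival gaps (iperf3 reports a smoothed variant of this idea).
jitter=$(printf '0.000\n0.034\n0.069\n0.101\n0.136\n' | awk '
  NR > 1 { gap = $1 - prev; if (NR > 2) { d = gap - pgap; if (d < 0) d = -d; sum += d; n++ } pgap = gap }
  { prev = $1 }
  END { printf "%.3f", sum / n }')
echo "jitter: ${jitter} s"
```

Running this prints a 0.0% loss rate for the example counts, mirroring the 0/294 result shown above.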

content/learning-paths/servers-and-cloud-computing/microbenchmark-network-iperf3/setup.md

Lines changed: 17 additions & 15 deletions
@@ -6,21 +6,23 @@ weight: 2
 layout: learningpathall
 ---

-## Configure two Arm-based Linux computers
+## Environment setup and Learning Path focus

-To benchmark bandwidth and latency between Arm-based systems, you'll need to configure two Linux machines running on Arm. You can use AWS EC2 instances with Graviton processors, or Linux virtual machines from any other cloud service provider.
+To benchmark bandwidth and latency between Arm-based systems, you'll need to configure two Linux machines running on Arm.

-This tutorial also walks you through a local-to-cloud test to compare performance between:
+You can use AWS EC2 instances with Graviton processors, or Linux virtual machines from any other cloud service provider.
+
+This tutorial walks you through a local-to-cloud test to compare performance between:

 * Two cloud-based instances
 * One local system and one cloud instance

 The setup instructions below use AWS EC2 instances connected within a Virtual Private Cloud (VPC).

-To get started, create two Arm-based Linux instances, with each instance serving one role:
+To get started, create two Arm-based Linux instances, with each instance serving a distinct role:

-* One acting as a server
 * One acting as a client
+* One acting as a server

 The instructions below use two `t4g.xlarge` instances running Ubuntu 24.04 LTS.

@@ -43,15 +45,15 @@ If you're prompted to run `iperf3` as a daemon, answer "no".

 If you're working in a cloud environment like AWS, you must update the default security rules to enable specific inbound and outbound protocols.

-Using the AWS console, follow these instructions:
+To do this, follow the instructions below using the AWS console:

 * Navigate to the **Security** tab for each instance.
-* Edit the **Inbound rules** to allow the following protocols:
+* Configure the **Inbound rules** to allow the following protocols:
   * `ICMP` (for ping)
   * All UDP ports (for UDP tests)
   * TCP port 5201 (for traffic to enable communication between the client and server systems)

-![example_traffic#center](./example_traffic_rules.png "Example traffic")
+![example_traffic#center](./example_traffic_rules.png "AWS console view")

 {{% notice Warning %}}
 For secure internal communication, set the source to your instance’s security group. This avoids exposing traffic to the internet while allowing traffic between your systems.
@@ -67,7 +69,7 @@ You can restrict the range further by:

 To avoid using IP addresses directly, add the other system's IP address to the `/etc/hosts` file.

-You can find private IPs in the AWS dashboard or by running:
+You can find private IPs in the AWS dashboard, or by running:

 ```bash
 hostname -I
@@ -77,7 +79,7 @@ ifconfig

 ### On the client

-Add the server's IP address and assign it the name `SERVER`:
+Add the server's IP address, and assign it the name `SERVER`:

 ```output
 127.0.0.1 localhost
@@ -86,7 +88,7 @@ Add the server's IP address and assign it the name `SERVER`:

 ### On the server

-Add the client's IP address and assign it the name `CLIENT`:
+Add the client's IP address, and assign it the name `CLIENT`:

 ```output
 127.0.0.1 localhost
@@ -101,17 +103,17 @@ Add the client's IP address and assign it the name `CLIENT`:

-## Confirm server is reachable
+## Confirm the server is reachable

-Finally, confirm the client can reach the server by using the ping command below. As a reference, you can also ping the localhost.
+Finally, confirm the client can reach the server by using the ping command below. If required, you can also ping the localhost:

 ```bash
 ping SERVER -c 3 && ping 127.0.0.1 -c 3
 ```

 The output below shows that both SERVER and localhost (127.0.0.1) are reachable.

-Localhost response times are typically ~10× faster than remote systems, though actual values will vary based on system location and network conditions.
+Localhost response times are typically ~10× faster than remote systems, though actual values vary based on system location and network conditions.

 ```output
 PING SERVER (10.248.213.104) 56(84) bytes of data.
@@ -132,4 +134,4 @@ PING 127.0.0.1 (127.0.0.1) 56(84) bytes of data.
 rtt min/avg/max/mdev = 0.022/0.027/0.032/0.004 ms
 ```

-Now that your systems are configured, you can move on to the next section to learn how to measure the network bandwidth between the systems.
+Now that your systems are configured, the next step is to measure the available network bandwidth between them.
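As a quick sanity check of the `/etc/hosts` naming step, the sketch below parses a hosts-style entry the same way the resolver would. The address is the example one from the ping output above, not a real instance, and the `getent` line shown in the comment is how you would confirm the live mapping on a real system.

```bash
#!/bin/sh
# Hypothetical sketch: verify that a hosts-file style entry maps the
# name SERVER to the expected private IP. The address is an example only.
hosts_entry="10.248.213.104 SERVER"
resolved=$(printf '%s\n' "$hosts_entry" | awk '$2 == "SERVER" { print $1 }')
echo "SERVER resolves to: ${resolved}"

# On a live system, confirm the real mapping with:
# getent hosts SERVER
```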

content/learning-paths/servers-and-cloud-computing/microbenchmark-network-iperf3/simulating-network-conditions.md

Lines changed: 2 additions & 4 deletions
@@ -45,7 +45,7 @@ sudo tc qdisc add dev ens5 root netem delay 10ms
 Rerun the basic TCP test as before on the client:

 ```bash
-iperf3 -c SERVER -V
+iperf3 -c SERVER -v
 ```

 ```output
@@ -95,7 +95,7 @@ sudo tc qdisc add dev ens5 root netem loss 1%
 Now rerun the basic TCP test, and you will see an increased number of retries (`Retr`) and a corresponding drop in bitrate:

 ```bash
-iperf3 -c SERVER -V
+iperf3 -c SERVER -v
 ```

 The output is now:
@@ -112,9 +112,7 @@ Test Complete. Summary Results:
 The tc tool can simulate:

 * Variable latency and jitter
-
 * Packet duplication or reordering
-
 * Bandwidth throttling

 For advanced options, refer to the [tc man page](https://man7.org/linux/man-pages/man8/tc.8.html).
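The bitrate drop under simulated loss can be sanity-checked against the well-known Mathis approximation, which bounds TCP throughput by roughly MSS / (RTT × √loss). The sketch below uses assumed example values (a 1448-byte MSS, plus the 10 ms delay and 1% loss configured with netem above) and ignores the constant factor, so treat the result as a rough ceiling, not a prediction:

```bash
#!/bin/sh
# Rough ceiling from the Mathis approximation (constant factor ignored).
# mss in bytes, rtt in seconds, loss as a probability -- example values only.
mss=1448
rtt=0.010
loss=0.01
ceiling_mbit=$(awk -v m="$mss" -v r="$rtt" -v p="$loss" 'BEGIN {
  bits_per_sec = (m * 8 / r) / sqrt(p)
  printf "%.0f", bits_per_sec / 1e6
}')
echo "approx TCP throughput ceiling: ${ceiling_mbit} Mbit/sec"
```

For these example numbers the ceiling is on the order of tens of Mbit/sec, far below an unimpaired link, which is consistent with the drop you observe after adding 1% loss.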

content/learning-paths/servers-and-cloud-computing/microbenchmark-network-iperf3/tuning.md

Lines changed: 4 additions & 6 deletions
@@ -6,7 +6,7 @@ weight: 5
 layout: learningpathall
 ---

-You can further optimize network performance by adjusting Linux kernel parameters and testing across different environmentsincluding local-to-cloud scenarios.
+You can further optimize network performance by adjusting Linux kernel parameters and testing across different environments, including local-to-cloud scenarios.

 ## Connect from a local machine

@@ -24,7 +24,7 @@ Running iPerf3 on the local machine and connecting to the cloud server shows a l
 Run this command on your local computer:

 ```bash
-iperf3 -c <server-public-IP> -V
+iperf3 -c <server-public-IP> -v
 ```

 Compared to over 2 Gbit/sec within AWS, this test shows a reduced bitrate (~157 Mbit/sec) due to longer round-trip times (for example, >40ms).
@@ -75,7 +75,7 @@ iperf3 -s
 Now rerun iPerf3 on your local machine:

 ```bash
-iperf3 -c <server-public-IP> -V
+iperf3 -c <server-public-IP> -v
 ```

 Without changing anything on the client, the throughput improved by over 60%.
@@ -91,9 +91,7 @@ Test Complete. Summary Results:
 You’ve now completed a guided introduction to:

 * Network performance microbenchmarking
 * Simulating real-world network conditions
 * Tuning kernel parameters for high-latency links

-* Explore further by testing other parameters, tuning for specific congestion control algorithms, or integrating these benchmarks into CI pipelines for continuous performance evaluation.
+You can now explore this area further by testing other parameters, tuning for specific congestion control algorithms, or integrating these benchmarks into CI pipelines for continuous performance evaluation.
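A common starting point for the kernel tuning this page describes is sizing TCP buffers from the bandwidth-delay product (BDP): the link bandwidth multiplied by the round-trip time gives the amount of data that can be in flight, and the socket buffer should be at least that large. The figures below are illustrative only (a ~2 Gbit/sec path with a 40 ms round trip), and the commented sysctl lines are a hypothetical example, not values taken from this Learning Path:

```bash
#!/bin/sh
# Sketch: compute the bandwidth-delay product (BDP) to size TCP buffers.
# Example figures only: ~2 Gbit/sec path, 40 ms round-trip time.
bandwidth_bits=2000000000
rtt_sec=0.040
bdp_bytes=$(awk -v b="$bandwidth_bits" -v r="$rtt_sec" 'BEGIN { printf "%d", b * r / 8 }')
echo "BDP: ${bdp_bytes} bytes"

# A matching (hypothetical) buffer increase might look like:
# sudo sysctl -w net.core.rmem_max="${bdp_bytes}"
# sudo sysctl -w net.ipv4.tcp_rmem="4096 131072 ${bdp_bytes}"
```

If the maximum buffer is smaller than the BDP, the sender stalls waiting for acknowledgements, which is why raising it helps most on high-latency links like the local-to-cloud test above.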
