
Commit 84f3ce0

Merge pull request #2019 from madeline-underwood/iperf3-LP
Iperf3 lp_JA to review
2 parents 1622f8d + ea07df3 commit 84f3ce0

5 files changed (+184 / -89 lines)

content/learning-paths/servers-and-cloud-computing/microbenchmark-network-iperf3/_index.md

Lines changed: 8 additions & 11 deletions
@@ -1,20 +1,17 @@
 ---
-title: Get started with network microbenchmarking and tuning with iperf3
-
-draft: true
-cascade:
-draft: true
+title: Microbenchmark and tune network performance with iPerf3 and Linux traffic control

 minutes_to_complete: 30

-who_is_this_for: This is an introductory topic for performance engineers, Linux system administrators, or application developers who want to microbenchmark, simulate, or tune the networking performance of distributed systems.
+who_is_this_for: This is an introductory topic for performance engineers, Linux system administrators, and application developers who want to microbenchmark, simulate, or tune the networking performance of distributed systems.

 learning_objectives:
-- Understand how to use iperf3 and tc for network performance testing and traffic control to microbenchmark different network conditions.
-- Identify and apply basic runtime parameters to tune application performance.
+- Run accurate network microbenchmark tests using iPerf3.
+- Simulate real-world network conditions using Linux Traffic Control (tc).
+- Tune basic Linux kernel parameters to improve network performance.

 prerequisites:
-- Foundational understanding of networking principles such as TCP/IP and UDP.
+- Basic understanding of networking principles such as Transmission Control Protocol/Internet Protocol (TCP/IP) and User Datagram Protocol (UDP).
 - Access to two [Arm-based cloud instances](https://learn.arm.com/learning-paths/servers-and-cloud-computing/csp/).

 author: Kieran Hejmadi

@@ -25,13 +22,13 @@ subjects: Performance and Architecture
 armips:
 - Neoverse
 tools_software_languages:
-- iperf3
+- iPerf3
 operatingsystems:
 - Linux

 further_reading:
 - resource:
-title: iperf3 user manual
+title: iPerf3 user manual
 link: https://iperf.fr/iperf-doc.php
 type: documentation

content/learning-paths/servers-and-cloud-computing/microbenchmark-network-iperf3/basic-microbenchmarking.md

Lines changed: 46 additions & 20 deletions
@@ -6,17 +6,19 @@ weight: 3
 layout: learningpathall
 ---

-## Microbenchmark the TCP connection
+With your systems configured and reachable, you can now use iPerf3 to microbenchmark TCP and UDP performance between your Arm-based systems.

-You can microbenchmark the bandwidth between the client and server.
+## Microbenchmark the TCP connection

-First, start `iperf` in server mode on the server system with the following command:
+Start by running `iperf3` in server mode on the `SERVER` system:

 ```bash
 iperf3 -s
 ```

-You see the output, indicating the server is ready:
+This starts the server on the default TCP port 5201.
+
+You should see:

 ```output
 -----------------------------------------------------------
@@ -25,20 +27,23 @@ Server listening on 5201 (test #1)

 ```

-The default server port is 5201. Use the `-p` flag to specify another port if it is in use.
+The default server port is 5201. If it is already in use, use the `-p` flag to specify another.
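As an aside on the `-p` option described above, here is a minimal sketch of running the test on an alternate port (5202 is an arbitrary free port; the client and server must use the same value):

```bash
# Start the server on an alternate port (5202 is an arbitrary choice)
iperf3 -s -p 5202

# Point the client at the same port
iperf3 -c SERVER -p 5202
```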

 {{% notice Tip %}}
-If you already have an `iperf3` server running, you can manually kill the process with the following command.
+If you already have an `iperf3` server running, terminate it with:
 ```bash
 sudo kill $(pgrep iperf3)
 ```
 {{% /notice %}}

-Next, on the client node, run the following command to run a simple 10-second microbenchmark using the TCP protocol.
+## Run a TCP test from the client
+
+On the client node, run the following command to run a simple 10-second microbenchmark using the TCP protocol:

 ```bash
 iperf3 -c SERVER -V
 ```
+Replace `SERVER` with your server's hostname or private IP address. The `-V` flag enables verbose output.
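You can also adjust the test itself from the client side. The sketch below uses standard iPerf3 options with arbitrary example values: `-t` sets the test duration in seconds, `-i` the reporting interval, and `-R` reverses the direction so the server transmits to the client:

```bash
# Run a 30-second TCP test, reporting every 5 seconds
iperf3 -c SERVER -V -t 30 -i 5

# Repeat the test in reverse so the server sends to the client
iperf3 -c SERVER -V -t 30 -i 5 -R
```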

 The output is similar to:

@@ -68,28 +73,47 @@ rcv_tcp_congestion cubic

 iperf Done.
 ```
+## TCP result highlights
+
+- The `Cwnd` column shows the congestion window size, which corresponds to the amount of data allowed in flight before an acknowledgment (`ACK`) is received from the server. This value grows as the connection stabilizes and adapts to link quality.
+
+- The `CPU Utilization` row shows the usage on both the sender and receiver. If you are migrating your workload to a different platform, such as from x86 to Arm, this is a useful metric.
+
+- The `snd_tcp_congestion cubic` and `rcv_tcp_congestion cubic` variables show the congestion control algorithm used.
+
+- `Bitrate` shows the throughput achieved. In this example, the `t4g.xlarge` AWS instance saturates its available 5 Gbps of bandwidth.
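If you want to inspect the congestion control algorithm that iPerf3 reports, one possible approach is the kernel's `sysctl` interface; this is a sketch, and the algorithms available depend on your kernel build:

```bash
# Show the congestion control algorithm currently in use
sysctl net.ipv4.tcp_congestion_control

# List the algorithms available on this kernel
sysctl net.ipv4.tcp_available_congestion_control
```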

-- The`Cwnd` column prints the control window size and corresponds to the allowed number of TCP transactions in flight before receiving an acknowledgment `ACK` from the server. This adjusts dynamically to not overwhelm the receiver and adjust for variable link connection strengths.
+![instance-network-size#center](./instance-network-size.png "Instance network size")

-- The `CPU Utilization` row shows both the usage on the sender and receiver. If you are migrating your workload to a different platform, such as from x86 to Arm, there may be variations.
+## UDP result highlights

-- The `snd_tcp_congestion cubic` abd `rcv_tcp_congestion cubic` variables show the congestion control algorithm used.
+You can also microbenchmark the `UDP` protocol using the `-u` flag with iPerf3. Unlike TCP, UDP does not guarantee packet delivery, which means some packets might be lost in transit.

-- This `bitrate` shows the throughput achieved under this microbenchmark. As you can see, the 5 Gbps bandwidth available to the `t4g.xlarge` AWS instance is saturated.
+To evaluate UDP performance, focus on the server-side statistics, particularly:

-![instance-network-size](./instance-network-size.png)
+* Packet loss percentage

-### Microbenchmark UDP connection
+* Jitter (variation in packet arrival time)

-You can also microbenchmark the `UDP` protocol with the `-u` flag. As a reminder, UDP does not guarantee packet delivery with some packets being lost. As such you need to observe the statistics on the server side to see the percent of packets lost and the variation in packet arrival time (jitter). The UDP protocol is widely used in applications that need timely packet delivery, such as online gaming and video calls.
+These metrics help assess reliability and responsiveness under real-time conditions.

-Run the following command from the client to send 2 parallel UDP streams with the `-P 2` option.
+UDP is commonly used in latency-sensitive applications such as:
+
+* Online gaming
+
+* Voice over IP (VoIP)
+
+* Video conferencing and streaming
+
+Because it avoids the overhead of retransmission and ordering, UDP is ideal for scenarios where timely delivery matters more than perfect accuracy.
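To make loss and jitter easier to observe, you can raise the UDP target bitrate with the `-b` flag (iPerf3's UDP default is roughly 1 Mbit/s); the rate and duration below are arbitrary example values:

```bash
# Send UDP at a 1 Gbit/s target rate for 10 seconds, then check the
# server-side report for packet loss and jitter
iperf3 -c SERVER -u -b 1G -t 10
```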
+
+Run the following command from the client to send two parallel UDP streams with the `-P 2` option:

 ```bash
 iperf3 -c SERVER -V -u -P 2
 ```

-Looking at the server output you observe 0% of packets where lost for the short test.
+Look at the server output and you can see that none (0%) of the packets were lost for the short test:

 ```output
 [ ID] Interval Transfer Bitrate Jitter Lost/Total Datagrams
@@ -98,8 +122,10 @@ Looking at the server output you observe 0% of packets where lost for the short
 [SUM] 0.00-10.00 sec 2.51 MBytes 2.10 Mbits/sec 0.015 ms 0/294 (0%) receiver
 ```

-Additionally on the client side, the 2 streams saturated 2 of the 4 cores in the system.
+Additionally, on the client side, the two streams saturated two of the four cores in the system:

 ```output
 CPU Utilization: local/sender 200.3% (200.3%u/0.0%s), remote/receiver 0.2% (0.0%u/0.2%s)
-```
+```
+
+This demonstrates that UDP throughput is CPU-bound when pushing multiple streams.
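Because UDP throughput can be CPU-bound, it can help to control which cores iPerf3 runs on while you scale the number of streams. Below is a sketch using the `-A` affinity option; the core numbers are arbitrary, and the `n,m` form (used from the client) sets the local and remote affinity:

```bash
# Pin the client process to core 0 and the server process to core 1,
# then send two parallel UDP streams
iperf3 -c SERVER -u -P 2 -A 0,1
```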
Lines changed: 70 additions & 24 deletions
@@ -1,72 +1,118 @@
 ---
-title: Prepare for network performance testing
+title: Set up Arm-based Linux systems for network performance testing with iPerf3
 weight: 2

 ### FIXED, DO NOT MODIFY
 layout: learningpathall
 ---

-## Configure two Arm-based Linux computers
+## Environment setup and Learning Path focus

-To perform network performance testing you need two Linux computers. You can use AWS EC2 instances with Graviton processors or any other Linux virtual machines from another cloud service provider.
+To benchmark bandwidth and latency between Arm-based systems, you'll need to configure two Linux machines running on Arm.

-You will also experiment with a local computer and a cloud instance to learn the networking performance differences compared to two cloud instances.
+You can use AWS EC2 instances with Graviton processors, or Linux virtual machines from any other cloud service provider.

-The instructions below use EC2 instances from AWS connected in a virtual private cloud (VPC).
+This Learning Path also includes a local-to-cloud test so that you can compare performance between:

-To get started, create two Arm-based Linux instances, one system to act as the server and the other to act as the client. The instructions below use two `t4g.xlarge` instances running Ubuntu 24.04 LTS.
+* Two cloud-based instances
+* One local system and one cloud instance

-### Install software dependencies
+The setup instructions below use AWS EC2 instances connected within a Virtual Private Cloud (VPC).

-Use the commands below to install `iperf3`, a powerful and flexible open-source command-line tool used for network performance measurement and tuning. It allows network administrators and engineers to actively measure the maximum achievable bandwidth on IP networks.
+To get started, create two Arm-based Linux instances, with each instance serving a distinct role:

-Run the following on both systems:
+* One acting as a client
+* One acting as a server
+
+The instructions below use two `t4g.xlarge` instances running Ubuntu 24.04 LTS.
+
+## Install software dependencies
+
+Use the commands below to install iPerf3, which is a powerful open-source CLI tool for measuring maximum achievable network bandwidth.
+
+Begin by installing iPerf3 on both the client and server systems:

 ```bash
 sudo apt update
 sudo apt install iperf3 -y
 ```
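To confirm the installation on both systems, you can check the installed version:

```bash
# Print the iPerf3 version and build information
iperf3 --version
```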

 {{% notice Note %}}
-If you are prompted to start `iperf3` as a daemon you can answer no.
+If you're prompted to run `iperf3` as a daemon, answer "no".
 {{% /notice %}}

-## Update Security Rules
+## Update security rules

-If you are working in a cloud environment like AWS, you need to update the default security rules to enable specific inbound and outbound protocols.
+If you're working in a cloud environment like AWS, you must update the default security rules to enable specific inbound and outbound protocols.

-From the AWS console, navigate to the security tab. Edit the inbound rules to enable `ICMP`, `UDP` and `TCP` traffic to enable communication between the client and server systems.
+To do this, follow the instructions below using the AWS console:

-![example_traffic](./example_traffic_rules.png)
+* Navigate to the **Security** tab for each instance.
+* Configure the **Inbound rules** to allow the following protocols:
+  * `ICMP` (for ping)
+  * All UDP ports (for UDP tests)
+  * TCP port 5201 (the default iPerf3 port, for traffic between the client and server systems)

-{{% notice Note %}}
-For additional security set the source and port ranges to the values being used. A good solution is to open TCP port 5201 and all UDP ports and use your security group as the source. This doesn't open any traffic from outside AWS.
+![example_traffic#center](./example_traffic_rules.png "AWS console view")
+
+{{% notice Warning %}}
+For secure internal communication, set the source to your instance’s security group. This avoids exposing traffic to the internet while allowing traffic between your systems.
+
+You can restrict the range further by:
+
+* Opening only TCP port 5201
+
+* Allowing all UDP ports (or a specific range)
 {{% /notice %}}
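If you prefer the AWS CLI to the console, the same rules can be sketched as shown below. This assumes the AWS CLI is installed and configured; `sg-0123456789abcdef0` is a placeholder for your own security group ID:

```bash
# Placeholder security group ID - replace with your own
SG_ID=sg-0123456789abcdef0

# Allow iPerf3 TCP traffic on port 5201 from instances in the same security group
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
  --protocol tcp --port 5201 --source-group "$SG_ID"

# Allow all UDP ports from instances in the same security group
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
  --protocol udp --port 0-65535 --source-group "$SG_ID"
```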

 ## Update the local DNS

-To avoid using IP addresses directly, add the IP address of the other system to the `/etc/hosts` file.
+To avoid using IP addresses directly, add the other system's IP address to the `/etc/hosts` file.

-The local IP address of the server and client can be found in the AWS dashboard. You can also use commands like `ifconfig`, `hostname -I`, or `ip address` to find your local IP address.
+You can find private IPs in the AWS dashboard, or by running:
+
+```bash
+hostname -I
+ip address
+ifconfig
+```
+## On the client

-On the client, add the IP address of the server to the `/etc/hosts` file with name `SERVER`.
+Add the server's IP address, and assign it the name `SERVER`:

 ```output
 127.0.0.1 localhost
 10.248.213.104 SERVER
 ```

-Repeat the same thing on the server and add the IP address of the client to the `/etc/hosts` file with the name `CLIENT`.
+## On the server
+
+Add the client's IP address, and assign it the name `CLIENT`:
+
+```output
+127.0.0.1 localhost
+10.248.213.105 CLIENT
+```

-## Confirm server is reachable
+| Instance Name | Role | Description |
+|---------------|--------|------------------------------------|
+| SERVER | Server | Runs `iperf3` in listen mode |
+| CLIENT | Client | Initiates performance tests |
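One way to add these entries without opening an editor is with `tee`, as in the sketch below; substitute the private IP addresses of your own instances:

```bash
# On the client: map the server's private IP address to the name SERVER
echo "10.248.213.104 SERVER" | sudo tee -a /etc/hosts

# On the server: map the client's private IP address to the name CLIENT
echo "10.248.213.105 CLIENT" | sudo tee -a /etc/hosts
```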

-Finally, confirm the client can reach the server with the ping command below. As a reference you can also ping the localhost.
+
+
+
+## Confirm the server is reachable
+
+Finally, confirm the client can reach the server by using the ping command below. If required, you can also ping the localhost:

 ```bash
 ping SERVER -c 3 && ping 127.0.0.1 -c 3
 ```

-The output below shows that both SERVER and localhost (127.0.0.1) are reachable. Naturally, the local host response time is ~10x faster than the server. Your results will vary depending on geographic location of the systems and other networking factors.
+The output below shows that both SERVER and localhost (127.0.0.1) are reachable.
+
+Localhost response times are typically ~10× faster than remote systems, though actual values vary based on system location and network conditions.

 ```output
 PING SERVER (10.248.213.104) 56(84) bytes of data.

@@ -87,4 +133,4 @@ PING 127.0.0.1 (127.0.0.1) 56(84) bytes of data.
 rtt min/avg/max/mdev = 0.022/0.027/0.032/0.004 ms
 ```

-Continue to the next section to learn how to measure the network bandwidth between the systems.
+Now that your systems are configured, the next step is to measure the available network bandwidth between them.
