**content/learning-paths/servers-and-cloud-computing/microbenchmark-network-iperf3/Setup.md** (9 additions, 4 deletions)
For this demonstration I will be using instances available from AWS within a virtual private cloud (VPC).
Create two Arm-based Linux instances, one to act as the server and the other as the client. In this tutorial I will be using two `t4g.xlarge` instances running Ubuntu 22.04 LTS.
### Install dependencies
```bash
sudo apt install iperf3 -y
```
### Update Security Rules
Next, we need to update the default security rules to enable specific inbound and outbound protocols. From the AWS console, navigate to the security tab. Edit the inbound rules to allow `ICMP`, `UDP`, and `TCP` traffic so the client and server can communicate.
{{% notice Note %}}
For security, set the source IP addresses and port ranges to only those being used.
{{% /notice %}}
### Update local DNS
For readability, we will add the server IP address and an alias to the local DNS cache in `/etc/hosts`. The local IP addresses of the server and client can be found in the AWS dashboard.
On the client, add the IP address of the server to the `/etc/hosts` file. Likewise, on the server, add the IP address of the client to the `/etc/hosts` file.
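As a sketch, the alias can be added by appending a single line to `/etc/hosts`. The address and alias below are hypothetical; substitute the private IP from your AWS dashboard and whatever alias you prefer.

```shell
# Append a hypothetical server entry to the client's /etc/hosts
# (replace 10.0.0.12 and SERVER with your instance's private IP and alias)
echo "10.0.0.12 SERVER" | sudo tee -a /etc/hosts
```

Repeat the equivalent step on the server with the client's IP address.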
### Confirm server is reachable
Finally, confirm the client can reach the server with the ping command below. As a reference, we also ping the localhost.
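The ping commands themselves are elided in this capture; a minimal sketch, assuming the `SERVER` alias added to `/etc/hosts` above:

```shell
# Ping the server five times, then localhost as a latency baseline
ping -c 5 SERVER
ping -c 5 localhost
```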
**content/learning-paths/servers-and-cloud-computing/microbenchmark-network-iperf3/_index.md** (2 additions, 2 deletions)

title: Get started with network microbenchmarking and tuning with iperf3
minutes_to_complete: 30
who_is_this_for: Performance engineers, Linux system administrators, or application developers looking to microbenchmark, simulate, or tune the networking performance of distributed systems.
learning_objectives:
- Understand how to use the iperf3 tool to microbenchmark different network conditions
prerequisites:
- Foundational understanding of networking principles such as TCP/IP and UDP.
- Access to Arm-based cloud instances or access to physical hardware.
**content/learning-paths/servers-and-cloud-computing/microbenchmark-network-iperf3/basic-microbenchmarking.md** (15 additions, 11 deletions)
## Microbenchmark TCP Connection
First, we will microbenchmark the bandwidth between the client and server. Start the `iperf3` server on the server node with the following command.
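The server invocation is not shown in this capture; the standard way to start `iperf3` in server mode is:

```shell
# Start iperf3 in server mode (listens on port 5201 by default)
iperf3 -s
```

To listen on a different port, pass `-p`, for example `iperf3 -s -p 5301`.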
```output
Server listening on 5201 (test #1)
```
By default, the server listens on port 5201. Use the `-p` flag to specify another port if it is in use.
{{% notice Tip %}}
If you already have an `iperf3` server running, you can kill the process with the following command.

```bash
sudo kill $(pgrep iperf3)
```
{{% /notice %}}
### Running Basic TCP Microbenchmark
Next, on the client node, run the following command to perform a simple 10-second microbenchmark using the TCP protocol.
```bash
iperf3 -c SERVER -V
```
```output
...
rcv_tcp_congestion cubic

iperf Done.
```
- The `Cwnd` column stands for the congestion window size and corresponds to the amount of TCP data allowed in flight before receiving an acknowledgment (`ACK`) from the server. This adjusts dynamically to avoid overwhelming the receiver and to adapt to variable link quality.
- The `CPU Utilization` row shows the usage on both the sender and receiver. If you are migrating your workload to a different platform, such as from `x86` to `AArch64`, there may be subtle variations.
- The `snd_tcp_congestion cubic` and `rcv_tcp_congestion cubic` variables show the congestion control algorithm used.
- The `bitrate` shows the throughput achieved under this microbenchmark. As we can see from the output above, we have saturated the 5 Gbps bandwidth available to our `t4g.xlarge` AWS instance.

We can also microbenchmark the `UDP` protocol with the `-u` flag. As a reminder, UDP does not guarantee packet delivery, so some packets may be lost. As such, we need to observe the statistics on the server side to see the percentage of packets lost and the variation in packet arrival time (jitter). The UDP protocol is widely used in applications that need timely packet delivery, such as online gaming or video calls.
Run the following command from the client to send 2 parallel UDP streams with the `-P 2` option.
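The exact command is elided in this capture; assuming the same `SERVER` alias, it likely takes this form (`-u` selects UDP, `-P 2` opens two parallel streams):

```shell
# Run a UDP microbenchmark with 2 parallel streams against the server
iperf3 -c SERVER -u -P 2
```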
Looking at the server output, we can observe 0% of packets were lost for our short test.
**content/learning-paths/servers-and-cloud-computing/microbenchmark-network-iperf3/simulating-network-conditions.md** (5 additions, 4 deletions)
---
title: Simulating Different Network Conditions
weight: 4

### FIXED, DO NOT MODIFY
layout: learningpathall
---
## Adding a delay to a TCP connection
The Linux `tc` utility can be used to manipulate traffic control settings. First, find the name of the network interface with the following command.
```bash
ip addr show
```
Run the following command to add an emulated delay of 10ms on `ens5`.

```bash
sudo tc qdisc add dev ens5 root netem delay 10ms
```
Rerunning the basic TCP test (`iperf3 -c SERVER -V`) with the delay in place, we observe the `Cwnd` size grows larger to compensate for the longer response time. Additionally, the bitrate has dropped from ~4.9 to ~2.3 Gbit/sec.
```bash
sudo tc qdisc del dev ens5 root
sudo tc qdisc add dev ens5 root netem loss 1%
```
Rerunning the basic TCP test, we observe an increased number of retries (`Retr`) and a corresponding drop in bitrate.

Please see the `tc` [user documentation](https://man7.org/linux/man-pages/man8/tc.8.html) for the different ways to simulate perturbations and test your system's resiliency to such events.
**content/learning-paths/servers-and-cloud-computing/microbenchmark-network-iperf3/tuning.md** (22 additions, 8 deletions)
### Connecting from Local Machine
Now we can observe ways to mitigate performance degradation due to events such as packet loss. In this example, I will connect to the AWS server node from my local machine to demonstrate a longer response time. If you're not using Ubuntu, please check the `iperf3` installation guide in the [official documentation](https://iperf.fr/iperf-download.php). As the output below shows, we have a round trip time in excess of 40ms.
```output
5 packets transmitted, 5 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 44.896/46.967/49.279/1.444 ms
```
Running a standard TCP client connection with the `iperf3 -c SERVER -V` command shows an average bitrate of 157 Mbps.
```output
Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 10 second test, tos 0
...
Test Complete. Summary Results:
```
### Modify kernel parameters
On the server, we can configure Linux kernel runtime parameters with the `sysctl` command.
There are a plethora of dials to tune that relate to performance and security.
```bash
sysctl -a | grep tcp
```
{{% notice Note %}}
Depending on your operating system, some parameters may not be available. For example, on AWS Ubuntu 22.04 LTS only the `cubic` and `reno` congestion control algorithms are available (`net.ipv4.tcp_available_congestion_control = reno cubic`).
{{% /notice %}}
We can increase the maximum read and write buffer sizes of the kernel on the server to enable more data to be held in flight. This comes at the tradeoff of increased memory utilisation. Run the following commands from the server.
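The exact commands are elided in this capture; a sketch of the kind of tuning meant, with illustrative (not prescriptive) buffer sizes:

```shell
# Raise the maximum socket read/write buffer sizes (values are illustrative)
sudo sysctl -w net.core.rmem_max=134217728
sudo sysctl -w net.core.wmem_max=134217728
```

Changes made with `sysctl -w` do not persist across reboots; add them to `/etc/sysctl.conf` to make them permanent.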
Restart the `iperf3` server. Running the `iperf3 -c SERVER -V` command from the client now leads to a significantly improved bitrate with no modification on the client side.
This learning path serves as an introduction to microbenchmarking and performance tuning. Which parameters to adjust depends on your use case and the non-functional performance requirements of your system.