Commit 9a939f6

Merge pull request #301256 from mabicca/patch-24
Update virtual-network-optimize-network-bandwidth.md
2 parents 04893cb + a48a36c commit 9a939f6

File tree

1 file changed (+21, -36 lines)


articles/virtual-network/virtual-network-optimize-network-bandwidth.md

Lines changed: 21 additions & 36 deletions
@@ -47,53 +47,34 @@ For all other Windows VMs, using Receive Side Scaling (RSS) can reach higher max
 
 ## Linux virtual machines
 
-RSS is always enabled by default in an Azure Linux VM. Linux kernels released since October 2017 include new network optimizations options that enable a Linux VM to achieve higher network throughput.
+RSS is always enabled by default in an Azure Linux Virtual Machine (VM). Linux kernels released since October 2017 include new network optimization options that enable a Linux VM to achieve higher network throughput.
 
-### Ubuntu for new deployments
+### Enable Azure Accelerated Networking for optimal throughput
 
-The Ubuntu on Azure kernel is heavily optimized for excellent network performance on Azure. Currently, all Ubuntu images by Canonical come by default with the optimized Azure kernel installed.
+Azure provides Accelerated Networking, which can significantly improve network performance, latency, and jitter. Two different technologies are currently used, depending on the virtual machine size: [Mellanox](/azure/virtual-network/accelerated-networking-how-it-works), which is widely available, and [MANA](/azure/virtual-network/accelerated-networking-mana-overview), which is developed by Microsoft.
 
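A minimal check, assuming a Linux guest and the Azure CLI, to confirm that Accelerated Networking is in effect; the resource group and NIC names below are placeholders:

```bash
# Inside the VM: with Accelerated Networking enabled, a Mellanox or MANA
# virtual function is exposed to the guest.
lspci | grep -i -E 'mellanox|microsoft'

# From the Azure CLI (placeholder resource group and NIC names):
az network nic show --resource-group myResourceGroup --name myVmNic \
  --query enableAcceleratedNetworking --output tsv
```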
-Use the following command to make sure that you're using the Azure kernel, which has `-azure` at the end of the version.
+### Azure Linux Tuned Kernels
 
-```bash
-uname -r
+Some distributions, such as Ubuntu (Canonical) and SUSE, have [Azure tuned kernels](/azure/virtual-machines/linux/endorsed-distros#azure-tuned-kernels).
 
-#sample output on Azure kernel:
-6.8.0-1017-azure
-```
-
-#### Ubuntu on Azure kernel upgrade for existing VMs
-
-You can get significant throughput performance by upgrading to the Azure Linux kernel. To verify whether you have this kernel, check your kernel version. It should be the same or later than the example.
+Use the following command to make sure that you're using the Azure kernel, which usually has the `azure` string in its name.
 
 ```bash
-#Azure kernel name ends with "-azure"
 uname -r
 
-#sample output on Azure kernel:
-#4.13.0-1007-azure
-```
-
-If your VM doesn't have the Azure kernel, the version number usually begins with 4.4. If the VM doesn't have the Azure kernel, run the following commands as root:
-
-```bash
-#run as root or preface with sudo
-sudo apt-get update
-sudo apt-get upgrade -y
-sudo apt-get dist-upgrade -y
-sudo apt-get install "linux-azure"
-sudo reboot
+#sample output on the Azure kernel on an Ubuntu Linux VM
+6.8.0-1017-azure
 ```
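If an existing Ubuntu VM isn't running an Azure tuned kernel, a minimal sketch for switching to it (assumes an apt-based Ubuntu image and that a reboot is acceptable):

```bash
# Install the Ubuntu Azure tuned kernel metapackage and reboot into it.
sudo apt-get update
sudo apt-get install -y linux-azure
sudo reboot
```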
 
-### Other distributions
+### Other Linux distributions
 
-Most modern distributions should have significant improvements with kernels newer than 4.19+. Check the current kernel version to make sure that you're running a newer kernel.
+Most modern distributions see significant improvements with newer kernels. Check the current kernel version to make sure that you're running a kernel newer than 4.19, which includes major networking improvements, for example support for *BBR Congestion-Based Congestion Control*.
 
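A quick sketch for checking the running kernel and which congestion control algorithms it offers; on some distributions BBR is built as a module and may need to be loaded first:

```bash
# Running kernel version; it should be newer than 4.19.
uname -r

# Congestion control algorithms currently available to the kernel.
sysctl net.ipv4.tcp_available_congestion_control

# Load the BBR module if it isn't listed (assumes it was built as a module).
sudo modprobe tcp_bbr
```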
-## Optimizing cross-region transfer speeds in Azure Linux VMs
+## Achieving consistent transfer speeds in Azure Linux VMs
 
-Azure Linux VMs often experience network performance issues, particularly when transferring large files (1 GB to 50 GB) between regions, such as West Europe and West US. These issues are caused by generic kernel configurations, network buffer settings, and default congestion control algorithms, which result in delayed packets, limited throughput, and inefficient resource usage.
+Azure Linux VMs often experience network performance issues, particularly when transferring large files (1 GB to 50 GB) between regions, such as West Europe and West US. These issues are caused by older kernel versions as well as default kernel configurations, default network buffer settings, and default congestion control algorithms, which result in delayed packets, limited throughput, and inefficient resource usage.
 
-To enhance network performance, consider implementing the following optimizations that are proven effective in many situations on Azure:
+To get consistent network performance, consider implementing the following optimizations that have proven effective in many situations on Azure:
 
 - **Network buffer settings**: Adjust kernel parameters to maximize read and write memory buffers. Add these configurations to `/etc/sysctl.d/99-azure-network-buffers.conf`:
 
@@ -112,7 +93,7 @@ net.core.busy_poll = 50
 net.core.busy_read = 50
 ```
 
-- **Congestion control for kernels 4.19+**: Enabling Bottleneck Bandwidth and Round-trip propagation time (BBR) congestion control can often result in better throughput. Add this configuration to `/etc/sysctl.d/99-azure-congestion-control.conf`:
+- **Congestion-Based Congestion Control for kernels 4.19 and above**: Enabling Bottleneck Bandwidth and Round-trip propagation time (BBR) congestion control can often result in better throughput. Add this configuration to `/etc/sysctl.d/99-azure-congestion-control.conf`:
 
 ```plaintext
 net.ipv4.tcp_congestion_control = bbr
@@ -168,10 +149,14 @@ ACTION=="add|change", SUBSYSTEM=="net", KERNEL=="enP*", PROGRAM="/sbin/tc qdisc
 ACTION=="add|change", SUBSYSTEM=="net", KERNEL=="eth*", PROGRAM="/sbin/tc qdisc replace dev \$env{INTERFACE} root fq"
 ```
 
-- **Interrupt Request (IRQ) scheduling**: Depending on your workload, you may wish to restrict the irqbalance service from scheduling IRQs on certain nodes. Update `/etc/default/irqbalance` to specify which CPUs shouldn't have IRQs scheduled:
+- **Interrupt Request (IRQ) scheduling**: Depending on your workload, you may wish to restrict the irqbalance service from scheduling IRQs on certain nodes. When using irqbalance, update `/etc/default/irqbalance` to specify which CPUs shouldn't have IRQs scheduled. To do so, determine [the mask](https://manpages.debian.org/testing/irqbalance/irqbalance.1.en.html#IRQBALANCE_BANNED_CPUS) that excludes those CPUs.
+
+More information about how to calculate the mask is available in the [Red Hat tuning guide](https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_real_time/7/html/tuning_guide/interrupt_and_process_binding).
+
+The following example assumes that you want to exclude CPUs 8-15 (a short sketch after this code block shows how the mask is derived):
 
 ```bash
-IRQBALANCE_BANNED_CPULIST=0-2
+IRQBALANCE_BANNED_CPUS=0000ff00
 ```
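A short sketch showing how the `0000ff00` mask above can be derived for CPUs 8-15; each banned CPU sets its bit in the hexadecimal mask:

```bash
# Build a CPU mask with bits 8..15 set and print it as hex.
mask=0
for cpu in $(seq 8 15); do
  mask=$(( mask | (1 << cpu) ))
done
printf '%08x\n' "$mask"   # prints 0000ff00
```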
 
 - **UDEV rules**: Add rules to optimize queue length and manage device flags efficiently. Create the following rule in `/etc/udev/rules.d/99-azure-txqueue-len.rules`:
 
@@ -180,7 +165,7 @@ IRQBALANCE_BANNED_CPULIST=0-2
 SUBSYSTEM=="net", ACTION=="add|change", KERNEL=="eth*", ATTR{tx_queue_len}="10000"
 ```
 
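Once the files above are in place, a minimal sketch for activating the settings without a reboot (assumes a systemd-based distribution and root privileges):

```bash
# Load the new sysctl fragments from /etc/sysctl.d/.
sudo sysctl --system

# Re-read the udev rules and replay add events for network devices.
sudo udevadm control --reload
sudo udevadm trigger --subsystem-match=net --action=add

# Restart irqbalance so it picks up the banned-CPU mask.
sudo systemctl restart irqbalance
```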
-### For Packets delayed twice
+### For packets delayed twice
 
 When it comes to Linux networking performance, Azure uses SR-IOV with Mellanox drivers (mlx4 or mlx5). Something specific to Azure is that this creates two interfaces: a synthetic interface and a virtual interface. [Learn more](/azure/virtual-network/accelerated-networking-how-it-works).
 
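A short sketch for seeing both interfaces on an SR-IOV-enabled VM; the synthetic interface typically reports the `hv_netvsc` driver and the virtual function reports a Mellanox (`mlx4`/`mlx5`) driver, though interface names vary:

```bash
# List interfaces, then show which driver backs each one.
ip -brief link
for dev in /sys/class/net/*; do
  name=$(basename "$dev")
  printf '%s: ' "$name"
  ethtool -i "$name" 2>/dev/null | grep '^driver' || echo "no driver info"
done
```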
