articles/virtual-network/virtual-network-optimize-network-bandwidth.md
For all other Windows VMs, using Receive Side Scaling (RSS) can reach higher maximum throughput than a VM without RSS.
## Linux virtual machines
RSS is always enabled by default in an Azure Linux Virtual Machine (VM). Linux kernels released since October 2017 include new network optimization options that enable a Linux VM to achieve higher network throughput.
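As a quick check, the number of receive queues that RSS can spread traffic across is visible in sysfs. A minimal sketch, not Azure-specific (interface names vary; the loopback device reports a single queue):

```bash
# Count receive queues per interface; more than one rx-N queue means the
# NIC can spread incoming flows across multiple CPUs (RSS)
for iface in /sys/class/net/*; do
    n=$(ls -d "$iface"/queues/rx-* 2>/dev/null | wc -l)
    echo "$(basename "$iface"): $n receive queue(s)"
done
```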
### Enable Azure Accelerated Networking for optimal throughput
Azure provides accelerated networking, which can significantly improve network performance, latency, and jitter. Two different technologies are currently used depending on the virtual machine size: [Mellanox](/azure/virtual-network/accelerated-networking-how-it-works), which is widely available, and [MANA](/azure/virtual-network/accelerated-networking-mana-overview), which is developed by Microsoft.
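From inside the VM, you can see which datapath each interface uses by checking the kernel driver bound to it; with accelerated networking, an extra interface bound to `mlx4_en`/`mlx5_core` (or `mana`) appears. A sketch that works on any Linux machine:

```bash
# Print the kernel driver behind each network interface; interfaces
# without a backing device (such as loopback) report "none"
for iface in /sys/class/net/*; do
    drv=""
    link=$(readlink "$iface/device/driver" 2>/dev/null) && drv=$(basename "$link")
    echo "$(basename "$iface"): driver=${drv:-none}"
done
```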
### Azure tuned kernels
Some distributions, such as Ubuntu (Canonical) and SUSE, have [Azure tuned kernels](/azure/virtual-machines/linux/endorsed-distros#azure-tuned-kernels).
Use the following command to make sure that you're using the Azure kernel, which usually has the `azure` string in its name.
```bash
uname -r
#sample output of the Azure kernel on an Ubuntu Linux VM
6.8.0-1017-azure
```
### Other Linux distributions
Most modern distributions see significant improvements with newer kernels. Check the current kernel version to make sure that you're running a kernel newer than 4.19, which includes major networking improvements, for example support for *BBR congestion control*.
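One way to check is to compare the running kernel release against the 4.19 baseline with a version-aware sort. A sketch, assuming GNU `sort -V` is available:

```bash
# Compare the running kernel release against the 4.19 baseline
current=$(uname -r | cut -d- -f1)    # e.g. "6.8.0"
required="4.19"
oldest=$(printf '%s\n%s\n' "$required" "$current" | sort -V | head -n1)
if [ "$oldest" = "$required" ]; then
    echo "kernel $current is 4.19 or newer"
else
    echo "kernel $current predates 4.19; consider upgrading"
fi
```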
## Achieving consistent transfer speeds in Azure Linux VMs
Azure Linux VMs often experience network performance issues, particularly when transferring large files (1 GB to 50 GB) between regions, such as West Europe and West US. These issues are caused by older kernel versions, default kernel configurations, default network buffer settings, and default congestion control algorithms, which result in delayed packets, limited throughput, and inefficient resource usage.
To get consistent network performance, consider implementing the following optimizations, which have proven effective in many situations on Azure:
- **Network buffer settings**: Adjust kernel parameters to maximize read and write memory buffers. Add these configurations to `/etc/sysctl.d/99-azure-network-buffers.conf`:
```bash
net.core.busy_poll = 50
net.core.busy_read = 50
```
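To apply the file without a reboot, run `sudo sysctl -p /etc/sysctl.d/99-azure-network-buffers.conf`. Each `net.core` key maps to a file under `/proc/sys`, so the active values can be read back without root; a sketch using two of the keys above:

```bash
# Read the in-effect values directly from /proc/sys (dots become slashes);
# keys not exposed by the running kernel are reported instead of failing
for key in net.core.busy_poll net.core.busy_read; do
    path="/proc/sys/$(echo "$key" | tr . /)"
    if [ -r "$path" ]; then
        echo "$key = $(cat "$path")"
    else
        echo "$key not exposed by this kernel"
    fi
done
```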
- **Congestion control for kernels 4.19 and above**: Enabling Bottleneck Bandwidth and Round-trip propagation time (BBR) congestion control can often result in better throughput. Add this configuration to `/etc/sysctl.d/99-azure-congestion-control.conf`:
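The body of that file is elided in this view; assuming the standard sysctl key name, enabling BBR looks like:

```bash
net.ipv4.tcp_congestion_control = bbr
```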
```bash
ACTION=="add|change", SUBSYSTEM=="net", KERNEL=="eth*", PROGRAM="/sbin/tc qdisc replace dev \$env{INTERFACE} root fq"
```
- **Interrupt Request (IRQ) scheduling**: Depending on your workload, you might want to restrict the irqbalance service from scheduling IRQs on certain CPUs. Update `/etc/default/irqbalance` to specify which CPUs shouldn't have IRQs scheduled by determining [the mask](https://manpages.debian.org/testing/irqbalance/irqbalance.1.en.html#IRQBALANCE_BANNED_CPUS) that excludes the CPUs you want to ban.
More information about how to calculate the mask is available [here](https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_real_time/7/html/tuning_guide/interrupt_and_process_binding).
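Whether the setting takes a hexadecimal mask (`IRQBALANCE_BANNED_CPUS`) or a CPU list (`IRQBALANCE_BANNED_CPULIST`) depends on the irqbalance version; the mask sets one bit per banned CPU. A sketch that computes the mask for CPUs 8-15:

```bash
# Set one bit per banned CPU (8 through 15) and print the result as a
# zero-padded hexadecimal mask
mask=0
for cpu in $(seq 8 15); do
    mask=$(( mask | (1 << cpu) ))
done
printf '%08x\n' "$mask"   # prints 0000ff00
```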
The following example assumes that you want to exclude CPUs 8-15:
```bash
IRQBALANCE_BANNED_CPUS=0000ff00
```
- **UDEV rules**: Add rules to optimize queue length and manage device flags efficiently. Create the following rule in `/etc/udev/rules.d/99-azure-txqueue-len.rules`:
When it comes to Linux networking performance, Azure uses SR-IOV with Mellanox drivers (mlx4 or mlx5). Something specific to Azure is that this creates two interfaces: a synthetic interface and a virtual function (VF) interface. [Learn more](/azure/virtual-network/accelerated-networking-how-it-works).
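You can see this pairing from inside the VM: the VF interface exposes its synthetic counterpart through a `master` symlink in sysfs. A sketch (on a machine without accelerated networking, every interface reports `master=none`):

```bash
# An enslaved VF exposes its synthetic parent via the "master" symlink
for iface in /sys/class/net/*; do
    m=""
    link=$(readlink "$iface/master" 2>/dev/null) && m=$(basename "$link")
    echo "$(basename "$iface"): master=${m:-none}"
done
```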