articles/virtual-network/virtual-network-optimize-network-bandwidth.md (65 additions, 2 deletions)
@@ -51,7 +51,7 @@ RSS is always enabled by default in an Azure Linux VM. Linux kernels released si

### Ubuntu for new deployments

The Ubuntu on Azure kernel is heavily optimized for network performance on Azure. Currently, all Ubuntu images by Canonical come with the optimized Azure kernel installed by default.

Use the following command to make sure that you're using the Azure kernel, which is identified by `-azure` at the end of the version.
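The verification command itself isn't part of this excerpt; a minimal check, assuming a Bash shell (the exact version string depends on the image), is:

```bash
# Print the running kernel release; Azure-tuned kernels end in "-azure".
uname -r
# Example output (illustrative only): 6.8.0-1017-azure
```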
@@ -89,9 +89,72 @@ sudo reboot

Most modern distributions should see significant improvements with kernels newer than 4.19. Check the current kernel version to make sure that you're running a newer kernel.
## Optimizing cross-region transfer speeds in Azure Linux VMs
Azure Linux VMs often experience network performance issues, particularly when transferring large files (1 GB to 50 GB) between regions, such as West Europe and West US. These issues are caused by generic kernel configurations, default network buffer settings, and default congestion control algorithms, which result in delayed packets, limited throughput, and inefficient resource usage.

To enhance network performance, consider implementing the following optimizations, which have proven effective in a range of situations on Azure:
- **Network buffer settings**: Adjust kernel parameters to maximize read and write memory buffers. Add these configurations to `/etc/sysctl.d/99-azure-network-buffers.conf`:
```plaintext
net.core.rmem_max = 2147483647
net.core.wmem_max = 2147483647
net.ipv4.tcp_rmem = 4096 67108864 1073741824
net.ipv4.tcp_wmem = 4096 67108864 1073741824
```
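These `sysctl.d` drop-in files (here and in the following items) don't take effect until they're loaded. A minimal way to apply them without rebooting, assuming a systemd-based distribution, is:

```bash
# Reload every sysctl configuration file, including drop-ins under /etc/sysctl.d/.
sudo sysctl --system
```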
- **Congestion control**: Enabling BBR congestion control can often result in better throughput. Add this configuration to `/etc/sysctl.d/99-azure-congestion-control.conf`:

```plaintext
net.ipv4.tcp_congestion_control = bbr
```

- Ensure the BBR module is loaded by adding it to `/etc/modules-load.d/99-azure-tcp-bbr.conf`:

```plaintext
tcp_bbr
```
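Once the settings are applied, a quick sanity check, assuming BBR is available in your kernel (built in or as the `tcp_bbr` module), looks like this:

```bash
# Verify that BBR is the active congestion control algorithm.
sysctl net.ipv4.tcp_congestion_control

# If BBR is built as a module, confirm that it's loaded.
lsmod | grep tcp_bbr
```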
- **Queue discipline (qdisc)**: Packet processing in Azure is generally improved by setting the default qdisc to `fq`. Add this configuration to `/etc/sysctl.d/99-azure-qdisc.conf`:
```plaintext
net.core.default_qdisc = fq
```
- Create a udev rule in `/etc/udev/rules.d/99-azure-qdisc.rules` to ensure the qdisc is applied to network interfaces:
```plaintext
ACTION=="add|change", SUBSYSTEM=="net", KERNEL=="enP*", PROGRAM="/sbin/tc qdisc replace dev \$env{INTERFACE} root noqueue"
ACTION=="add|change", SUBSYSTEM=="net", KERNEL=="eth*", PROGRAM="/sbin/tc qdisc replace dev \$env{INTERFACE} root fq"
```
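udev rules normally apply when a device appears. To re-apply them to interfaces that already exist and confirm the result, something like the following can be used (a sketch, assuming `udevadm` and `tc` are available and that `eth0` is the synthetic interface):

```bash
# Reload udev rules and replay "change" events for network interfaces.
sudo udevadm control --reload
sudo udevadm trigger --subsystem-match=net --action=change

# Inspect the root qdisc now in effect on the synthetic interface.
tc qdisc show dev eth0
```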
- **IRQ scheduling**: Depending on your workload, you might want to prevent the irqbalance service from scheduling IRQs on certain CPUs. Update `/etc/default/irqbalance` to specify which CPUs shouldn't have IRQs scheduled:
```bash
IRQBALANCE_BANNED_CPULIST=0-2
```
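Changes to `/etc/default/irqbalance` only take effect when the service restarts; on systemd-based distributions that's typically:

```bash
# Restart irqbalance so it picks up the banned CPU list.
sudo systemctl restart irqbalance
```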
- **udev rules**: Add rules to optimize queue length and manage device flags efficiently. Create the following rule in `/etc/udev/rules.d/99-azure-txqueue-len.rules`:
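The rule body itself isn't shown in this excerpt. Purely as a hypothetical illustration of what a transmit-queue-length rule of this kind can look like (the matched interfaces and the value `10000` are assumptions, not the article's recommendation):

```plaintext
# Hypothetical example only; the article's actual rule isn't included in this excerpt.
ACTION=="add|change", SUBSYSTEM=="net", KERNEL=="eth*", ATTR{tx_queue_len}="10000"
```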
For Linux networking performance, Azure uses SR-IOV with Mellanox drivers (mlx4 or mlx5). Something specific to Azure is that this setup creates two interfaces: a synthetic interface and a virtual interface. [Learn more](/azure/virtual-network/accelerated-networking-how-it-works).
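A quick way to see which interface is the synthetic one and which is the Mellanox virtual function, assuming `ethtool` is installed (interface names vary between VMs), is:

```bash
# Print the kernel driver behind each network interface.
# On a VM with accelerated networking you typically see hv_netvsc (synthetic)
# alongside mlx4_en or mlx5_core (the SR-IOV virtual function).
for nic in /sys/class/net/*; do
  name=$(basename "$nic")
  echo "== $name"
  ethtool -i "$name" 2>/dev/null | grep '^driver'
done
```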
### Additional notes
System administrators can implement these optimizations by editing configuration files under `/etc/sysctl.d/`, `/etc/modules-load.d/`, and `/etc/udev/rules.d/`. Ensure that kernel driver updates and systemd configurations are reviewed for potential regressions.

For further details on specific configurations and troubleshooting, refer to the Azure documentation on networking performance.
## Related content
- Deploy VMs close to each other for low latency with [proximity placement groups](/azure/virtual-machines/co-location).
- See the optimized result with [Bandwidth/Throughput testing](virtual-network-bandwidth-testing.md) for your scenario.
- Read about how [bandwidth is allocated to virtual machines](virtual-machine-network-throughput.md).
- Learn more with [Azure Virtual Network frequently asked questions](virtual-networks-faq.md).