Most modern distributions should have significant improvements with kernels newer than 4.19. Check the current kernel version to make sure that you're running a newer kernel.
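For example, you can confirm the running kernel release with `uname`:

```bash
# Print the running kernel release; look for a version newer than 4.19
uname -r
```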
## Optimizing cross-region transfer speeds in Azure Linux VMs
Azure Linux VMs often experience network performance issues, particularly when transferring large files (1 GB to 50 GB) between regions, such as West Europe and West US. These issues are caused by suboptimal kernel configurations, network buffer settings, and default congestion control algorithms, which result in delayed packets, limited throughput, and inefficient resource usage.
To enhance network performance, consider implementing the following optimizations, which have proven effective in a range of situations on Azure:
**Network buffer settings**: Adjust kernel parameters to maximize read and write memory buffers. Add these configurations to `/etc/sysctl.d/99-azure-network-buffers.conf`:

```plaintext
net.core.rmem_max = 2147483647
net.core.wmem_max = 2147483647
net.ipv4.tcp_rmem = 4096 67108864 1073741824
net.ipv4.tcp_wmem = 4096 67108864 1073741824
```

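Files under `/etc/sysctl.d/` are read at boot. To apply them immediately and confirm the values, you can use `sysctl`; a quick check, assuming the file above is in place:

```bash
# Reload all sysctl configuration files, including /etc/sysctl.d/99-azure-network-buffers.conf
sudo sysctl --system

# Confirm the buffer limits now in effect
sysctl net.core.rmem_max net.core.wmem_max net.ipv4.tcp_rmem net.ipv4.tcp_wmem
```
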
**Congestion control**: Enabling BBR congestion control can often result in better throughput. Add this configuration to `/etc/sysctl.d/99-azure-congestion-control.conf`:

```plaintext
net.ipv4.tcp_congestion_control = bbr
```

Ensure the BBR module is loaded by adding it to `/etc/modules-load.d/99-azure-tcp-bbr.conf`:

```plaintext
tcp_bbr
```

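To verify the change without rebooting, you can load the module manually and query the active congestion control algorithm; these commands are a quick sanity check, not part of the original steps:

```bash
# Load the BBR module now (modules-load.d handles this automatically at boot)
sudo modprobe tcp_bbr

# Apply the sysctl file and confirm BBR is available and active
sudo sysctl --system
sysctl net.ipv4.tcp_available_congestion_control
sysctl net.ipv4.tcp_congestion_control
```
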
**Queue discipline (qdisc)**: Packet processing in Azure is generally improved by setting the default qdisc to `fq`. Add this configuration to `/etc/sysctl.d/99-azure-qdisc.conf`:

```plaintext
net.core.default_qdisc = fq
```

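After reloading sysctl settings (`sudo sysctl --system`), you can confirm the configured default and inspect what a given interface is currently using; the interface name `eth0` below is only an example:

```bash
# Show the configured default queueing discipline
sysctl net.core.default_qdisc

# Show the qdisc currently attached to a specific interface
tc qdisc show dev eth0
```
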
Create a udev rule in `/etc/udev/rules.d/99-azure-qdisc.rules` to ensure the qdisc is applied to network interfaces:

```plaintext
ACTION=="add|change", SUBSYSTEM=="net", KERNEL=="enP*", PROGRAM="/sbin/tc qdisc replace dev \$env{INTERFACE} root noqueue"
ACTION=="add|change", SUBSYSTEM=="net", KERNEL=="eth*", PROGRAM="/sbin/tc qdisc replace dev \$env{INTERFACE} root fq"
```

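udev rules apply when a device is added or changed, so existing interfaces pick up the rule after the rules are reloaded and re-triggered (or at the next reboot). A minimal sequence:

```bash
# Reload udev rules and re-trigger "add" events for network devices
sudo udevadm control --reload-rules
sudo udevadm trigger --subsystem-match=net --action=add
```
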
**IRQ scheduling**: Depending on your workload, you may wish to prevent the irqbalance service from scheduling IRQs on certain CPUs. Update `/etc/default/irqbalance` with the following configuration:

```bash
IRQBALANCE_BANNED_CPULIST=0-2
```

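Changes to `/etc/default/irqbalance` take effect after the service is restarted, assuming irqbalance runs as a systemd service on your distribution:

```bash
# Restart irqbalance so it re-reads /etc/default/irqbalance
sudo systemctl restart irqbalance

# Confirm the service restarted cleanly
systemctl status irqbalance --no-pager
```
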
**udev rules**: Add rules to optimize queue length and manage device flags efficiently. Create the following rule in `/etc/udev/rules.d/99-azure-txqueue-len.rules`:
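The body of that rule isn't included in this excerpt. As an illustration only, a rule of the following shape sets the transmit queue length on matching interfaces; the interface match and the value `10000` are assumptions, not values taken from this article:

```plaintext
# Hypothetical example only; adjust the match and value for your environment
SUBSYSTEM=="net", ACTION=="add|change", KERNEL=="eth*", ATTR{tx_queue_len}="10000"
```
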
When it comes to Linux networking performance, we use SR-IOV with Mellanox drivers (mlx4 or mlx5). Something specific to Azure is that this setup creates two interfaces: a synthetic interface and a virtual interface. [Learn More](/azure/virtual-network/accelerated-networking-how-it-works).
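You can see both interfaces from inside the VM. With accelerated networking, the synthetic interface typically appears as `eth*` and the SR-IOV virtual function as `enP*`, matching the udev rules above:

```bash
# List all network interfaces in brief form; expect a synthetic eth* device
# plus an enP* virtual function when accelerated networking is enabled
ip -br link show
```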
### Additional Notes
System administrators can implement these solutions by editing configuration files under `/etc/sysctl.d/`, `/etc/modules-load.d/`, and `/etc/udev/rules.d/`. Ensure that kernel driver updates and systemd configurations are reviewed for potential regressions.