`articles/virtual-network/virtual-network-optimize-network-bandwidth.md`

If your Windows VM supports *accelerated networking*, enable that feature for optimal throughput. For more information, see [Create a Windows VM with accelerated networking](create-vm-accelerated-networking-powershell.md).

For all other Windows VMs, using Receive Side Scaling (RSS) can achieve higher maximum throughput than a VM without RSS. RSS might be disabled by default in a Windows VM. To check whether RSS is enabled, and to enable it if it isn't, follow these steps:
1. Use the [Get-NetAdapterRss](/powershell/module/netadapter/get-netadapterrss) PowerShell command to see whether RSS is enabled for a network adapter. In the following example, the output returned from `Get-NetAdapterRss` shows that RSS isn't enabled.
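   A rough sketch of the check and the fix, assuming the adapter is named `Ethernet` (the name is an example, not a value from this article):

   ```powershell
   # Check whether RSS is enabled for the adapter (the adapter name is an example)
   Get-NetAdapterRss -Name "Ethernet"

   # If the output shows that RSS is disabled, turn it on
   Enable-NetAdapterRss -Name "Ethernet"
   ```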
The Ubuntu on Azure kernel is heavily optimized for network performance on Azure. Currently, all Ubuntu images from Canonical come with the optimized Azure kernel installed by default.

Use the following command to make sure that you're using the Azure kernel, which has `-azure` at the end of the version.
```bash
uname -r
```
## Optimizing cross-region transfer speeds in Azure Linux VMs
Azure Linux VMs often experience network performance issues, particularly when transferring large files (1 GB to 50 GB) between regions, such as West Europe and West US. These issues are caused by generic kernel configurations, network buffer settings, and default congestion control algorithms, which result in delayed packets, limited throughput, and inefficient resource usage.

To enhance network performance, consider implementing the following optimizations, which have proven effective in many situations on Azure:
- **Network buffer settings**: Adjust kernel parameters to maximize read and write memory buffers. Add these configurations to `/etc/sysctl.d/99-azure-network-buffers.conf`:
```plaintext
net.ipv4.tcp_mem = 4096 87380 67108864
net.ipv4.udp_mem = 4096 87380 33554432
net.ipv4.tcp_rmem = 4096 87380 67108864
net.ipv4.tcp_wmem = 4096 65536 67108864
net.core.rmem_default = 33554432
net.core.wmem_default = 33554432
net.ipv4.udp_wmem_min = 16384
net.ipv4.udp_rmem_min = 16384
net.core.wmem_max = 134217728
net.core.rmem_max = 134217728
net.core.busy_poll = 50
net.core.busy_read = 50
```
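To apply these settings without a reboot, a minimal sketch (assuming the file above is already in place):

```bash
# Load settings from all sysctl configuration files, including /etc/sysctl.d/*.conf
sudo sysctl --system

# Spot-check one of the values afterward
sysctl net.core.rmem_max
```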
- **Congestion control for kernels 4.19+**: Enabling Bottleneck Bandwidth and Round-trip propagation time (BBR) congestion control can often result in better throughput. Add this configuration to `/etc/sysctl.d/99-azure-congestion-control.conf`:
```plaintext
net.ipv4.tcp_congestion_control = bbr
```
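To confirm that BBR is available and active after the settings are loaded, a quick sketch:

```bash
# List the congestion control algorithms the kernel currently offers
sysctl net.ipv4.tcp_available_congestion_control

# Confirm that bbr is the active algorithm
sysctl net.ipv4.tcp_congestion_control
```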
- **Extra TCP parameters that usually help with better consistency and throughput**: Add these configurations to `/etc/sysctl.d/99-azure-network-extras.conf`:
````plaintext
# For deployments where the Linux VM is BEHIND an Azure Load Balancer, timestamps MUST be set to 0
net.ipv4.tcp_timestamps = 1

# Reuse requires tcp_timestamps to be enabled. If tcp_timestamps are disabled because of load balancers, set reuse to 2.
net.ipv4.tcp_tw_reuse = 1

# Allowed local port range. This increases the number of locally available (source) ports.
net.ipv4.ip_local_port_range = 1024 65535

# Maximum number of packets taken from all interfaces in one polling cycle (NAPI poll).
# In one polling cycle, interfaces that are registered to polling are probed in a round-robin manner.
net.core.netdev_budget = 1000

# For high-performance environments, it's recommended to increase optmem_max from the default 20 KB to 65 KB.
# In some extreme cases, for environments that support 100G+ networking, you can increase it to 1048576.
net.core.optmem_max = 65535

# F-RTO is not recommended on wired networks.
net.ipv4.tcp_frto = 0

# Increase the number of incoming connections / the connection backlog.
net.core.somaxconn = 32768
net.core.netdev_max_backlog = 32768
net.core.dev_weight = 64
````
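These values are loaded the same way as the earlier files (for example, with `sudo sysctl --system`); a short check of a couple of them afterward:

```bash
# Verify the connection backlog and the local port range
sysctl net.core.somaxconn net.ipv4.ip_local_port_range
```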
- **Queue discipline (qdisc)**: Packet processing in Azure is improved by setting the default qdisc to `fq`. Add this configuration to `/etc/sysctl.d/99-azure-qdisc.conf`:
```plaintext
net.core.default_qdisc = fq
```
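To confirm that `fq` is actually in use once the setting is loaded, a quick check (the interface name `eth0` is an assumption):

```bash
# Show the queue discipline attached to the interface
tc qdisc show dev eth0
```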
- **Optimize NIC ring buffers for TX/RX**: Create a udev rule in `/etc/udev/rules.d/99-azure-ring-buffer.rules` to ensure the ring buffer settings are applied to network interfaces:
````plaintext
# Setup Accelerated Interface ring buffers (Mellanox / Mana)
# Setup Synthetic interface ring buffers (hv_netvsc)
````
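The rule bodies aren't shown above; a typical approach (an assumption, not taken from this article) is to run `ethtool` when the interface appears. As a check of the current and maximum ring sizes, and a hypothetical example of the kind of command such a rule runs (the interface name and the 1024-entry sizes are assumptions):

```bash
# Show the current and maximum RX/TX ring sizes for an interface
ethtool -g eth0

# Hypothetical example of the kind of command a ring-buffer rule would run
ethtool -G eth0 rx 1024 tx 1024
```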
- Create a udev rule in `/etc/udev/rules.d/99-azure-qdisc.rules` to ensure the qdisc is applied to network interfaces:
```plaintext
ACTION=="add|change", SUBSYSTEM=="net", KERNEL=="enP*", PROGRAM="/sbin/tc qdisc replace dev \$env{INTERFACE} root noqueue"
ACTION=="add|change", SUBSYSTEM=="net", KERNEL=="eth*", PROGRAM="/sbin/tc qdisc replace dev \$env{INTERFACE} root fq“
```
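Udev rules normally take effect when an interface is added or changed; to apply them to interfaces that are already up, a minimal sketch:

```bash
# Reload the rules and re-trigger events for network devices
sudo udevadm control --reload-rules
sudo udevadm trigger --subsystem-match=net
```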
- **Interrupt Request (IRQ) scheduling**: Depending on your workload, you may wish to restrict the irqbalance service from scheduling IRQs on certain nodes. Update `/etc/default/irqbalance` to specify which CPUs shouldn't have IRQs scheduled:
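For example, to keep IRQs off the first three CPUs (the `0-2` range is only an illustration and should match your workload):

```bash
IRQBALANCE_BANNED_CPULIST=0-2
```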
- **UDEV rules**: Add rules to optimize queue length and manage device flags efficiently. Create the following rule in `/etc/udev/rules.d/99-azure-txqueue-len.rules`:
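As a hypothetical sketch only (the `eth*` match and the `10000` queue length are assumptions, not values from this article), such a rule could look like:

```plaintext
# Hypothetical example: raise the transmit queue length on synthetic interfaces
ACTION=="add|change", SUBSYSTEM=="net", KERNEL=="eth*", ATTR{tx_queue_len}="10000"
```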
System administrators can implement these solutions by editing configuration files such as `/etc/sysctl.d/`, `/etc/modules-load.d/`, and `/etc/udev/rules.d/`. Review kernel driver updates and systemd configurations for potential regressions.

For more information on specific configurations and troubleshooting, see the Azure documentation on networking performance.
## Related content
- Deploy VMs close to each other for low latency with [proximity placement groups](/azure/virtual-machines/co-location).
- See the optimized result with [Bandwidth/Throughput testing](virtual-network-bandwidth-testing.md) for your scenario.
- Read about how [bandwidth is allocated to virtual machines](virtual-machine-network-throughput.md).
- Learn more with [Azure Virtual Network frequently asked questions](virtual-networks-faq.md).