articles/virtual-network/virtual-network-optimize-network-bandwidth.md

To enhance network performance, consider implementing the following optimizations:
**Network buffer settings**: Adjust kernel parameters to maximize read and write memory buffers. Add these configurations to `/etc/sysctl.d/99-azure-network-buffers.conf`:

```plaintext
net.ipv4.tcp_mem = 4096 87380 67108864
net.ipv4.udp_mem = 4096 87380 33554432
net.ipv4.tcp_rmem = 4096 87380 67108864
net.ipv4.tcp_wmem = 4096 65536 67108864
net.core.rmem_default = 33554432
net.core.wmem_default = 33554432
net.ipv4.udp_wmem_min = 16384
net.ipv4.udp_rmem_min = 16384
net.core.wmem_max = 134217728
net.core.rmem_max = 134217728
net.core.busy_poll = 50
net.core.busy_read = 50
```
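These values are loaded at boot; to apply them immediately, run `sudo sysctl --system`. A quick way to confirm they're active, assuming a standard Linux procfs layout, is to read them back directly:

```shell
# Each sysctl key maps to a path under /proc/sys (dots become slashes),
# so reading the files shows the live values:
cat /proc/sys/net/core/rmem_max
cat /proc/sys/net/core/wmem_max
cat /proc/sys/net/ipv4/tcp_rmem
```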
**Congestion control for kernels 4.19+**: Enabling BBR congestion control can often result in better throughput. Add this configuration to `/etc/sysctl.d/99-azure-congestion-control.conf`:

```plaintext
net.ipv4.tcp_congestion_control = bbr
```
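On 4.19+ kernels, BBR is typically built as the `tcp_bbr` module (loadable with `sudo modprobe tcp_bbr`). Before relying on the setting, it's worth checking that the kernel actually offers BBR:

```shell
# List the congestion control algorithms the kernel currently offers;
# "bbr" should appear here once the tcp_bbr module is loaded:
cat /proc/sys/net/ipv4/tcp_available_congestion_control

# And the algorithm currently in effect:
cat /proc/sys/net/ipv4/tcp_congestion_control
```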
**Extra TCP parameters**: The following parameters usually help with better consistency and throughput. Add these configurations to `/etc/sysctl.d/99-azure-network-extras.conf`:

```plaintext
# Default is 1. For deployments where the Linux VM is BEHIND an Azure Load Balancer, tcp_timestamps MUST be set to 0.
net.ipv4.tcp_timestamps = 1

# Reuse requires tcp_timestamps to be enabled. If tcp_timestamps is disabled because of a load balancer, set reuse to 2.
net.ipv4.tcp_tw_reuse = 1

# Allowed local port range. This increases the number of locally available (source) ports.
net.ipv4.ip_local_port_range = 1024 65535

# Maximum number of packets taken from all interfaces in one polling cycle (NAPI poll).
# In one polling cycle, interfaces registered for polling are probed in a round-robin manner.
net.core.netdev_budget = 1000

# For high-performance environments, it's recommended to increase optmem_max from the default 20 KB to 64 KB.
# In some extreme cases, for environments that support 100G+ networking, you can increase it to 1048576.
net.core.optmem_max = 65535

# F-RTO is not recommended on wired networks.
net.ipv4.tcp_frto = 0

# Increase the number of incoming connections / the connection backlog.
net.core.somaxconn = 32768
net.core.netdev_max_backlog = 32768
net.core.dev_weight = 64
```
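As with the buffer settings, these take effect after `sudo sysctl --system` or a reboot, and the live values can be read back from procfs:

```shell
# Read back a few of the values above to confirm the drop-in is active:
cat /proc/sys/net/ipv4/ip_local_port_range
cat /proc/sys/net/core/somaxconn
cat /proc/sys/net/core/netdev_budget
```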
**Queue discipline (qdisc)**: Packet processing in Azure is generally improved by setting the default qdisc to `fq`. Add this configuration to `/etc/sysctl.d/99-azure-qdisc.conf`:

```plaintext
net.core.default_qdisc = fq
```
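The default qdisc only applies to interfaces created after the setting is loaded; a quick check of the current system-wide default:

```shell
# The default qdisc assigned to newly created network interfaces:
cat /proc/sys/net/core/default_qdisc
```

Existing interfaces keep whatever qdisc they already have; `tc qdisc show dev eth0` (from iproute2) shows what a specific interface is using.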
**Optimize NIC ring buffers for TX/RX**: Create a udev rule in `/etc/udev/rules.d/99-azure-ring-buffer.rules` to ensure the settings are applied to network interfaces:

```plaintext
# Setup Accelerated Interface ring buffers (Mellanox / Mana)
```

Create a udev rule in `/etc/udev/rules.d/99-azure-qdisc.rules` to ensure the qdisc is applied to network interfaces:

```plaintext
ACTION=="add|change", SUBSYSTEM=="net", KERNEL=="enP*", PROGRAM="/sbin/tc qdisc replace dev \$env{INTERFACE} root noqueue"
ACTION=="add|change", SUBSYSTEM=="net", KERNEL=="eth*", PROGRAM="/sbin/tc qdisc replace dev \$env{INTERFACE} root fq"
```
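The body of the ring-buffer rule is not shown here. Purely as an illustration (the interface match and the 1024/1024 ring sizes are assumptions, not values from this article), a rule of this kind commonly resizes the RX/TX rings with `ethtool -G` when an accelerated interface appears:

```plaintext
# Illustrative sketch only: the match and the rx/tx sizes are assumed values.
ACTION=="add|change", SUBSYSTEM=="net", KERNEL=="enP*", RUN+="/usr/sbin/ethtool -G $env{INTERFACE} rx 1024 tx 1024"
```

After changing udev rules, reload them with `sudo udevadm control --reload` and reapply them to network devices with `sudo udevadm trigger --subsystem-match=net`.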
**IRQ scheduling**: Depending on your workload, you might want to restrict the irqbalance service from scheduling IRQs on certain nodes. Update `/etc/default/irqbalance` to specify which CPUs shouldn't have IRQs scheduled:

```bash
IRQBALANCE_BANNED_CPULIST=0-2
```
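irqbalance only reads `/etc/default/irqbalance` at startup, so restart it afterward with `sudo systemctl restart irqbalance`. As a small illustrative guard, the banned-list syntax (CPU numbers and ranges, comma separated) can be sanity-checked with a regex before restarting; the inline `echo` here stands in for reading the real file:

```shell
# Validate the CPULIST syntax (numbers and ranges, comma separated).
# On a real VM, replace the echo with: cat /etc/default/irqbalance
echo 'IRQBALANCE_BANNED_CPULIST=0-2' |
  grep -Eq '^IRQBALANCE_BANNED_CPULIST=[0-9]+(-[0-9]+)?(,[0-9]+(-[0-9]+)?)*$' &&
  echo 'syntax OK'
```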
**udev rules**: Add rules to optimize queue length and manage device flags efficiently. Create the following rule in `/etc/udev/rules.d/99-azure-txqueue-len.rules`: