Commit e075579

Merge pull request #234920 from asudbring/linux-fixes
Linux doc-athon fixes for accelerated networking article
2 parents 89bef8a + 5c5b3fb

File tree: 1 file changed, +18 -21 lines


articles/virtual-network/accelerated-networking-how-it-works.md

Lines changed: 18 additions & 21 deletions
@@ -1,15 +1,12 @@
---
title: How Accelerated Networking works in Linux and FreeBSD VMs
description: How Accelerated Networking Works in Linux and FreeBSD VMs
-services: virtual-network
author: asudbring
-manager: gedegrac
ms.service: virtual-network
-ms.devlang: na
ms.topic: how-to
ms.tgt_pltfrm: vm-linux
ms.workload: infrastructure
-ms.date: 02/15/2022
+ms.date: 04/18/2023
ms.author: allensu
---

@@ -19,9 +16,9 @@ When a VM is created in Azure, a synthetic network interface is created for each

If the VM is configured with Accelerated Networking, a second network interface is created for each virtual NIC that is configured. The second interface is an SR-IOV Virtual Function (VF) offered by the physical network NIC in the Azure host. The VF interface shows up in the Linux guest as a PCI device, and uses the Mellanox “mlx4” or “mlx5” driver in Linux, since Azure hosts use physical NICs from Mellanox. Most network packets go directly between the Linux guest and the physical NIC without traversing the virtual switch or any other software that runs on the host. Because of the direct access to the hardware, network latency is lower and less CPU time is used to process network packets when compared with the synthetic interface.

-Different Azure hosts use different models of Mellanox physical NIC, so Linux automatically determines whether to use the “mlx4” or “mlx5” driver. Placement of the VM on an Azure host is controlled by the Azure infrastructure. With no customer option to specify which physical NIC that a VM deployment uses, the VMs must include both drivers. If a VM is stopped/deallocated and then restarted, it might be redeployed on hardware with a different model of Mellanox physical NIC. Therefore, it might use the other Mellanox driver.
+Different Azure hosts use different models of Mellanox physical NIC. Linux automatically determines whether to use the “mlx4” or “mlx5” driver. The Azure infrastructure controls the placement of the VM on the Azure host. Because there's no customer option to specify which physical NIC a VM deployment uses, the VMs must include both drivers. If a VM is stopped/deallocated and then restarted, it might be redeployed on hardware with a different model of Mellanox physical NIC. Therefore, it might use the other Mellanox driver.

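Because either driver might be in use after a stop/deallocate and restart, it can be useful to check which one a particular VM has loaded. A minimal sketch, using the same Ubuntu-style prompt as the other examples in this article; expect `mlx4_core` or `mlx5_core` in the output, depending on the host hardware:

```output
U1804:~$ lsmod | grep mlx
U1804:~$ dmesg | grep -i mlx
```
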
-If a VM image doesn't include a driver for the Mellanox physical NIC, networking capabilities will continue to work at the slower speeds of the virtual NIC, even though the portal, Azure CLI, and Azure PowerShell will still show the Accelerated Networking feature as _enabled_.
+If a VM image doesn't include a driver for the Mellanox physical NIC, networking capabilities continue to work at the slower speeds of the virtual NIC. The portal, Azure CLI, and Azure PowerShell display the Accelerated Networking feature as _enabled_.

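For reference, one way to read that flag from the command line is to query the NIC resource with Azure CLI. This is a sketch: the NIC name `u1804895` and resource group `testrg` are the placeholder names used in the example near the end of this article, and the query assumes the `enableAcceleratedNetworking` property exposed by `az network nic show`:

```output
$ az network nic show --name u1804895 --resource-group testrg --query enableAcceleratedNetworking --output tsv
true
```
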
FreeBSD provides the same support for Accelerated Networking as Linux when running in Azure. The remainder of this article describes Linux and uses Linux examples, but the same functionality is available in FreeBSD.

@@ -30,11 +27,11 @@ FreeBSD provides the same support for Accelerated Networking as Linux when runni

## Bonding

-The synthetic network interface and VF interface are automatically paired and act as a single interface in most aspects that are seen by applications. The bonding is done by the netvsc driver. Depending on the Linux distro, udev rules and scripts might help in naming the VF interface and in network configuration. If the VM is configured with multiple virtual NICs, the Azure host provides a unique serial number for each one. It's used to allow Linux to do the proper pairing of synthetic and VF interfaces for each virtual NIC.
+The synthetic network interface and VF interface are automatically paired and act as a single interface in most aspects used by applications. The bonding is done by the netvsc driver. Depending on the Linux distro, udev rules and scripts might help in naming the VF interface and in network configuration. If the VM is configured with multiple virtual NICs, the Azure host provides a unique serial number for each one. It's used to allow Linux to do the proper pairing of synthetic and VF interfaces for each virtual NIC.

The synthetic and VF interfaces both have the same MAC address. Together they constitute a single NIC from the standpoint of other network entities that exchange packets with the virtual NIC in the VM. Other entities don't take any special action because of the existence of both the synthetic interface and the VF interface.

-Both interfaces are visible via the ifconfig or ip addr command in Linux. Here's example ifconfig output in Ubuntu 18.04:
+Both interfaces are visible via the `ifconfig` or `ip addr` command in Linux. Here's an example `ifconfig` output:

```output
U1804:~$ ifconfig
@@ -55,21 +52,21 @@ TX packets 9103233 bytes 2183731687 (2.1 GB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
```

-The synthetic interface always has a name of the form eth\<n\>. Depending on the Linux distro, the VF interface might have a name of the form eth\<n\>, or a name of a different form because of a udev rule that does renaming.
+The synthetic interface always has a name of the form `eth<n>`. Depending on the Linux distro, the VF interface might have a name of the form `eth<n>`, or a name of a different form because of a `udev` rule that does renaming.

Whether a particular interface is the synthetic interface or the VF interface can be determined with the shell command line that shows the device driver used by the interface:

```output
$ ethtool -i <interface name> | grep driver
```

-If the driver is hv_netvsc, it's the synthetic interface. The VF interface has a driver name that contains “mlx”. The VF interface is also identifiable because its flags field includes SLAVE.” This flag indicates that it's under the control of the synthetic interface that has the same MAC address. Finally, IP addresses are assigned only to the synthetic interface, and the output of ifconfig or ip addr shows this distinction as well.
+If the driver is `hv_netvsc`, it's the synthetic interface. The VF interface has a driver name that contains “mlx”. The VF interface is also identifiable because its flags field includes `SLAVE`. This flag indicates that it's under the control of the synthetic interface that has the same MAC address. Finally, IP addresses are assigned only to the synthetic interface, and the output of `ifconfig` or `ip addr` shows this distinction as well.

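As a concrete illustration of that check, running the same `ethtool -i` command against both interfaces of the example VM used in this article (synthetic `eth0` and VF `enP53091s1np0`) would be expected to report the two driver names. The output here is illustrative:

```output
U1804:~$ ethtool -i eth0 | grep driver
driver: hv_netvsc
U1804:~$ ethtool -i enP53091s1np0 | grep driver
driver: mlx5_core
```
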
## Application Usage

-Applications should interact only with the synthetic interface, just like in any other networking environment. Outgoing network packets are passed from the netvsc driver to the VF driver and then transmitted through the VF interface. Incoming packets are received and processed on the VF interface before being passed to the synthetic interface. Exceptions are incoming TCP SYN packets and broadcast/multicast packets that are processed by the synthetic interface only.
+Applications should interact only with the synthetic interface, just like in any other networking environment. Outgoing network packets are passed from the netvsc driver to the VF driver and then transmitted through the VF interface. Incoming packets are received and processed on the VF interface before being passed to the synthetic interface. Exceptions are incoming TCP SYN packets and broadcast/multicast packets processed by the synthetic interface only.

-You can verify that packets are flowing over the VF interface from the output of ethtool -S eth\<n\>. The output lines that contain “vf” show the traffic over the VF interface. For example:
+You can verify that packets are flowing over the VF interface from the output of `ethtool -S eth<n>`. The output lines that contain `vf` show the traffic over the VF interface. For example:

```output
U1804:~# ethtool -S eth0 | grep ' vf_'
@@ -82,7 +79,7 @@ U1804:~# ethtool -S eth0 | grep ' vf_'

If these counters are incrementing on successive execution of the “ethtool” command, then network traffic is flowing over the VF interface.

-The existence of the VF interface as a PCI device can be seen with the lspci command. For example, on the Generation 1 VM, you might see output similar to this (Generation 2 VMs don’t have the legacy PCI devices):
+The existence of the VF interface as a PCI device can be seen with the `lspci` command. For example, on a Generation 1 VM, you might see output similar to the following (Generation 2 VMs don’t have the legacy PCI devices):

```output
U1804:~# lspci
@@ -151,7 +148,7 @@ The corresponding synthetic interface that is using the netvsc driver has detect
[ 7.480651] mlx5_core cf63:00:02.0 enP53091s1np0: renamed from eth1
```

-The VF interface initially was named “eth1” by the Linux kernel. A udev rule renamed it to avoid confusion with the names given to the synthetic interfaces.
+The VF interface initially was named “eth1” by the Linux kernel. An udev rule renamed it to avoid confusion with the names given to the synthetic interfaces.

```output
[ 8.087962] mlx5_core cf63:00:02.0 enP53091s1np0: Link up
@@ -180,9 +177,9 @@ The final message indicates that the data path has switched to using the VF inte

## Azure Host Servicing

-When Azure host servicing is performed, all VF interfaces might be temporarily removed from the VM during the servicing. When the servicing is complete, the VF interfaces are added back to the VM and normal operation continues. While the VM is operating without the VF interfaces, network traffic continues to flow through the synthetic interface without any disruption to applications. In this context, Azure host servicing might include updating the various components of the Azure network infrastructure or a full upgrade of the Azure host hypervisor software. Such servicing events occur at time intervals depending on the operational needs of the Azure infrastructure. These events typically can be expected several times over the course of a year. If applications interact only with the synthetic interface, the automatic switching between the VF interface and the synthetic interface ensures that workloads aren't disturbed by such servicing events. Latencies and CPU load might be higher during the periods because of the use of the synthetic interface. The duration of such periods is typically on the order of 30 seconds, but sometimes might be as long as a few minutes.
+When Azure host servicing is performed, all VF interfaces might be temporarily removed from the VM during the servicing. When the servicing is complete, the VF interfaces are added back to the VM. Normal operation continues. While the VM is operating without the VF interfaces, network traffic continues to flow through the synthetic interface without any disruption to applications. In this context, Azure host servicing might include updating the various components of the Azure network infrastructure or a full upgrade of the Azure host hypervisor software. Such servicing events occur at time intervals depending on the operational needs of the Azure infrastructure. These events typically can be expected several times over the course of a year. The automatic switching between the VF interface and the synthetic interface ensures that servicing events don't disturb workloads if applications interact only with the synthetic interface. Latencies and CPU load might be higher during these periods because of the use of the synthetic interface. The duration of such periods is typically on the order of 30 seconds, but sometimes might be as long as a few minutes.

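One way to confirm after the fact that such a switch happened on a given VM is to search the kernel log for the data path messages quoted in the following examples. This is a sketch based on the message text shown below:

```output
U1804:~# dmesg | grep 'Data path switched'
[ 8160.911509] hv_netvsc 000d3af5-76bd-000d-3af5-76bd000d3af5 eth0: Data path switched from VF: enP53091s1np0
```
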
-The removal and re-add of the VF interface during a servicing event is visible in the “dmesg” output in the VM. Here's typical output:
+The removal and readd of the VF interface during a servicing event is visible in the “dmesg” output in the VM. Here's typical output:

```output
[ 8160.911509] hv_netvsc 000d3af5-76bd-000d-3af5-76bd000d3af5 eth0: Data path switched from VF: enP53091s1np0
@@ -204,7 +201,7 @@ The data path has been switched away from the VF interface, and the VF interface
[ 8225.667978] pci cf63:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
```

-When the VF interface is re-added after servicing is complete, a new PCI device with the specified GUID is detected. It's assigned the same PCI domain ID (0xcf63) as before. The handling of the re-add VF interface is like during the initial boot.
+When the VF interface is readded after servicing is complete, a new PCI device with the specified GUID is detected. It's assigned the same PCI domain ID (0xcf63) as before. The handling of the readded VF interface is the same as during the initial boot.

```output
[ 8225.679672] mlx5_core cf63:00:02.0: firmware version: 14.25.8362
@@ -225,17 +222,17 @@ The mlx5 driver initializes the VF interface, and the interface is now functiona

The data path has been switched back to the VF interface.

-## Disable/Enable Accelerated Networking in a non-running VM
+## Disable/Enable Accelerated Networking in a nonrunning VM

-Accelerated Networking can be toggled on a virtual NIC in a non-running VM with Azure CLI. For example:
+Accelerated Networking can be toggled on a virtual NIC in a nonrunning VM with Azure CLI. For example:

```output
$ az network nic update --name u1804895 --resource-group testrg --accelerated-networking false
```

-Disabling Accelerated Networking that is enabled in the guest VM produces a “dmesg” output. It's the same as when the VF interface is removed for Azure host servicing. Enabling Accelerated Networking produces the same “dmesg” output as when the VF interface is readded after Azure host servicing. These Azure CLI commands can be used to simulate Azure host servicing. With them you can verify that your applications do not incorrectly depend on direct interaction with the VF interface.
+Disabling Accelerated Networking that is enabled in the guest VM produces a “dmesg” output. It's the same as when the VF interface is removed for Azure host servicing. Enabling Accelerated Networking produces the same “dmesg” output as when the VF interface is readded after Azure host servicing. These Azure CLI commands can be used to simulate Azure host servicing. With them, you can verify that your applications don't incorrectly depend on direct interaction with the VF interface.

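As a sketch of that simulation, the following pair of commands disables and then re-enables Accelerated Networking on the same example NIC (`u1804895` in resource group `testrg`, placeholder names), spelling out the full `--accelerated-networking` parameter name. Watching `dmesg` in the guest between the two commands should show the data path switching away from and back to the VF interface:

```output
$ az network nic update --name u1804895 --resource-group testrg --accelerated-networking false
$ az network nic update --name u1804895 --resource-group testrg --accelerated-networking true
```
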
## Next steps
* Learn how to [create a VM with Accelerated Networking in PowerShell](../virtual-network/create-vm-accelerated-networking-powershell.md)
-* Learn how to [create a VM with Accerelated Networking using Azure CLI](../virtual-network/create-vm-accelerated-networking-cli.md)
+* Learn how to [create a VM with Accelerated Networking using Azure CLI](../virtual-network/create-vm-accelerated-networking-cli.md)
* Improve latency with an [Azure proximity placement group](../virtual-machines/co-location.md)
