articles/virtual-network/setup-dpdk.md
---
title: DPDK in an Azure Linux VM
titleSuffix: Azure Virtual Network
description: Learn the benefits of the Data Plane Development Kit (DPDK) and how to set up the DPDK on a Linux virtual machine.
services: virtual-network
author: asudbring
ms.service: virtual-network
ms.topic: how-to
ms.date: 04/24/2023
ms.author: allensu
---
# Set up DPDK in a Linux virtual machine

Data Plane Development Kit (DPDK) on Azure offers a faster user-space packet processing framework for performance-intensive applications. This framework bypasses the virtual machine's kernel network stack.

In typical packet processing that uses the kernel network stack, the process is interrupt-driven. When the network interface receives incoming packets, there's a kernel interrupt to process the packet and a context switch from the kernel space to the user space. DPDK eliminates context switching and the interrupt-driven method in favor of a user-space implementation that uses poll mode drivers for fast packet processing.

DPDK consists of sets of user-space libraries that provide access to lower-level resources. These resources can include hardware, logical cores, memory management, and poll mode drivers for network interface cards.
DPDK can run on Azure virtual machines that support multiple operating system distributions.

**Higher packets per second (PPS)**: Bypassing the kernel and taking control of packets in the user space reduces the cycle count by eliminating context switches. It also improves the rate of packets that are processed per second in Azure Linux virtual machines.

## Supported operating systems minimum versions

The following distributions from the Azure Marketplace are supported:
All Azure regions support DPDK.

## Prerequisites

Accelerated networking must be enabled on a Linux virtual machine. The virtual machine should have at least two network interfaces, with one interface for management. Enabling accelerated networking on the management interface isn't recommended. Learn how to [create a Linux virtual machine with accelerated networking enabled](create-vm-accelerated-networking-cli.md).

On virtual machines that use InfiniBand, ensure that the appropriate `mlx4_ib` or `mlx5_ib` drivers are loaded. For more information, see [Enable InfiniBand](../virtual-machines/workloads/hpc/enable-infiniband.md).
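You can verify the driver state with a quick read-only check. This is a minimal sketch, assuming only that `lsmod` is available (it is on all the distributions listed in this article):

```shell
# Check whether an mlx4_ib or mlx5_ib InfiniBand module is currently loaded.
if lsmod | grep -Eq 'mlx(4|5)_ib'; then
    echo "InfiniBand driver loaded"
else
    echo "mlx4_ib/mlx5_ib not loaded"
fi
```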
1. [Download the latest DPDK](https://core.dpdk.org/download). Version 19.11 LTS or newer is required for Azure.

2. Build the default configuration with `meson builddir`.

3. Compile with `ninja -C builddir`.

4. Install with `DESTDIR=<output folder> ninja -C builddir install`.
## Configure the runtime environment
After restarting, run the following commands once:

1. Hugepages

   * Configure hugepages by running the following command, once for each NUMA node:

     ```bash
     echo 1024 | sudo tee /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
     ```

   * Create a directory for mounting with `mkdir /mnt/huge`.

   * Mount hugepages with `mount -t hugetlbfs nodev /mnt/huge`.

   * Check that hugepages are reserved with `grep Huge /proc/meminfo`.

   > [!NOTE]
   > There is a way to modify the grub file so that hugepages are reserved on boot by following the [instructions](https://dpdk.org/doc/guides/linux_gsg/sys_reqs.html#use-of-hugepages-in-the-linux-environment) for the DPDK. The instructions are at the bottom of the page. When you're using an Azure Linux virtual machine, modify files under **/etc/config/grub.d** instead, to reserve hugepages across reboots.
2. MAC & IP addresses: Use `ifconfig -a` to view the MAC and IP address of the network interfaces. The *VF* network interface and *NETVSC* network interface have the same MAC address, but only the *NETVSC* network interface has an IP address. *VF* interfaces run as subordinate interfaces of *NETVSC* interfaces.
3. PCI addresses

   * Use `ethtool -i <vf interface name>` to find out which PCI address to use for the *VF*.

   * If *eth0* has accelerated networking enabled, make sure that testpmd doesn't accidentally take over the *VF* PCI device for *eth0*. If the DPDK application accidentally takes over the management network interface and causes you to lose your SSH connection, use the serial console to stop the DPDK application. You can also use the serial console to stop or start the virtual machine.
4. Load *ib_uverbs* on each reboot with `modprobe -a ib_uverbs`. For SLES 15 only, also load *mlx4_ib* with `modprobe -a mlx4_ib`.
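If you'd rather not run `modprobe` after every reboot, one common alternative (an assumption of this sketch, not a method mandated by this article) is a `modules-load.d` drop-in that systemd reads at boot. The file name is a hypothetical example:

```text
# /etc/modules-load.d/dpdk.conf (hypothetical file name)
ib_uverbs
# On SLES 15 only, also list:
mlx4_ib
```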
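The checks in steps 1-3 can also be combined into one read-only script. This is a sketch that reads standard `procfs`/`sysfs` paths instead of calling `ifconfig` and `ethtool`; the VF name `eth1` is a hypothetical placeholder.

```shell
# 1. Hugepages: report total reserved memory from /proc/meminfo.
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
size_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
echo "hugepages reserved: $(( total * size_kb / 1024 )) MiB"

# 2. MAC addresses: a VF/NETVSC pair appears as two interfaces with one MAC.
for dev in /sys/class/net/*; do
    printf '%-12s %s\n' "$(basename "$dev")" "$(cat "$dev/address")"
done

# 3. PCI address of a VF: the device symlink resolves to the same PCI address
#    as the bus-info line that `ethtool -i` prints.
vf=eth1   # hypothetical VF interface name; substitute your own
if [ -e "/sys/class/net/$vf/device" ]; then
    basename "$(readlink -f "/sys/class/net/$vf/device")"
fi
```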
To run testpmd in root mode, use `sudo` before the *testpmd* command.

If you're running testpmd with more than two NICs, the `--vdev` argument follows this pattern: `net_vdev_netvsc<id>,iface=<vf's pairing eth>`.

3. After it's started, run `show port info all` to check port information. You should see one or two DPDK ports that are *net_failsafe* (not *net_mlx4*).

4. Use `start <port> /stop <port>` to start traffic.

The previous commands start *testpmd* in interactive mode, which is recommended for trying out testpmd commands.
When you're running the previous commands on a virtual machine, change *IP_SRC_ADDR* and *IP_DST_ADDR* in `app/test-pmd/txonly.c` to match the actual IP address of the virtual machines before you compile. Otherwise, the packets are dropped before reaching the receiver.

### Advanced: Single sender/single forwarder

The following commands periodically print the packets per second statistics:

1. On the TX side, run the following command:
   --stats-period <display interval in seconds>
   ```

When you're running the previous commands on a virtual machine, change *IP_SRC_ADDR* and *IP_DST_ADDR* in `app/test-pmd/txonly.c` to match the actual IP address of the virtual machines before you compile. Otherwise, the packets are dropped before reaching the forwarder. You can't have a third machine receive forwarded traffic, because the *testpmd* forwarder doesn't modify the layer-3 addresses, unless you make some code changes.