
Commit 075c59c

Merge pull request #48558 from laxmanrb/patch-28

Update setup-dpdk.md

2 parents 5e60075 + 2fbf531

File tree

1 file changed (+29, -30 lines changed)


articles/virtual-network/setup-dpdk.md

Lines changed: 29 additions & 30 deletions
@@ -104,11 +104,10 @@ zypper \
 ## Setup virtual machine environment (once)
 
 1. [Download the latest DPDK](https://core.dpdk.org/download). Version 18.02 or higher is required for Azure.
-2. Install the *libnuma-dev* package with `sudo apt-get install libnuma-dev`.
-3. First build the default config with `make config T=x86_64-native-linuxapp-gcc`.
-4. Enable Mellanox PMDs in the generated config with `sed -ri 's,(MLX._PMD=)n,\1y,' build/.config`.
-5. Compile with `make`.
-6. Install with `make install DESTDIR=<output folder>`.
+2. First build the default config with `make config T=x86_64-native-linuxapp-gcc`.
+3. Enable Mellanox PMDs in the generated config with `sed -ri 's,(MLX._PMD=)n,\1y,' build/.config`.
+4. Compile with `make`.
+5. Install with `make install DESTDIR=<output folder>`.
 
 # Configure runtime environment
 
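For reference, the numbered steps as they read after this change map onto a build sequence along the following lines. This is only a sketch: the source directory `~/dpdk` and the install destination `~/dpdk-install` are hypothetical placeholders, not paths taken from the article.

```bash
# Sketch of the revised build steps; directory names are illustrative only.
cd ~/dpdk                                       # hypothetical: wherever the DPDK 18.02+ sources were unpacked
make config T=x86_64-native-linuxapp-gcc        # generate the default x86_64 Linux/gcc configuration
sed -ri 's,(MLX._PMD=)n,\1y,' build/.config     # enable the Mellanox PMDs in the generated config
make                                            # compile DPDK
make install DESTDIR=~/dpdk-install             # install into a hypothetical output folder
```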
@@ -130,14 +129,14 @@ Run the following commands once, after rebooting:
 > [!NOTE]
 > There is a way to modify the grub file so that huge pages are reserved on boot by following the [instructions](http://dpdk.org/doc/guides/linux_gsg/sys_reqs.html#use-of-hugepages-in-the-linux-environment) for DPDK. The instruction is at the bottom of the page. When running in an Azure Linux virtual machine, modify files under /etc/config/grub.d instead, to reserve hugepages across reboots.
 
-2. MAC & IP addresses: Use `ifconfig –a` to view the MAC and IP address of the network interfaces. The *VF* network interface and *NETVSC* network interface have the same MAC address, but only the *NETVSC* network interface has an IP address.
+2. MAC & IP addresses: Use `ifconfig –a` to view the MAC and IP address of the network interfaces. The *VF* network interface and *NETVSC* network interface have the same MAC address, but only the *NETVSC* network interface has an IP address. VF interfaces are running as slave interfaces of NETVSC interfaces.
 
 3. PCI addresses
 
 * Find out which PCI address to use for *VF* with `ethtool -i <vf interface name>`.
 * Ensure that testpmd doesn’t accidentally take over the VF pci device for *eth0*, if *eth0* has accelerated networking enabled. If DPDK application has accidentally taken over the management network interface and causes loss of your SSH connection, use the serial console to kill DPDK application, or to stop or start the virtual machine.
 
-4. Load *ibuverbs* on each reboot with `modprobe -a ib_uverbs`. For SLES 15 only, load *mlx4_ib* with 'modprobe -a mlx4_ib'.
+4. Load *ibuverbs* on each reboot with `modprobe -a ib_uverbs`. For SLES 15 only, also load *mlx4_ib* with `modprobe -a mlx4_ib`.
 
 ## Failsafe PMD
 
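A minimal sketch of steps 3 and 4 from the hunk above, assuming the VF interface shows up as `eth1` (the interface name is a placeholder):

```bash
# Look up the PCI address of the VF paired with the NETVSC interface (eth1 is assumed here).
ethtool -i eth1 | grep bus-info     # bus-info is the PCI address to pass to testpmd via -w

# Load the userspace verbs module after each reboot; SLES 15 additionally needs mlx4_ib.
sudo modprobe -a ib_uverbs
sudo modprobe -a mlx4_ib            # SLES 15 only
```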
@@ -149,23 +148,23 @@ Use `sudo` before the *testpmd* command to run in root mode.
 
 ### Basic: Sanity check, failsafe adapter initialization
 
-1. Run the following commands to start a single port application:
+1. Run the following commands to start a single port testpmd application:
 
 ```bash
 testpmd -w <pci address from previous step> \
   --vdev="net_vdev_netvsc0,iface=eth1" \
-  -i \
+  -- -i \
   --port-topology=chained
 ```
 
-2. Run the following commands to start a dual port application:
+2. Run the following commands to start a dual port testpmd application:
 
 ```bash
 testpmd -w <pci address nic1> \
   -w <pci address nic2> \
   --vdev="net_vdev_netvsc0,iface=eth1" \
   --vdev="net_vdev_netvsc1,iface=eth2" \
-  -i
+  -- -i
 ```
 
 If running with more than 2 NICs, the `--vdev` argument follows this pattern: `net_vdev_netvsc<id>,iface=<vf’s pairing eth>`.
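Filled in with concrete values, the corrected single-port invocation would look roughly as follows. The PCI address `0002:00:02.0` and the interface name `eth1` are placeholders; the `--` introduced by this change is the separator between the EAL options and testpmd's own options.

```bash
# Hypothetical values: substitute the PCI address and interface found in the previous steps.
sudo testpmd -w 0002:00:02.0 \
  --vdev="net_vdev_netvsc0,iface=eth1" \
  -- -i \
  --port-topology=chained
```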
@@ -182,30 +181,30 @@ The following commands periodically print the packets per second statistics:
 1. On the TX side, run the following command:
 
 ```bash
-Testpmd \
-  l <core-mask> \
+testpmd \
+  -l <core-list> \
   -n <num of mem channels> \
   -w <pci address of the device intended to use> \
-  --vdev=net_vdev_netvsc<id>,iface=<the iface to attach to> \
-  --port-topology=chained \
+  --vdev="net_vdev_netvsc<id>,iface=<the iface to attach to>" \
+  -- --port-topology=chained \
   --nb-cores <number of cores to use for test pmd> \
   --forward-mode=txonly \
-  eth-peer=<port id>,<peer MAC address> \
+  --eth-peer=<port id>,<receiver peer MAC address> \
   --stats-period <display interval in seconds>
 ```
 
 2. On the RX side, run the following command:
 
 ```bash
-Testpmd \
-  l <core-mask> \
+testpmd \
+  -l <core-list> \
   -n <num of mem channels> \
   -w <pci address of the device intended to use> \
   --vdev="net_vdev_netvsc<id>,iface=<the iface to attach to>" \
-  --port-topology=chained \
+  -- --port-topology=chained \
   --nb-cores <number of cores to use for test pmd> \
   --forward-mode=rxonly \
-  eth-peer=<port id>,<peer MAC address> \
+  --eth-peer=<port id>,<sender peer MAC address> \
   --stats-period <display interval in seconds>
 ```
 
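As a worked example of the corrected TX/RX templates, a sender/receiver pair might be launched as shown below. The core list, memory channel count, PCI address, interface name, MAC addresses, and stats interval are all invented placeholders.

```bash
# TX side: transmit-only traffic toward the receiver's VF MAC (all values are placeholders).
sudo testpmd -l 0-3 -n 4 -w 0002:00:02.0 \
  --vdev="net_vdev_netvsc0,iface=eth1" \
  -- --port-topology=chained \
  --nb-cores 2 \
  --forward-mode=txonly \
  --eth-peer=0,00:11:22:33:44:55 \
  --stats-period 5

# RX side: receive-only, run on the peer VM with its own placeholder values.
sudo testpmd -l 0-3 -n 4 -w 0002:00:02.0 \
  --vdev="net_vdev_netvsc0,iface=eth1" \
  -- --port-topology=chained \
  --nb-cores 2 \
  --forward-mode=rxonly \
  --eth-peer=0,00:11:22:33:44:66 \
  --stats-period 5
```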
@@ -217,31 +216,31 @@ The following commands periodically print the packets per second statistics:
 1. On the TX side, run the following command:
 
 ```bash
-Testpmd \
-  l <core-mask> \
+testpmd \
+  -l <core-list> \
   -n <num of mem channels> \
   -w <pci address of the device intended to use> \
   --vdev="net_vdev_netvsc<id>,iface=<the iface to attach to>" \
-  --port-topology=chained \
+  -- --port-topology=chained \
   --nb-cores <number of cores to use for test pmd> \
   --forward-mode=txonly \
-  eth-peer=<port id>,<peer MAC address> \
+  --eth-peer=<port id>,<receiver peer MAC address> \
   --stats-period <display interval in seconds>
 ```
 
 2. On the FWD side, run the following command:
 
 ```bash
-Testpmd \
-  l <core-mask> \
+testpmd \
+  -l <core-list> \
   -n <num of mem channels> \
   -w <pci address NIC1> \
   -w <pci address NIC2> \
-  --vdev=net_vdev_netvsc<id>,iface=<the iface to attach to> \
-  --vdev=net_vdev_netvsc<2nd id>,iface=<2nd iface to attach to> (you need as many --vdev arguments as the number of devices used by testpmd, in this case) \
-  --nb-cores <number of cores to use for test pmd> \
+  --vdev="net_vdev_netvsc<id>,iface=<the iface to attach to>" \
+  --vdev="net_vdev_netvsc<2nd id>,iface=<2nd iface to attach to>" (you need as many --vdev arguments as the number of devices used by testpmd, in this case) \
+  -- --nb-cores <number of cores to use for test pmd> \
   --forward-mode=io \
-  eth-peer=<recv port id>,<peer MAC address> \
+  --eth-peer=<recv port id>,<sender peer MAC address> \
   --stats-period <display interval in seconds>
 ```
 
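To make the forwarder template concrete, the FWD side with two NICs might be started as in the sketch below; every specific value (core list, PCI addresses, interface names, port id, and the peer MAC address) is a made-up placeholder, and one `--vdev` argument is passed per NIC as the parenthetical note above says.

```bash
# FWD side: io-forward between two accelerated NICs (placeholder values throughout).
sudo testpmd -l 0-3 -n 4 \
  -w 0002:00:02.0 \
  -w 0003:00:02.0 \
  --vdev="net_vdev_netvsc0,iface=eth1" \
  --vdev="net_vdev_netvsc1,iface=eth2" \
  -- --nb-cores 2 \
  --forward-mode=io \
  --eth-peer=1,00:11:22:33:44:55 \
  --stats-period 5
```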