articles/virtual-network/setup-dpdk.md
## Set up the virtual machine environment (once)
1. [Download the latest DPDK](https://core.dpdk.org/download). Version 18.02 or higher is required for Azure.
2. First build the default config with `make config T=x86_64-native-linuxapp-gcc`.
3. Enable Mellanox PMDs in the generated config with `sed -ri 's,(MLX._PMD=)n,\1y,' build/.config`.
4. Compile with `make`.
5. Install with `make install DESTDIR=<output folder>`.
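The `sed` substitution that enables the Mellanox PMDs can be sanity-checked on a sample config fragment. The lines below are illustrative, not the full DPDK `.config`:

```shell
# Sketch: apply the PMD-enabling substitution to a sample config
# fragment (illustrative lines only, not the real DPDK .config).
cfg=$(mktemp)
printf '%s\n' \
  'CONFIG_RTE_LIBRTE_MLX4_PMD=n' \
  'CONFIG_RTE_LIBRTE_MLX5_PMD=n' \
  'CONFIG_RTE_LIBRTE_ENA_PMD=n' > "$cfg"
# Same pattern as above: flip only the MLX4/MLX5 PMD lines from =n to =y
sed -ri 's,(MLX._PMD=)n,\1y,' "$cfg"
result=$(cat "$cfg")
echo "$result"
rm -f "$cfg"
```

The third line stays `=n`, confirming the pattern only touches the Mellanox PMD switches.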
## Configure runtime environment

Run the following commands once, after rebooting:
> [!NOTE]
> There is a way to modify the grub file so that huge pages are reserved on boot, by following the [instructions](http://dpdk.org/doc/guides/linux_gsg/sys_reqs.html#use-of-hugepages-in-the-linux-environment) for DPDK. The instructions are at the bottom of the page. When running in an Azure Linux virtual machine, modify files under /etc/default/grub.d instead, to reserve huge pages across reboots.
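As an illustration, such a drop-in file might look like the following. The file name and the huge page sizes and counts are example values, not a recommendation:

```shell
# Example grub drop-in (illustrative path and values):
# /etc/default/grub.d/hugepages.cfg
GRUB_CMDLINE_LINUX="$GRUB_CMDLINE_LINUX default_hugepagesz=1G hugepagesz=1G hugepages=2"
```

After adding such a file, regenerate the grub configuration (for example, `sudo update-grub` on Ubuntu) and reboot.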
2. MAC & IP addresses: Use `ifconfig -a` to view the MAC and IP address of the network interfaces. The *VF* network interface and *NETVSC* network interface have the same MAC address, but only the *NETVSC* network interface has an IP address. VF interfaces run as slave interfaces of NETVSC interfaces.
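As a sketch of how that pairing shows up, the snippet below groups interface names by MAC address; the interface names and MACs are fabricated, and on a real VM the list would come from `ifconfig -a` or `ip -o link`:

```shell
# Sketch: group interface names by MAC address to spot VF/NETVSC
# pairs. The names and MAC addresses below are fabricated.
sample='eth0 00:0d:3a:1b:2c:3d
enP1s1 00:0d:3a:1b:2c:3d
eth1 00:0d:3a:4e:5f:60'
pairs=$(printf '%s\n' "$sample" | awk '
  { names[$2] = names[$2] " " $1 }
  END { for (m in names) if (split(names[m], a, " ") > 1) print m ":" names[m] }')
echo "$pairs"
```

Only the MAC shared by two interfaces is printed, identifying the VF/NETVSC pair.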
3. PCI addresses

   * Find out which PCI address to use for *VF* with `ethtool -i <vf interface name>`.
   * Ensure that testpmd doesn't accidentally take over the VF PCI device for *eth0*, if *eth0* has accelerated networking enabled. If the DPDK application accidentally takes over the management network interface and causes loss of your SSH connection, use the serial console to kill the DPDK application, or to stop or start the virtual machine.
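`ethtool -i` reports the PCI address on its `bus-info:` line. A sketch of extracting it, using fabricated sample output in place of a real device:

```shell
# Sketch: pull the PCI address out of `ethtool -i` output.
# The sample text is fabricated; on a real VM, pipe the output of
# `ethtool -i <vf interface name>` instead.
sample='driver: mlx4_en
version: 4.0-0
bus-info: 0002:00:02.0'
pci=$(printf '%s\n' "$sample" | awk '/^bus-info:/ { print $2 }')
echo "$pci"
```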
4. Load *ib_uverbs* on each reboot with `modprobe -a ib_uverbs`. For SLES 15 only, also load *mlx4_ib* with `modprobe -a mlx4_ib`.
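To avoid rerunning `modprobe` manually after each reboot, the module can also be listed in a `modules-load.d` drop-in. This is standard systemd behavior, not specific to DPDK, and the file name is illustrative:

```
# /etc/modules-load.d/dpdk.conf  (illustrative file name)
# One module name per line; loaded automatically at boot by systemd.
ib_uverbs
```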
## Failsafe PMD

Use `sudo` before the *testpmd* command to run in root mode.
1. Run the following commands to start a single port testpmd application:
   ```bash
   testpmd -w <pci address from previous step> \
   --vdev="net_vdev_netvsc0,iface=eth1" \
   -- -i \
   --port-topology=chained
   ```
2. Run the following commands to start a dual port testpmd application:
   ```bash
   testpmd -w <pci address nic1> \
   -w <pci address nic2> \
   --vdev="net_vdev_netvsc0,iface=eth1" \
   --vdev="net_vdev_netvsc1,iface=eth2" \
   -- -i
   ```
If running with more than 2 NICs, the `--vdev` argument follows this pattern: `net_vdev_netvsc<id>,iface=<vf's pairing eth>`.
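A sketch of generating those `--vdev` arguments for several interfaces, numbering the `net_vdev_netvsc` ids from 0 (the interface names are placeholders):

```shell
# Sketch: build one --vdev argument per NETVSC interface.
# Interface names are placeholders for illustration.
args=""
id=0
for iface in eth1 eth2 eth3; do
  args="$args --vdev=net_vdev_netvsc${id},iface=${iface}"
  id=$((id + 1))
done
echo "$args"
```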
The following commands periodically print the packets per second statistics:

1. On the TX side, run the following command:
   ```bash
   testpmd \
   -l <core-list> \
   -n <num of mem channels> \
   -w <pci address of the device intended to use> \
   --vdev="net_vdev_netvsc<id>,iface=<the iface to attach to>" \
   -- --port-topology=chained \
   --nb-cores <number of cores to use for testpmd> \
   --forward-mode=txonly \
   --eth-peer=<port id>,<receiver peer MAC address> \
   --stats-period <display interval in seconds>
   ```
2. On the RX side, run the following command:
   ```bash
   testpmd \
   -l <core-list> \
   -n <num of mem channels> \
   -w <pci address of the device intended to use> \
   --vdev="net_vdev_netvsc<id>,iface=<the iface to attach to>" \
   -- --port-topology=chained \
   --nb-cores <number of cores to use for testpmd> \
   --forward-mode=rxonly \
   --eth-peer=<port id>,<sender peer MAC address> \
   --stats-period <display interval in seconds>
   ```
The following commands periodically print the packets per second statistics:

1. On the TX side, run the following command:
   ```bash
   testpmd \
   -l <core-list> \
   -n <num of mem channels> \
   -w <pci address of the device intended to use> \
   --vdev="net_vdev_netvsc<id>,iface=<the iface to attach to>" \
   -- --port-topology=chained \
   --nb-cores <number of cores to use for testpmd> \
   --forward-mode=txonly \
   --eth-peer=<port id>,<receiver peer MAC address> \
   --stats-period <display interval in seconds>
   ```
2. On the FWD side, run the following command:
   ```bash
   testpmd \
   -l <core-list> \
   -n <num of mem channels> \
   -w <pci address NIC1> \
   -w <pci address NIC2> \
   --vdev="net_vdev_netvsc<id>,iface=<the iface to attach to>" \
   --vdev="net_vdev_netvsc<2nd id>,iface=<2nd iface to attach to>" \
   -- --nb-cores <number of cores to use for testpmd> \
   --forward-mode=io \
   --eth-peer=<recv port id>,<sender peer MAC address> \
   --stats-period <display interval in seconds>
   ```

   You need as many `--vdev` arguments as the number of devices used by testpmd, in this case two.