DPVS Tutorial
=============

* [UDP Option of Address (UOA)](#uoa)
* [Launch DPVS in Virtual Machine (Ubuntu)](#Ubuntu16.04)
* [Traffic Control (TC)](#tc)
* [Multiple Instances](#multi-instance)
* [Debug DPVS](#debug)
  - [Debug with Log](#debug-with-log)
  - [Packet Capture and Tcpdump](#packet-capture)

Please refer to doc [tc.md](tc.md).

<a id='multi-instance'/>

# Multiple Instances

Generally, DPVS is a network process running on a physical server, which is usually equipped with dozens of CPUs and ample memory. DPVS is CPU and memory efficient, so the CPU/memory resources of a typical physical server are usually far from fully used. Thus we may want to run multiple independent DPVS instances on one server to make the most of it. A DPVS instance may use 1~4 NIC ports, depending on whether the ports are bonded and whether the network topology is two-arm or one-arm. Extra NICs are needed to run multiple DPVS instances, because each NIC port must be managed by exactly one DPVS instance. Now let's look into the details of running multiple DPVS instances.

#### CPU Isolation

The CPUs used by DPVS run in busy loops all the time. If a CPU were assigned to two DPVS instances simultaneously, both instances would suffer dramatic processing delays. So different instances must run on different CPUs, which is achieved by the procedures below.
- Start DPVS with EAL options `-l CORELIST`, `--lcores COREMAP`, or `-c COREMASK` to specify the CPUs on which the instance is to run.

It's suggested to select the CPUs and NIC ports on the same NUMA node on a NUMA-aware platform. Performance degrades if the NIC ports and CPUs of a DPVS instance are on different NUMA nodes.
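
Since no CPU may appear in two instances' core lists, it can be worth sanity-checking the lists before launch. Below is a minimal sketch in shell (the `expand_cores` and `cores_disjoint` helpers are hypothetical, not part of DPVS) that understands the `-l CORELIST` syntax such as `0-8` or `0,2,4-6`:

```sh
#!/bin/bash
# Expand an EAL "-l" style core list (e.g. "0-8" or "0,2,4-6")
# into one core id per line.
expand_cores() {
    local part parts
    IFS=',' read -ra parts <<< "$1"
    for part in "${parts[@]}"; do
        if [[ "$part" == *-* ]]; then
            seq "${part%-*}" "${part#*-}"
        else
            echo "$part"
        fi
    done
}

# Succeed (exit 0) only if the two core lists share no CPU.
cores_disjoint() {
    local common
    common=$(comm -12 <(expand_cores "$1" | sort -u) \
                      <(expand_cores "$2" | sort -u))
    [ -z "$common" ]
}

cores_disjoint "0-8" "12-20" && echo "core lists are disjoint"
cores_disjoint "0-8" "8-16"  || echo "conflict: some core is assigned twice"
```

On a NUMA-aware box you can additionally check a port's node via `/sys/bus/pci/devices/<pci-addr>/numa_node` before picking the core list for an instance.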

#### Memory Isolation

As is known, DPVS takes advantage of hugepage memory. The hugepage memory of different DPVS instances can be isolated by using different memory mapping files. The DPDK EAL option `--file-prefix` specifies the name prefix of the memory mapping files. Thus multiple DPVS instances can run simultaneously, each specifying a unique hugepage file prefix with this EAL option.
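
As an illustration (instance names, config paths, PCI addresses and core lists below are all hypothetical), two instances need only differ in their `--file-prefix`, allowed ports and core list; with the default prefix `rte`, DPDK names the hugepage backing files `rtemap_N`, so distinct prefixes keep the files apart:

```sh
#!/bin/bash
# Sketch: compose the EAL part of each instance's command line,
# giving every instance its own hugepage file prefix.
eal_args() {
    local prefix="$1" ports="$2" cores="$3"
    echo "--file-prefix=$prefix $ports -l $cores"
}

echo "./bin/dpvs -c /etc/dpvs1.conf -- $(eal_args dpvs1 '-a 0000:4b:00.0' 0-8)"
echo "./bin/dpvs -c /etc/dpvs2.conf -- $(eal_args dpvs2 '-a 0000:ca:00.0' 12-20)"
```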

#### Process Isolation

* DPVS Process Isolation
Every DPVS instance must have a unique PID file, config file, and IPC socket file, which are specified by the following DPVS options, respectively.

```
-p, --pid-file FILE
-c, --conf FILE
-x, --ipc-file FILE
```

For example,

```sh
./bin/dpvs -c /etc/dpvs1.conf -p /var/run/dpvs1.pid -x /var/run/dpvs1.ipc -- --file-prefix=dpvs1 -a 0000:4b:00.0 -a 0000:4b:00.1 -l 0-8 --main-lcore 0
```

* Keepalived Process Isolation
One DPVS instance corresponds to one keepalived instance, and vice versa. Similarly, different keepalived processes must have unique config files and PID files. Note that, depending on its configuration, keepalived for DPVS may consist of three daemon processes, i.e., the main process, the health-check subprocess, and the vrrp subprocess. The config files and PID files for different keepalived instances can be specified by the following options, respectively.
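
As a sketch only: assuming the stock keepalived options `-f` (config file) and `-p`/`-r`/`-c` (PID files of the main, vrrp and checker processes), which the DPVS-bundled keepalived may or may not expose identically, a per-instance invocation could be derived from the instance name:

```sh
#!/bin/bash
# Print (rather than run) a per-instance keepalived command line.
# The -f/-p/-r/-c options are assumed from stock keepalived; verify
# them against `keepalived --help` of your build before use.
keepalived_cmd() {
    local inst="$1"
    echo keepalived \
        -f "/etc/keepalived/${inst}.conf" \
        -p "/var/run/${inst}-keepalived.pid" \
        -r "/var/run/${inst}-vrrp.pid" \
        -c "/var/run/${inst}-checkers.pid"
}

keepalived_cmd dpvs1
keepalived_cmd dpvs2
```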
#### Talk to different DPVS instances with dpip/ipvsadm
`dpip` and `ipvsadm` are the utility tools used to configure DPVS. By default, they work well on a server running a single DPVS instance without any extra settings. On a server with multiple DPVS instances, however, the environment variable `DPVS_IPC_FILE` must be set to the IPC socket file of the DPVS instance that ipvsadm/dpip should talk to. Refer to the previous part "DPVS Process Isolation" for how to specify different IPC socket files for multiple DPVS instances. For example,
```sh
DPVS_IPC_FILE=/var/run/dpvs1.ipc ipvsadm -ln

# or equivalently,
export DPVS_IPC_FILE=/var/run/dpvs1.ipc
ipvsadm -ln
```
#### NIC Ports, KNI and Routes
The multiple DPVS instances running on a server are independent; that is, DPVS adopts the deployment model [Running Multiple Independent DPDK Applications](https://doc.dpdk.org/guides/prog_guide/multi_proc_support.html#running-multiple-independent-dpdk-applications), which requires that the instances do not share any NIC ports. We can use the EAL options `-a, --allow` or `-b, --block` to allow or block NIC ports for a DPVS instance. However, the Linux KNI kernel module supports only one DPVS instance in a given network namespace (refer to [kernel/linux/kni/kni_misc.c](https://github.com/DPDK/dpdk/tree/main/kernel/linux/kni)). Basically, DPVS provides two solutions to the problem.
* Solution 1: Disable KNI on all other DPVS instances except the first one. A global config item `kni` is provided in DPVS for this purpose.
```
# dpvs.conf
global_defs {
    ...
    <init> kni on <default on, on|off>
    ...
}
```
* Solution 2: Run DPVS instances in different network namespaces. It also resolves the route conflicts for multiple KNI network ports of different DPVS instances. A typical procedure to run a DPVS instance in a network namespace is shown below.
Firstly, create a new network namespace, "dpvsns" for example.
```sh
/usr/sbin/ip netns add dpvsns
```

Secondly, move the NIC ports for this DPVS instance to the newly created network namespace.
```sh
/usr/sbin/ip link set eth1 netns dpvsns
/usr/sbin/ip link set eth2 netns dpvsns
/usr/sbin/ip link set eth3 netns dpvsns
```

Lastly, start DPVS and all its related processes (such as keepalived and routing daemons) in the network namespace.
```sh
/usr/sbin/ip netns exec dpvsns ./bin/dpvs -c /etc/dpvs2.conf -p /var/run/dpvs2.pid -x /var/run/dpvs2.ipc -- --file-prefix=dpvs2 -a 0000:cb:00.1 -a 0000:ca:00.0 -a 0000:ca:00.1 -l 12-20 --main-lcore 12
```

For performance improvement, we can enable multiple kthread mode when multiple DPVS instances are deployed on a server. In this mode, each KNI port is processed by a dedicated kthread rather than a shared kthread.
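
With DPDK's `rte_kni.ko`, the kthread mode is selected when the module is loaded, via its `kthread_mode` parameter (`single` by default). A sketch follows; the module path is illustrative and depends on your DPDK build, and the exact knob may differ across DPDK versions:

```sh
#!/bin/bash
# Compose (and print, rather than execute) the module load command;
# loading must be done as root before DPVS starts. KNI_KO is an
# illustrative path to the rte_kni.ko from your DPDK build.
KNI_KO=${KNI_KO:-/lib/modules/$(uname -r)/extra/dpdk/rte_kni.ko}
cmd="insmod $KNI_KO kthread_mode=multiple"
echo "$cmd"
```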

The `dpdk-pdump` tool runs as a DPDK secondary process and is capable of capturing packets on DPDK ports. DPVS works as the primary process for dpdk-pdump, and should enable the packet capture framework by setting `global_defs/pdump` to `on` in `/etc/dpvs.conf` when DPVS starts up.

Refer to [dpdk-pdump doc](https://doc.dpdk.org/guides/tools/pdump.html) for its usage. DPVS extends dpdk-pdump with a [DPDK patch](../patch/dpdk-stable-18.11.2/0005-enable-pdump-and-change-dpdk-pdump-tool-for-dpvs.patch) to add some packet filtering features. Run `dpdk-pdump -- --help` to find all supported pdump params.