VPP/Configure VPP TAP Interfaces For Container Routing
This example shows the configuration of VPP as an IPv4 router between two docker containers and the host.
As docker simply consumes Linux network namespaces, the example can easily be translated to non-docker NetNS use cases. The setup demonstrates VPP's ability to dynamically create Linux tap interfaces and can be considered an alternative to the existing 'veth' (AF_PACKET) example here: VPP/Configure_VPP_As_A_Router_Between_Namespaces
- 1 Setup
- 2 Running the Example
- 3 Explore the Environment
- 4 Run other commands in the containers
- 5 Non-Vagrant environments
- 6 Adding a new tap interface
Setup
The following script, when run on the FD.io Vagrant VM, will install the relevant tools and configure the environment described in the diagram below:
[Figure: Example topology using VPP as a router between two containers.]
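For reference, the addressing used by the script below is: container1 reaches VPP via tapcontainer1 (192.168.1.2) <-> tap-0 (192.168.1.1), the host via taphost (192.168.2.2) <-> tap-1 (192.168.2.1), and container2 via tapcontainer2 (192.168.3.2) <-> tap-2 (192.168.3.1); VPP routes between the three /24 subnets.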
If you do not have a VPP Vagrant/development VM, please see the instructions here: VPP/Build,_install,_and_test_images
#!/bin/bash
exposedockernetns () {
if [ "$1" == "" ]; then
echo "usage: $0 <container_name>"
echo "Exposes the netns of a docker container to the host"
exit 1
fi
pid=`docker inspect -f '{{.State.Pid}}' $1`
ln -s /proc/$pid/ns/net /var/run/netns/$1
echo "netns of ${1} exposed as /var/run/netns/${1}"
echo "try: ip netns exec ${1} ip addr list"
return 0
}
dockerrmf () {
#Cleanup all containers on the host (dead or alive).
docker kill `docker ps --no-trunc -aq` ; docker rm `docker ps --no-trunc -aq`
}
#Vagrant build box initial setup
if [ -a /etc/apt/sources.list.d/docker.list ]
then
echo "Docker APT Sources already configured. Not setting up Docker on this Vagrant Box"
echo "Cleaning up containers from previous run..."
dockerrmf
else
mkdir /var/run/netns
sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
echo "deb https://apt.dockerproject.org/repo ubuntu-trusty main" | sudo tee /etc/apt/sources.list.d/docker.list
sudo apt-get update
sudo apt-get install -y docker-engine tmux uuid
fi
#Cleanup any old VPP Engine config
rm -f /home/vagrant/vpe-host.conf
#Stop any existing vpp instances started via init
stop vpp
#Create VPE Engine config
cat >/home/vagrant/vpe-host.conf <<EOL
tap connect tapcontainer1
tap connect taphost
tap connect tapcontainer2
set int ip addr tap-0 192.168.1.1/24
set int ip addr tap-1 192.168.2.1/24
set int ip addr tap-2 192.168.3.1/24
set int state tap-0 up
set int state tap-1 up
set int state tap-2 up
EOL
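#NOTE: 'tap connect' is the legacy TAP CLI used by the VPP/VPE build this example
#targets. Newer VPP releases replace it with the tapv2 command 'create tap'
#(e.g. 'create tap id 0 host-if-name tapcontainer1'); adjust the config above if needed.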
#Remove old netns symlinks
rm -Rf /var/run/netns/*
mkdir /var/run/netns
#Start VPE, use our config
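#'cli-listen 0.0.0.0:5002' exposes the VPP CLI via telnet on port 5002,
#'startup-config' replays the CLI commands written above at startup, and
#'no-pci no-huge' lets VPP run without binding PCI NICs or requiring hugepages.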
vpe unix {cli-listen 0.0.0.0:5002 startup-config /home/vagrant/vpe-host.conf } dpdk {no-pci no-huge num-mbufs 8192}
#Wait for VPE to configure and bring up interfaces
sleep 5
#Add host TAP IP
ip addr add 192.168.2.2/24 dev taphost
#Create a docker container
docker pull ubuntu
docker run --name "hasvppinterface1" ubuntu sleep 30000 &
docker run --name "hasvppinterface2" ubuntu sleep 30000 &
#Wait
sleep 5
#Expose our containers to the 'ip netns exec' tools
exposedockernetns hasvppinterface1
exposedockernetns hasvppinterface2
#Move the 'tapcontainer1' and 'tapcontainer2' VPP Linux tap interfaces into container1's and container2's network namespaces respectively.
ip link set tapcontainer1 netns hasvppinterface1
ip link set tapcontainer2 netns hasvppinterface2
#Give the in-container TAP interfaces IP addresses and bring them up. Add routes back to the host TAP via VPP.
ip netns exec hasvppinterface1 ip addr add 192.168.1.2/24 dev tapcontainer1
ip netns exec hasvppinterface1 ip link set tapcontainer1 up
ip netns exec hasvppinterface1 ip route add 192.168.2.0/24 via 192.168.1.1
ip netns exec hasvppinterface1 ip route add 192.168.3.0/24 via 192.168.1.1
ip netns exec hasvppinterface2 ip addr add 192.168.3.2/24 dev tapcontainer2
ip netns exec hasvppinterface2 ip link set tapcontainer2 up
ip netns exec hasvppinterface2 ip route add 192.168.2.0/24 via 192.168.3.1
ip netns exec hasvppinterface2 ip route add 192.168.1.0/24 via 192.168.3.1
#Let the host also know how to reach the container TAPs via VPE
ip route add 192.168.1.0/24 via 192.168.2.1
ip route add 192.168.3.0/24 via 192.168.2.1
#Block ICMP out of the default docker0 container interfaces to prevent false positive results
ip netns exec hasvppinterface1 iptables -A OUTPUT -p icmp -o eth0 -j REJECT
ip netns exec hasvppinterface2 iptables -A OUTPUT -p icmp -o eth0 -j REJECT
#TEST!
echo "Pinging container1 via host > HostVPE > Container1 TAP"
ping -c3 192.168.1.2
echo "Pinging container2 via host > HostVPE > Container2 TAP"
ping -c3 192.168.1.2
echo "Ping from container1 via TAP > HostVPP > Host"
docker exec hasvppinterface1 ping -c3 192.168.2.2
echo "Ping from container2 via TAP > HostVPP > Host"
docker exec hasvppinterface2 ping -c3 192.168.2.2
echo "Ping from Container1 to Container2 via TAP > HostVPP > TAP"
docker exec hasvppinterface1 ping -c3 192.168.3.2
Running the Example
- Access the FD.io build VM from your development host. Then sudo to root.
$ vagrant ssh
vagrant@localhost$ sudo su -
- Copy the script above into the VM (alternatively, place it in the vagrant directory of your cloned VPP repo and it will be available in the Vagrant VM under the '/vagrant' directory).
- Run the script to install docker, set up VPP and configure the relevant interfaces for VPP into each container. Pings will then be run to verify connectivity through VPP to each container.
root@localhost# chmod +x /vagrant/thisscript.sh
root@localhost# /vagrant/thisscript.sh
The script output should have ended with successful pings between the two docker containers and the host as follows:
Pinging container1 via host > HostVPE > Container1 TAP
PING 192.168.1.2 (192.168.1.2) 56(84) bytes of data.
64 bytes from 192.168.1.2: icmp_seq=2 ttl=63 time=0.261 ms
64 bytes from 192.168.1.2: icmp_seq=3 ttl=63 time=0.119 ms
--- 192.168.1.2 ping statistics ---
3 packets transmitted, 2 received, 33% packet loss, time 2006ms
rtt min/avg/max/mdev = 0.119/0.190/0.261/0.071 ms
Pinging container2 via host > HostVPE > Container2 TAP
PING 192.168.3.2 (192.168.3.2) 56(84) bytes of data.
64 bytes from 192.168.3.2: icmp_seq=1 ttl=63 time=0.423 ms
64 bytes from 192.168.3.2: icmp_seq=2 ttl=63 time=0.101 ms
64 bytes from 192.168.3.2: icmp_seq=3 ttl=63 time=0.162 ms
--- 192.168.3.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1998ms
rtt min/avg/max/mdev = 0.101/0.228/0.423/0.140 ms
Ping from container1 via TAP > HostVPP > Host
PING 192.168.2.2 (192.168.2.2) 56(84) bytes of data.
64 bytes from 192.168.2.2: icmp_seq=1 ttl=63 time=0.092 ms
64 bytes from 192.168.2.2: icmp_seq=2 ttl=63 time=0.177 ms
64 bytes from 192.168.2.2: icmp_seq=3 ttl=63 time=0.164 ms
--- 192.168.2.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.092/0.144/0.177/0.038 ms
Ping from container2 via TAP > HostVPP > Host
PING 192.168.2.2 (192.168.2.2) 56(84) bytes of data.
64 bytes from 192.168.2.2: icmp_seq=1 ttl=63 time=0.166 ms
64 bytes from 192.168.2.2: icmp_seq=2 ttl=63 time=0.122 ms
64 bytes from 192.168.2.2: icmp_seq=3 ttl=63 time=0.162 ms
--- 192.168.2.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.122/0.150/0.166/0.019 ms
Ping from Container1 to Container2 via TAP > HostVPP > TAP
PING 192.168.3.2 (192.168.3.2) 56(84) bytes of data.
64 bytes from 192.168.3.2: icmp_seq=1 ttl=63 time=0.103 ms
64 bytes from 192.168.3.2: icmp_seq=2 ttl=63 time=0.164 ms
64 bytes from 192.168.3.2: icmp_seq=3 ttl=63 time=0.289 ms
--- 192.168.3.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.103/0.185/0.289/0.078 ms
Explore the Environment
At this point, you can also verify that the two docker containers are running and view their routing tables:
root@localhost:~# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
18ae72d3c11d ubuntu "sleep 30000" 55 seconds ago Up 54 seconds hasvppinterface2
f92742c796dd ubuntu "sleep 30000" 55 seconds ago Up 54 seconds hasvppinterface1
root@localhost:~# ip netns exec hasvppinterface1 ip route list
default via 172.17.0.1 dev eth0
172.17.0.0/16 dev eth0 proto kernel scope link src 172.17.0.2
192.168.1.0/24 dev tapcontainer1 proto kernel scope link src 192.168.1.2
192.168.2.0/24 via 192.168.1.1 dev tapcontainer1
192.168.3.0/24 via 192.168.1.1 dev tapcontainer1
root@localhost:~# ip netns exec hasvppinterface2 ip route list
default via 172.17.0.1 dev eth0
172.17.0.0/16 dev eth0 proto kernel scope link src 172.17.0.3
192.168.1.0/24 via 192.168.3.1 dev tapcontainer2
192.168.2.0/24 via 192.168.3.1 dev tapcontainer2
192.168.3.0/24 dev tapcontainer2 proto kernel scope link src 192.168.3.2
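You can likewise confirm the host side of the setup, i.e. the address the script placed on taphost and the routes it added towards the containers via VPP (commands only; output will vary per run):
ip addr show taphost
ip route list | grep 192.168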
We can also connect to the VPP configuration interface (left exposed on localhost:5002 in this example) to show VPP interfaces, routing table and ARP entries as follows:
root@localhost:~# telnet localhost 5002
Trying ::1...
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
vpp# show interface
Name Idx State Counter Count
local0 0 down
pg/stream-0 1 down
pg/stream-1 2 down
pg/stream-2 3 down
pg/stream-3 4 down
tap-0 5 up rx packets 29
rx bytes 2458
tx packets 13
tx bytes 1162
drops 17
ip4 11
ip6 16
tap-1 6 up rx packets 21
rx bytes 1866
tx packets 12
tx bytes 1120
drops 9
ip4 12
ip6 8
tap-2 7 up rx packets 23
rx bytes 1926
tx packets 7
tx bytes 630
drops 16
ip4 6
ip6 16
vpp# show ip fib
Table 0, fib_index 0, flow hash: src dst sport dport proto
Destination Packets Bytes Adjacency
192.168.1.0/24 1 98 weight 1, index 3
arp tap-0 192.168.1.1/24
192.168.1.1/32 0 0 weight 1, index 4
local 192.168.1.1/24
192.168.1.2/32 5 490 weight 1, index 11
tap-0
IP4: e2:c9:1a:c6:5e:06 -> e2:c9:1a:c6:5e:06
192.168.2.0/24 0 0 weight 1, index 5
arp tap-1 192.168.2.1/24
192.168.2.1/32 0 0 weight 1, index 6
local 192.168.2.1/24
192.168.2.2/32 11 1078 weight 1, index 9
tap-1
IP4: 42:f0:8e:5f:65:64 -> 42:f0:8e:5f:65:64
192.168.3.0/24 0 0 weight 1, index 7
arp tap-2 192.168.3.1/24
192.168.3.1/32 0 0 weight 1, index 8
local 192.168.3.1/24
192.168.3.2/32 6 588 weight 1, index 10
tap-2
IP4: 8a:e2:eb:68:15:9d -> 8a:e2:eb:68:15:9d
vpp# show ip arp
Time FIB IP4 Stat Ethernet Interface
40.7780 0 192.168.1.2 e2:c9:1a:c6:5e:06 tap-0
34.7544 0 192.168.2.2 42:f0:8e:5f:65:64 tap-1
41.9191 0 192.168.3.2 8a:e2:eb:68:15:9d tap-2
vpp# quit
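The same information can also be retrieved non-interactively from the host using vppctl (as the later examples do), for instance:
vppctl show interface
vppctl show ip fib
vppctl show ip arp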
Run other commands in the containers
If you wish to run any other commands from within the VPP-routed containers, you can open a shell in a container as follows:
root@localhost:~# docker exec -ti hasvppinterface1 bash
root@f92742c796dd:/# echo "hello from container1"
hello from container1
root@f92742c796dd:/# exit
root@localhost:~# docker exec -ti hasvppinterface2 bash
root@18ae72d3c11d:/# echo "hello from container2"
hello from container2
root@18ae72d3c11d:/# ip route list
default via 172.17.0.1 dev eth0
172.17.0.0/16 dev eth0 proto kernel scope link src 172.17.0.3
192.168.1.0/24 via 192.168.3.1 dev tapcontainer2
192.168.2.0/24 via 192.168.3.1 dev tapcontainer2
192.168.3.0/24 dev tapcontainer2 proto kernel scope link src 192.168.3.2
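To confirm that this traffic really traverses VPP rather than the default docker0 bridge, you can use VPP's packet tracer while pinging from a container. A minimal sketch; the 'tapcli-rx' input node name is an assumption for the legacy TAP driver used here and may differ by VPP version:
vppctl trace add tapcli-rx 10
docker exec hasvppinterface1 ping -c3 192.168.3.2
vppctl show trace
vppctl clear trace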
Non-Vagrant environments
The script used to create this demo is fairly well commented and uses very simple Linux shell commands, so it should be easy to reproduce in other environments.
Adding a new tap interface
Unlike the AF_PACKET interfaces, which had to be specified at VPP startup (this is no longer true, see 'create host-interface'), we can add a new Linux TAP interface to VPP at any point through configuration. Still in the Vagrant SSH session, see the following example:
### Show current VPP and linux interfaces
root@localhost:~# vppctl show int
Name Idx State Counter Count
local0 0 down
pg/stream-0 1 down
pg/stream-1 2 down
pg/stream-2 3 down
pg/stream-3 4 down
tap-0 5 up rx packets 29
rx bytes 2458
tx packets 13
tx bytes 1162
drops 17
ip4 11
ip6 16
tap-1 6 up rx packets 21
rx bytes 1866
tx packets 12
tx bytes 1120
drops 9
ip4 12
ip6 8
tap-2 7 up rx packets 23
rx bytes 1926
tx packets 7
tx bytes 630
drops 16
ip4 6
ip6 16
root@localhost:~# ip link list
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
link/ether 08:00:27:b1:94:b1 brd ff:ff:ff:ff:ff:ff
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
link/ether 02:42:53:6f:28:38 brd ff:ff:ff:ff:ff:ff
5: taphost: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 4352 qdisc pfifo_fast state UNKNOWN mode DEFAULT group default qlen 500
link/ether 42:f0:8e:5f:65:64 brd ff:ff:ff:ff:ff:ff
8: veth27cd9d5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default
link/ether 72:7c:93:5f:a9:d8 brd ff:ff:ff:ff:ff:ff
10: veth9d6aa0f: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default
link/ether 26:32:9e:07:5a:e0 brd ff:ff:ff:ff:ff:ff
### Add a new tap interface
root@localhost:~# vppctl tap connect newtap
### Show interfaces on VPP and Linux
root@localhost:~# vppctl show interface
Name Idx State Counter Count
local0 0 down
pg/stream-0 1 down
pg/stream-1 2 down
pg/stream-2 3 down
pg/stream-3 4 down
tap-0 5 up rx packets 29
rx bytes 2458
tx packets 13
tx bytes 1162
drops 17
ip4 11
ip6 16
tap-1 6 up rx packets 21
rx bytes 1866
tx packets 12
tx bytes 1120
drops 9
ip4 12
ip6 8
tap-2 7 up rx packets 23
rx bytes 1926
tx packets 7
tx bytes 630
drops 16
ip4 6
ip6 16
tap-3 8 down rx packets 7
rx bytes 578
drops 7
root@localhost:~# ip link list
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
link/ether 08:00:27:b1:94:b1 brd ff:ff:ff:ff:ff:ff
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
link/ether 02:42:53:6f:28:38 brd ff:ff:ff:ff:ff:ff
5: taphost: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 4352 qdisc pfifo_fast state UNKNOWN mode DEFAULT group default qlen 500
link/ether 42:f0:8e:5f:65:64 brd ff:ff:ff:ff:ff:ff
8: veth27cd9d5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default
link/ether 72:7c:93:5f:a9:d8 brd ff:ff:ff:ff:ff:ff
10: veth9d6aa0f: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default
link/ether 26:32:9e:07:5a:e0 brd ff:ff:ff:ff:ff:ff
11: newtap: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 4352 qdisc pfifo_fast state UNKNOWN mode DEFAULT group default qlen 500
link/ether da:28:dd:bd:e2:93 brd ff:ff:ff:ff:ff:ff
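The new tap-3 interface has no address and is still down on the VPP side. To put it to use, configure it the same way the script configured the other taps; for example (192.168.4.0/24 is simply an unused subnet chosen for illustration):
vppctl set int ip addr tap-3 192.168.4.1/24
vppctl set int state tap-3 up
ip addr add 192.168.4.2/24 dev newtap
ip link set newtap up
ping -c3 192.168.4.1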