VPP Pure_L3_Container_Networking
Dave Wallace edited this page Apr 21, 2026 · 1 revision
This example shows how to configure VPP as an IPv4 router interconnecting containers across two hosts.
VPP itself runs in the root namespace, with a separate namespace for each container.
The basic setup uses static ARP entries and unnumbers the host-facing interfaces from a loopback. In that respect it is a hybrid of Methods 1 and 2 from Pure_L3_Between_Namespaces_with_slash32s.
The two hosts are interconnected by a router, which has /24 routes for the container subnets pointed at the appropriate vSwitch interfaces.
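The interconnect router itself is outside the scope of this page; the following is only an illustrative sketch of the routes it would need, assuming a Linux box in that role and the addressing used in the sections below (the next-hop addresses 192.168.101.1 and 192.168.103.1 are the VPP GigabitEthernet addresses configured later on each host).

```shell
# Hypothetical interconnect-router routes (addresses taken from the
# per-host configuration below): point each host's container subnet
# at that host's vSwitch-facing interface address.
ip route add 192.168.200.0/24 via 192.168.101.1   # client host's containers
ip route add 192.168.204.0/24 via 192.168.103.1   # server host's containers
```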
On each host do:
sudo docker create -e MICROSERVICE_LABEL=vpp -it --privileged -v "/tmp/vpp_socket:/tmp" -p 5001:5002 -p 9191:9191 --name vpp --network=host contivvpp/vswitch
Create a file named vpp.conf as follows:
unix {
nodaemon
cli-listen 0.0.0.0:5002
cli-no-pager
}
dpdk {
dev 0000:09:00.0 # replace this with an Ethernet interface on your host
uio-driver igb_uio
}
Then copy it into the container:
sudo docker cp vpp.conf vpp:/etc/vpp/vpp.conf
sudo docker create -it --name client ubuntu
(for the server host change the name to "server")
From the Linux command line:
sudo docker start vpp client
export pid="$(sudo docker inspect -f '{{.State.Pid}}' "client")"
sudo mkdir -p /var/run/netns
sudo ln -sf /proc/$pid/ns/net /var/run/netns/client
sudo ip link add name veth_client type veth peer name client
sudo ip link set dev client up
sudo ip link set dev veth_client up netns client
export mac="$(sudo docker exec client ifconfig veth_client | awk 'NR==1{print $5}')"
echo $mac
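Parsing ifconfig output is fragile, since the line containing the MAC address differs between net-tools versions. A sketch of an alternative that reads the address straight from sysfs (shown here against "lo" so it runs anywhere; in this guide you would substitute "veth_client" and run it via "sudo docker exec client ..."):

```shell
# Read the interface MAC from the kernel's sysfs attribute instead of
# scraping ifconfig output. The loopback device's address is all zeros.
mac="$(cat /sys/class/net/lo/address)"
echo "$mac"
```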
export vmac="$(printf '
set int ip address GigabitEthernet1/0/0 192.168.101.1/24\n
set int state GigabitEthernet1/0/0 up\n
create loopback interface\n
set int ip address loop0 192.168.200.1/24\n
set int state loop0 up\n
create host-interface name client\n
set int unnumbered host-client use loop0\n
set ip arp host-client 192.168.200.2 MAC\n
set int state host-client up\n
ip route add 192.168.200.2/32 via 192.168.200.2 host-client\n
ip route add 192.168.0.0/16 via 192.168.101.254 GigabitEthernet1/0/0\n
show hardware-interfaces host-client\n
quit' | sed s/MAC/$mac/ | nc 0 5002 | awk 'NR==29{print $3}')"
echo $vmac
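Note that the awk 'NR==29{print $3}' extraction above is pinned to one exact layout of the "show hardware-interfaces" output and silently breaks if VPP adds or removes a line. A less position-sensitive sketch that matches the "Ethernet address" line itself (the sample output piped in here is illustrative, not real VPP output):

```shell
# Extract the MAC by pattern instead of by line number: match the line
# containing "Ethernet address" and print its third field.
printf 'host-client\n  Link speed: unknown\n  Ethernet address 02:fe:ab:cd:ef:01\n' \
  | awk '/Ethernet address/{print $3}'
```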
sudo ip netns exec client ip link set dev lo up
sudo ip netns exec client ip addr add 192.168.200.2/32 dev veth_client
sudo ip netns exec client ip neigh add 192.168.200.1 lladdr $vmac dev veth_client
sudo ip netns exec client ip route add 192.168.200.1 dev veth_client scope link
sudo ip netns exec client ip route add 192.168.0.0/16 via 192.168.200.1 dev veth_client
sudo ip netns exec client ip route add 1.2.3.4/32 via 192.168.200.1 dev veth_client
sudo docker exec client ping -c 1 192.168.200.1
The ping should succeed.
Again from the Linux command line:
sudo docker start vpp server
export pid="$(sudo docker inspect -f '{{.State.Pid}}' "server")"
sudo mkdir -p /var/run/netns
sudo ln -sf /proc/$pid/ns/net /var/run/netns/server
sudo ip link add name veth_server type veth peer name server
sudo ip link set dev server up
sudo ip link set dev veth_server up netns server
export mac="$(sudo docker exec server ifconfig veth_server | awk 'NR==1{print $5}')"
echo $mac
export vmac="$(printf '
set int ip address GigabitEthernet1/0/0 192.168.103.1/24\n
set int state GigabitEthernet1/0/0 up\n
create loopback interface\n
set int ip address loop0 192.168.204.1/24\n
set int state loop0 up\n
create host-interface name server\n
set int unnumbered host-server use loop0\n
set ip arp host-server 192.168.204.2 MAC\n
set int state host-server up\n
ip route add 192.168.204.2/32 via 192.168.204.2 host-server\n
ip route add 192.168.0.0/16 via 192.168.103.254 GigabitEthernet1/0/0\n
show hardware-interfaces host-server\n
quit' | sed s/MAC/$mac/ | nc 0 5002 | awk 'NR==29{print $3}')"
echo $vmac
sudo ip netns exec server ip link set dev lo up
sudo ip netns exec server ip addr add 192.168.204.2/32 dev veth_server
sudo ip netns exec server ip neigh add 192.168.204.1 lladdr $vmac dev veth_server
sudo ip netns exec server ip route add 192.168.204.1 dev veth_server scope link
sudo ip netns exec server ip route add 192.168.0.0/16 via 192.168.204.1 dev veth_server
sudo docker exec server ping -c 1 192.168.204.1
Again, the ping should succeed.
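Once both hosts are configured and the interconnect router carries the two /24 routes described at the top, a hypothetical end-to-end check (addresses taken from the sections above) would be to ping the server container's /32 from the client container:

```shell
# Hypothetical cross-host check: traffic goes client netns -> VPP on the
# client host -> interconnect router -> VPP on the server host -> server netns.
sudo docker exec client ping -c 1 192.168.204.2
```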