Use VPP to Chain VMs Using Vhost-User Interfaces
In all of the examples described below, we connect a physical interface to a vhost-user interface, which is consumed either by a QEMU VM image or a Clear Container. The following startup.conf can be used for all of the examples.
unix {
nodaemon
log /var/log/vpp/vpp.log
full-coredump
cli-listen localhost:5002
}
api-trace {
on
}
api-segment {
gid vpp
}
cpu {
main-core 1
corelist-workers 2-6
}
dpdk {
dev 0000:05:00.1
dev 0000:05:00.2
socket-mem 2048,0
no-multi-seg
}
Note the PCI devices listed in the dpdk stanza, 0000:05:00.1 and 0000:05:00.2. These are specific to this test setup and identify the NIC ports on the particular host; substitute the PCI addresses of your own NICs.
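If you are unsure which PCI addresses to put in the dpdk stanza, they can be discovered on the host before starting VPP. A quick sketch (the interface name below is a placeholder for your own NIC):

```shell
# List Ethernet devices with their full PCI addresses (domain:bus:slot.func)
lspci -D | grep -i ethernet

# Or, for an interface known by name, read its PCI address from sysfs;
# 'enp5s0f1' here is illustrative -- use your own interface name.
readlink -f /sys/class/net/enp5s0f1/device
```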
- 1 QEMU Instance with two VPP Vhost-User Interfaces
- 2 Two Chained QEMU Instances with VPP Vhost-User Interfaces
For performance testing, a useful topology is to test between two physical interfaces and through a VM. In our first example, let's look at PHY -> vhost-user -> VM (L2FWD) -> vhost-user -> PHY. To achieve this, we'll need two things: a VM with two vhost-user interfaces, and a DPDK-enabled application inside the VM to forward traffic between them (such as testpmd).
- View interfaces and put into 'up' state
vpp# show interfaces
Name Idx State Counter Count
TenGigabitEthernet5/0/1 1 down
TenGigabitEthernet5/0/2 2 down
vpp# set interface state TenGigabitEthernet5/0/1 up
vpp# set interface state TenGigabitEthernet5/0/2 up
vpp# show interfaces
Name Idx State Counter Count
TenGigabitEthernet5/0/1 1 up
TenGigabitEthernet5/0/2 2 up
- Connect each physical interface to an L2 bridge
vpp# set interface l2 bridge TenGigabitEthernet5/0/1 1
vpp# set interface l2 bridge TenGigabitEthernet5/0/2 2
- Create, bring up and add vhost-user interfaces to L2 bridges
vpp# create vhost-user socket /var/run/vpp/sock1.sock server
VirtualEthernet0/0/0
vpp# create vhost-user socket /var/run/vpp/sock2.sock server
VirtualEthernet0/0/1
vpp# set interface state VirtualEthernet0/0/0 up
vpp# set interface state VirtualEthernet0/0/1 up
vpp# set interface l2 bridge VirtualEthernet0/0/0 1
vpp# set interface l2 bridge VirtualEthernet0/0/1 2
vpp# show interface
Name Idx State Counter Count
TenGigabitEthernet5/0/1 1 up
TenGigabitEthernet5/0/2 2 up
VirtualEthernet0/0/0 3 up
VirtualEthernet0/0/1 4 up
local0 0 down
- Show resulting bridge setup
vpp# show bridge 1 detail
ID Index Learning U-Forwrd UU-Flood Flooding ARP-Term BVI-Intf
1 1 on on on on off N/A
Interface Index SHG BVI TxFlood VLAN-Tag-Rewrite
TenGigabitEthernet5/0/1 1 0 - * none
VirtualEthernet0/0/0 3 0 - * none
vpp# show bridge 2 detail
ID Index Learning U-Forwrd UU-Flood Flooding ARP-Term BVI-Intf
2 2 on on on on off N/A
Interface Index SHG BVI TxFlood VLAN-Tag-Rewrite
TenGigabitEthernet5/0/2 2 0 - * none
VirtualEthernet0/0/1 4 0 - * none
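Before launching the VM it can be useful to confirm that VPP created the server-mode sockets and is ready for a client to connect. A sketch (exact output shape varies by VPP version):

```shell
# From the VPP CLI: show vhost-user interface state, features and memory regions
vppctl show vhost-user

# From the host shell: the server-mode sockets should exist on disk
ls -l /var/run/vpp/sock1.sock /var/run/vpp/sock2.sock
```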
- Launch a QEMU instance with the two vhost-user interfaces:
taskset 0x3C0 qemu-system-x86_64 \
-enable-kvm -m 8192 -smp cores=4,threads=1,sockets=1 -cpu host \
-drive file="ubuntu-16.04-server-cloudimg-amd64-disk1.img",if=virtio,aio=threads \
-drive file="seed.img",if=virtio,aio=threads \
-nographic -object memory-backend-file,id=mem,size=8192M,mem-path=/dev/hugepages,share=on \
-numa node,memdev=mem \
-mem-prealloc \
-chardev socket,id=char1,path=/var/run/vpp/sock1.sock \
-netdev type=vhost-user,id=net1,chardev=char1,vhostforce \
-device virtio-net-pci,netdev=net1,mac=00:00:00:00:00:01,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off,mrg_rxbuf=off \
-chardev socket,id=char2,path=/var/run/vpp/sock2.sock \
-netdev type=vhost-user,id=net2,chardev=char2,vhostforce \
-device virtio-net-pci,netdev=net2,mac=00:00:00:00:00:02,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off,mrg_rxbuf=off
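Inside the VM, the two virtio devices need something to forward frames between them. One common choice is DPDK's testpmd; a rough sketch, assuming DPDK is installed in the guest and the virtio NICs appear at PCI 00:04.0 and 00:05.0 (check with lspci, yours may differ):

```shell
# Reserve hugepages and bind the two virtio NICs to a DPDK-capable driver
echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
modprobe uio_pci_generic
dpdk-devbind.py --bind=uio_pci_generic 00:04.0 00:05.0

# Forward everything received on one port out the other, starting immediately
testpmd -l 0-1 -n 4 -- --forward-mode=io --auto-start
```

With testpmd forwarding in io mode, traffic injected on TenGigabitEthernet5/0/1 should come back out TenGigabitEthernet5/0/2, completing the PHY -> VM -> PHY path.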
Taking the prior example further, let's chain two QEMU instances together so we have PHY -> VM -> VM -> PHY, with connectivity provided by VPP L2 bridges. Since each bridge has only two interfaces attached to it, an optimization would be to use VPP L2 xconnect instead.
# bring up the physical endpoints
sudo vppctl set interface state TenGigabitEthernet5/0/1 up
sudo vppctl set interface state TenGigabitEthernet5/0/2 up
sudo vppctl show interfaces
# connect each physical interface to an L2 bridge
sudo vppctl set interface l2 bridge TenGigabitEthernet5/0/1 1
sudo vppctl set interface l2 bridge TenGigabitEthernet5/0/2 3
# Create and bring up four vhost-user interfaces:
sudo vppctl create vhost-user socket /var/run/vpp/sock1.sock server
sudo vppctl create vhost-user socket /var/run/vpp/sock2.sock server
sudo vppctl create vhost-user socket /var/run/vpp/sock3.sock server
sudo vppctl create vhost-user socket /var/run/vpp/sock4.sock server
sudo vppctl set interface state VirtualEthernet0/0/0 up
sudo vppctl set interface state VirtualEthernet0/0/1 up
sudo vppctl set interface state VirtualEthernet0/0/2 up
sudo vppctl set interface state VirtualEthernet0/0/3 up
# Add virtual interfaces to L2 bridges:
sudo vppctl set interface l2 bridge VirtualEthernet0/0/0 1
sudo vppctl set interface l2 bridge VirtualEthernet0/0/1 2
sudo vppctl set interface l2 bridge VirtualEthernet0/0/2 2
sudo vppctl set interface l2 bridge VirtualEthernet0/0/3 3
# Show bridge setup:
sudo vppctl show bridge-domain 1 detail
sudo vppctl show bridge-domain 2 detail
sudo vppctl show bridge-domain 3 detail
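As noted above, each bridge domain here carries exactly two interfaces, so the bridges could be replaced with L2 cross-connects, which skip MAC learning and flooding entirely. A sketch of the equivalent wiring (xconnect is unidirectional, so each pair needs a command in each direction):

```shell
# PHY <-> first VM
sudo vppctl set interface l2 xconnect TenGigabitEthernet5/0/1 VirtualEthernet0/0/0
sudo vppctl set interface l2 xconnect VirtualEthernet0/0/0 TenGigabitEthernet5/0/1
# first VM <-> second VM
sudo vppctl set interface l2 xconnect VirtualEthernet0/0/1 VirtualEthernet0/0/2
sudo vppctl set interface l2 xconnect VirtualEthernet0/0/2 VirtualEthernet0/0/1
# second VM <-> PHY
sudo vppctl set interface l2 xconnect VirtualEthernet0/0/3 TenGigabitEthernet5/0/2
sudo vppctl set interface l2 xconnect TenGigabitEthernet5/0/2 VirtualEthernet0/0/3
```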
- Launch first VM in chain:
taskset 0x3c000 qemu-system-x86_64 \
-enable-kvm -m 8192 -smp cores=4,threads=1,sockets=1 -cpu host \
-drive file="ubuntu-16.04-server-cloudimg-amd64-disk1.img",if=virtio,aio=threads \
-drive file="seed.img",if=virtio,aio=threads \
-nographic -object memory-backend-file,id=mem,size=8192M,mem-path=/dev/hugepages,share=on \
-numa node,memdev=mem \
-mem-prealloc \
-chardev socket,id=char1,path=/var/run/vpp/sock1.sock \
-netdev type=vhost-user,id=net1,chardev=char1,vhostforce \
-device virtio-net-pci,netdev=net1,mac=00:00:00:00:00:01,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off,mrg_rxbuf=off \
-chardev socket,id=char2,path=/var/run/vpp/sock2.sock \
-netdev type=vhost-user,id=net2,chardev=char2,vhostforce \
-device virtio-net-pci,netdev=net2,mac=00:00:00:00:00:02,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off,mrg_rxbuf=off
- Launch second VM in chain:
taskset 0x3c00 qemu-system-x86_64 \
-enable-kvm -m 8192 -smp cores=4,threads=1,sockets=1 -cpu host \
-drive file="ubuntu-16.04-server-cloudimg-amd64-disk1.img",if=virtio,aio=threads \
-drive file="seed.img",if=virtio,aio=threads \
-nographic -object memory-backend-file,id=mem,size=8192M,mem-path=/dev/hugepages,share=on \
-numa node,memdev=mem \
-mem-prealloc \
-chardev socket,id=char1,path=/var/run/vpp/sock3.sock \
-netdev type=vhost-user,id=net1,chardev=char1,vhostforce \
-device virtio-net-pci,netdev=net1,mac=00:00:00:00:00:03,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off,mrg_rxbuf=off \
-chardev socket,id=char2,path=/var/run/vpp/sock4.sock \
-netdev type=vhost-user,id=net2,chardev=char2,vhostforce \
-device virtio-net-pci,netdev=net2,mac=00:00:00:00:00:04,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off,mrg_rxbuf=off