VPP How_To_Connect_A_PCI_Interface_To_VPP
- 1 Introduction
- 2 Starting from Setting Up Your Dev Environment
- 3 Configuring VPP to use the additional NICs
- 4 Taking your new NICs for a spin
Introduction
In this tutorial you will learn how to connect a PCI interface to VPP.
Starting from Setting Up Your Dev Environment
You can try this exercise using the Vagrant file provided in vpp/build-root/vagrant. To get started there, go to Setting Up Your Dev Environment (if you have not already).
Once you have this Vagrant working, set the environment variable VPP_VAGRANT_NICS to the number of additional NICs you would like. In this tutorial, we will use the example of 1 additional NIC.
Example:
VPP_VAGRANT_NICS=1
If you have already created a VM for this Vagrant, you will need to destroy and recreate it for the changes to take effect:
vagrant destroy -f; vagrant up
The Vagrant sets up additional NICs as 'DHCP', meaning they get an IP address assigned by DHCP. You will want to capture the information about them so you can interact correctly with the network they are connected to.
Example:
vagrant@localhost:~$ ip address
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:b1:94:b1 brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::a00:27ff:feb1:94b1/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:af:66:51 brd ff:ff:ff:ff:ff:ff
inet 172.28.128.5/24 brd 172.28.128.255 scope global eth1
valid_lft forever preferred_lft forever
inet6 fe80::a00:27ff:feaf:6651/64 scope link
valid_lft forever preferred_lft forever
So in this example we can read off the IP address:
- eth1 - 172.28.128.5/24
We'll need to save those for assignment to the VPP interfaces later.
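If you want to capture that address programmatically rather than by eye, a small helper can pull it out of iproute2's one-line output. This is a convenience sketch, not part of the tutorial's required steps; `pick_ip` is a hypothetical name, and it assumes the field layout of `ip -4 -o addr`:

```shell
# pick_ip DEV: print the IPv4 address/prefix for device DEV,
# reading `ip -4 -o addr` output on stdin.
# One-line format fields: <idx>: <dev> inet <addr>/<len> ...
pick_ip() {
  awk -v dev="$1" '$2 == dev && $3 == "inet" { print $4 }'
}
```

Usage: `ip -4 -o addr | pick_ip eth1`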
Configuring VPP to use the additional NICs
To 'whitelist' an interface with VPP (i.e., tell it to grab that NIC), we first need to find the interface's PCI address.
Example:
vagrant@localhost:~$ sudo lshw -class network -businfo
Bus info Device Class Description
===================================================
pci@0000:00:03.0 eth0 network 82540EM Gigabit Ethernet Controller
pci@0000:00:08.0 eth1 network 82540EM Gigabit Ethernet Controller
In this case we can see:
- eth1 - 0000:00:08.0
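To script this lookup, a small helper can parse the `lshw -businfo` output shown above. This is a hypothetical convenience function (`pci_of` is not an existing tool), assuming the column layout `lshw -class network -businfo` produces:

```shell
# pci_of DEV: print the PCI address for network device DEV,
# reading `lshw -class network -businfo` output on stdin.
# Data lines look like: pci@0000:00:08.0  eth1  network  <description>
pci_of() {
  awk -v dev="$1" '$2 == dev && $1 ~ /^pci@/ { sub(/^pci@/, "", $1); print $1 }'
}
```

Usage: `sudo lshw -class network -businfo | pci_of eth1`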
To configure VPP to use eth1 via DPDK, edit
/etc/vpp/startup.conf
and change its dpdk section to contain a 'dev' entry for each PCI address you captured in the previous step.
Example:
dpdk {
socket-mem 1024
dev 0000:00:08.0
}
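If you set VPP_VAGRANT_NICS higher than 1, add one 'dev' line per PCI address. For example (the second address below is illustrative; use the addresses you captured with lshw):

```
dpdk {
  socket-mem 1024
  dev 0000:00:08.0
  dev 0000:00:09.0
}
```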
Restart VPP
sudo restart vpp
Troubleshooting
- PCI interfaces are not detected, do not show up in VPP's "show interface", or you see messages like the following when VPP is started:
0: dpdk_lib_init:308: DPDK drivers found no ports...
0: dpdk_lib_init:312: DPDK drivers found 0 ports...
1.1. Check whether the interface you are trying to use is up/configured for use by the Linux kernel. If it is, shut it down. For example, if you want to use eth1 in VPP:
# ifconfig eth1 down
# ip addr flush dev eth1
Restart VPP.
1.2. If the interface is down and unconfigured but still does not show up in VPP, check the output of "show pci" in VPP:
vpp# show pci
Address Socket VID:PID Link Speed Driver Product Name
0000:08:00.0 0 1137:0043 5.0 GT/s x16
Load the igb_uio driver manually, or via DKMS, and restart VPP:
# modprobe igb_uio
..Restart VPP..
vpp# show pci
Address Socket VID:PID Link Speed Driver Product Name
0000:08:00.0 0 1137:0043 5.0 GT/s x16 igb_uio
vpp# show int
Name Idx State Counter Count
TenGigabitEthernet8/0/0 1 down
Taking your new NICs for a spin
vagrant@localhost:~$ sudo vppctl show int
Name Idx State Counter Count
GigabitEthernet0/8/0 5 down
local0 0 down
pg/stream-0 1 down
pg/stream-1 2 down
pg/stream-2 3 down
pg/stream-3 4 down
You can see the new interfaces:
- GigabitEthernet0/8/0 - corresponding to PCI address 0000:00:08.0 which corresponds to eth1
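The interface name encodes the PCI bus/slot/function as hex digits with leading zeros dropped, which is why 0000:00:08.0 shows up as GigabitEthernet0/8/0 (and 0000:08:00.0 earlier showed up as TenGigabitEthernet8/0/0). As a sketch, a hypothetical helper deriving that stem from a PCI address (the speed prefix itself comes from the NIC, not the address):

```shell
# pci_to_stem ADDR: turn a PCI address into VPP's bus/slot/function name stem,
# e.g. "0000:00:08.0" -> "0/8/0" (hex, leading zeros dropped).
# The prefix (GigabitEthernet, TenGigabitEthernet, ...) depends on the NIC speed.
pci_to_stem() {
  local addr=${1#*:}        # drop the PCI domain -> "00:08.0"
  local bus=${addr%%:*}     # "00"
  local rest=${addr#*:}     # "08.0"
  printf '%x/%x/%x\n' "0x$bus" "0x${rest%%.*}" "0x${rest#*.}"
}
```

Usage: `pci_to_stem 0000:00:08.0` prints `0/8/0`, matching GigabitEthernet0/8/0.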
Assign the IP address you saved earlier and bring the interface up:
vagrant@localhost:~$ sudo vppctl set int ip address GigabitEthernet0/8/0 172.28.128.5/24
vagrant@localhost:~$ sudo vppctl set interface state GigabitEthernet0/8/0 up
To verify the assignment:
vagrant@localhost:~$ sudo vppctl show int address
GigabitEthernet0/8/0 (up):
172.28.128.5/24
local0 (dn):
pg/stream-0 (dn):
pg/stream-1 (dn):
pg/stream-2 (dn):
pg/stream-3 (dn):
To set up a trace:
vagrant@localhost:~$ sudo vppctl trace add dpdk-input 10
From your host:
ping -c 1 172.28.128.5
PING 172.28.128.5 (172.28.128.5): 56 data bytes
64 bytes from 172.28.128.5: icmp_seq=0 ttl=64 time=0.835 ms
--- 172.28.128.5 ping statistics ---
1 packets transmitted, 1 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.835/0.835/0.835/0.000 ms
vagrant@localhost:~$ sudo vppctl show trace
------------------- Start of thread 0 vpp_main -------------------
Packet 1
00:02:15:410299: dpdk-input
GigabitEthernet0/8/0 rx queue 0
buffer 0xae87: current data 0, length 60, free-list 0, totlen-nifb 0, trace 0x0
PKT MBUF: port 0, nb_segs 1, pkt_len 60
buf_len 2304, data_len 60, ol_flags 0x0,
packet_type 0x0
ARP: 0a:00:27:00:00:05 -> ff:ff:ff:ff:ff:ff
request, type ethernet/IP4, address size 6/4
0a:00:27:00:00:05/172.28.128.1 -> 00:00:00:00:00:00/172.28.128.5
00:02:15:410461: ethernet-input
ARP: 0a:00:27:00:00:05 -> ff:ff:ff:ff:ff:ff
00:02:15:410489: arp-input
request, type ethernet/IP4, address size 6/4
0a:00:27:00:00:05/172.28.128.1 -> 00:00:00:00:00:00/172.28.128.5
00:02:15:410569: GigabitEthernet0/8/0-output
GigabitEthernet0/8/0
ARP: 08:00:27:af:66:51 -> 0a:00:27:00:00:05
reply, type ethernet/IP4, address size 6/4
08:00:27:af:66:51/172.28.128.5 -> 0a:00:27:00:00:05/172.28.128.1
00:02:15:410576: GigabitEthernet0/8/0-tx
GigabitEthernet0/8/0 tx queue 0
buffer 0xae87: current data 0, length 60, free-list 0, totlen-nifb 0, trace 0x0
ARP: 08:00:27:af:66:51 -> 0a:00:27:00:00:05
reply, type ethernet/IP4, address size 6/4
08:00:27:af:66:51/172.28.128.5 -> 0a:00:27:00:00:05/172.28.128.1
Packet 2
00:02:15:410719: dpdk-input
GigabitEthernet0/8/0 rx queue 0
buffer 0xae60: current data 0, length 98, free-list 0, totlen-nifb 0, trace 0x1
PKT MBUF: port 0, nb_segs 1, pkt_len 98
buf_len 2304, data_len 98, ol_flags 0x0,
packet_type 0x0
IP4: 0a:00:27:00:00:05 -> 08:00:27:af:66:51
ICMP: 172.28.128.1 -> 172.28.128.5
tos 0x00, ttl 64, length 84, checksum 0xc442
fragment id 0x5e27
ICMP echo_request checksum 0xabfe
00:02:15:410774: ethernet-input
IP4: 0a:00:27:00:00:05 -> 08:00:27:af:66:51
00:02:15:410782: ip4-input
ICMP: 172.28.128.1 -> 172.28.128.5
tos 0x00, ttl 64, length 84, checksum 0xc442
fragment id 0x5e27
ICMP echo_request checksum 0xabfe
00:02:15:410799: ip4-local
fib: 0 adjacency: local 172.28.128.5/24 flow hash: 0x00000000
00:02:15:410805: ip4-icmp-input
ICMP: 172.28.128.1 -> 172.28.128.5
tos 0x00, ttl 64, length 84, checksum 0xc442
fragment id 0x5e27
ICMP echo_request checksum 0xabfe
00:02:15:410811: ip4-icmp-echo-request
ICMP: 172.28.128.1 -> 172.28.128.5
tos 0x00, ttl 64, length 84, checksum 0xc442
fragment id 0x5e27
ICMP echo_request checksum 0xabfe
00:02:15:410824: ip4-rewrite-local
fib: 0 adjacency: GigabitEthernet0/8/0
IP4: 08:00:27:af:66:51 -> 0a:00:27:00:00:05 flow hash: 0x00000000
IP4: 08:00:27:af:66:51 -> 0a:00:27:00:00:05
ICMP: 172.28.128.5 -> 172.28.128.1
tos 0x00, ttl 64, length 84, checksum 0x98d9
fragment id 0x8990
ICMP echo_reply checksum 0xb3fe
00:02:15:410827: GigabitEthernet0/8/0-output
GigabitEthernet0/8/0
IP4: 08:00:27:af:66:51 -> 0a:00:27:00:00:05
ICMP: 172.28.128.5 -> 172.28.128.1
tos 0x00, ttl 64, length 84, checksum 0x98d9
fragment id 0x8990
ICMP echo_reply checksum 0xb3fe
00:02:15:410830: GigabitEthernet0/8/0-tx
GigabitEthernet0/8/0 tx queue 0
buffer 0xae60: current data 0, length 98, free-list 0, totlen-nifb 0, trace 0x1
IP4: 08:00:27:af:66:51 -> 0a:00:27:00:00:05
ICMP: 172.28.128.5 -> 172.28.128.1
tos 0x00, ttl 64, length 84, checksum 0x98d9
fragment id 0x8990
ICMP echo_reply checksum 0xb3fe
Finally, clear the trace buffer:
vagrant@localhost:~$ sudo vppctl clear trace