VPP Modifying The Packet Processing Directed Graph
- 1 Introduction
- 2 Per-interface RX Redirection Details
- 3 Capturing Packets with Particular Ethertypes
- 4 Capturing for-us Packets for Newly-implemented IP Protocols
- 5 Capturing ip-for-us Packets Sent to Specific UDP dst Ports
- 6 Dispatch Function Hackery
1 Introduction
The VPP platform uses a frame dispatcher and a directed node graph to process packets. Teaching the VPP platform how to process new kinds of traffic - or how to process known traffic in a different way - boils down to modifying graph trajectories.
There are several ways to do this. At the simple end of the spectrum, vpp device drivers implement a per-interface RX redirect function. All packets from the indicated interface are delivered to the supplied node.
For-us packets sent to a specific (address family, protocol-ID) can be delivered to a specific graph node. Implementing a new IP protocol works this way.
Protocols such as vxlan, which involve processing for-us packets sent to a specific udp dst port, have a similar (completely straightforward) API.
One can add per-interface configurable feature arcs at specific points in the forwarding graph. Example: ip4-input and ip4-input-no-checksum share a set of per-interface configurable features. Depending on configuration, packets visit ip4-input -> ip4-lookup, ip4-input -> ip4-source-check-via-rx, ip4-input -> ip4-source-check-via-any, ip4-input -> ipsec-input-ip4, or combinations thereof.
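In current VPP trees, hooking a node into a feature arc is done with the VNET_FEATURE_INIT macro. A minimal sketch, assuming a hypothetical node named "my-feature-node"; this fragment requires the VPP source tree and is not standalone:

```c
#include <vnet/feature/feature.h>

/* Sketch: attach a hypothetical node to the ip4-unicast feature arc
 * so that (when enabled on an interface) packets visit it before
 * ip4-lookup. "my-feature-node" is an assumed placeholder name. */
VNET_FEATURE_INIT (my_feature, static) =
{
  .arc_name = "ip4-unicast",
  .node_name = "my-feature-node",
  .runs_before = VNET_FEATURES ("ip4-lookup"),
};
```

The feature is then enabled or disabled per interface at runtime, which is what makes feature arcs configurable rather than hard-wired graph arcs.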
Full N-tuple classification provides another degree of freedom.
The show vlib graph debug CLI command shows the entire graph. It's not unknown for graph arcs which "should have been created" to end up missing.
NOTE: Be aware of warning messages at application start time of the form "node XYZ not found"; they are not to be ignored.
If you see such a message, the forwarding graph will not have been correctly constructed. Crashes in vlib_get_next_frame (...) are almost always the result of missing successor nodes. Typos in VLIB_REGISTER_NODE(...) macros can easily cause this kind of issue.
Forgetting to include a VLIB_INIT_FUNCTION(...) macro is another way to prevent graph arcs or other kinds of registrations from occurring.
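As a reminder of the registration pattern involved, here is a minimal sketch of a node registration and its init function. The names my_node, my_node_fn, my_init, and "my-node" are placeholders, and the fragment assumes the VPP source tree; it is not standalone:

```c
#include <vlib/vlib.h>

static uword
my_node_fn (vlib_main_t * vm, vlib_node_runtime_t * node,
            vlib_frame_t * frame)
{
  /* ... process the frame, enqueue buffers to next nodes ... */
  return frame->n_vectors;
}

/* A typo in .name or in a next_nodes entry here is exactly the kind
 * of error which produces "node XYZ not found" at startup. */
VLIB_REGISTER_NODE (my_node) =
{
  .function = my_node_fn,
  .name = "my-node",
  .vector_size = sizeof (u32),
  .n_next_nodes = 1,
  .next_nodes = {
    [0] = "error-drop",
  },
};

static clib_error_t *
my_init (vlib_main_t * vm)
{
  /* perform registrations here */
  return 0;
}

/* Without this macro, my_init never runs, and its registrations
 * (and any graph arcs they would create) silently never happen. */
VLIB_INIT_FUNCTION (my_init);
```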
2 Per-interface RX Redirection Details
Use the vnet_hw_interface_rx_redirect_to_node API to direct all packets from a certain interface to the graph node of your choice, or to cancel redirection:
vnet_hw_interface_rx_redirect_to_node
(vnet_main, hw_if_index, my_graph_node.index /* redirect to my_graph_node */);
or
vnet_hw_interface_rx_redirect_to_node
(vnet_main, hw_if_index, ~0 /* disable redirection */);
The my_graph_node index might hand "uninteresting" traffic to the normal code path, specifically ethernet-input, ip4-input / ip4-input-no-checksum, or ip6-input.
If you need to build a "bump-in-the-wire" application, use the rx redirect API to capture input packets and send them to my_graph_node. The my_graph_node dispatch function performs the required analysis, and sends traffic to the output node for a specific TX interface.
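Putting the pieces together, the enable/disable side of a bump-in-the-wire application might look like the following sketch. The names bump_in_the_wire_enable/disable and my_graph_node are placeholders, and the fragment assumes the VPP source tree; it is not standalone:

```c
#include <vnet/vnet.h>

/* Capture everything arriving on one interface and hand it to
 * my_graph_node; the node itself decides what to analyze and
 * where to forward it. */
static clib_error_t *
bump_in_the_wire_enable (vlib_main_t * vm, u32 hw_if_index)
{
  vnet_main_t *vnm = vnet_get_main ();

  vnet_hw_interface_rx_redirect_to_node
    (vnm, hw_if_index, my_graph_node.index);
  return 0;
}

static clib_error_t *
bump_in_the_wire_disable (vlib_main_t * vm, u32 hw_if_index)
{
  vnet_main_t *vnm = vnet_get_main ();

  /* ~0 cancels the redirection */
  vnet_hw_interface_rx_redirect_to_node (vnm, hw_if_index, ~0);
  return 0;
}
```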
3 Capturing Packets with Particular Ethertypes
Registering a node to handle packets with specific, currently unimplemented Ethertypes is easy. The following registration would send Cisco Discovery Protocol (ethertype 0x2000) packets to cdp_input_node:
ethernet_register_input_type (vm, ETHERNET_TYPE_CDP, cdp_input_node.index);
The easiest way to replace the existing ip4, ip6, or mpls stacks would be to modify vpp/vnet/main.c so that it doesn't link the current stack(s) from vnet, and to supply functionally equivalent nodes with the same names. For performance reasons, device drivers know how to send ip4 packets directly to, for example, ip4-input-no-checksum. It would be best not to make things more difficult by insisting on other driver-to-stack entry-point names.
4 Capturing for-us Packets for Newly-implemented IP Protocols
Take a look at the .../vpp/vnet/vnet/gre/node.c file. It registers the gre_input_node index to capture IP_PROTOCOL_GRE packets through the ip4_register_protocol API:
ip4_register_protocol (IP_PROTOCOL_GRE, gre_input_node.index);
The ip4 and ip6 implementations are quite symmetrical, so it makes sense that there is an ip6_register_protocol API. Here's an example from the .../vpp/vnet/vnet/l2tp/decap.c source file:
ip6_register_protocol (IP_PROTOCOL_L2TP, l2t_decap_node.index);
5 Capturing ip-for-us Packets Sent to Specific UDP dst Ports
Use the udp_register_dst_port API. Here's an example from .../vpp/vnet/vnet/vxlan/vxlan.c, which sends ip4-udp-for-us packets sent to udp dst port 4789 to the vxlan_input_node:
udp_register_dst_port (vm, UDP_DST_PORT_vxlan,
vxlan_input_node.index, 1 /* is_ip4 */);
ip4 and ip6 have separate local-stack udp dst port decode tables.
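Because the decode tables are separate, handling a port for both address families takes two registrations. A sketch, using an assumed port value and node name; this fragment belongs inside an init function in the VPP source tree:

```c
/* Sketch: register the same node for a udp dst port over both
 * address families. 9999 and my_udp_node are assumed placeholders,
 * not real VPP symbols. */
udp_register_dst_port (vm, 9999, my_udp_node.index, 1 /* is_ip4 */);
udp_register_dst_port (vm, 9999, my_udp_node.index, 0 /* is_ip4 */);
```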
6 Dispatch Function Hackery
It may seem attractive to change the vlib_node_runtime_t dispatch function pointer in arbitrary ways. One can imagine adding an API - call it vlib_update_node_runtime(...) - which tracks down the entire set of graph / node runtime replicas (in the multi-threaded case, the node graph is replicated per worker thread), changes the dispatch function, and returns the previous dispatch function.