Cisco VTF is a Cisco soft switch built on the Cisco Vector Packet Processing
(VPP) technology.
The VPP platform is an extensible framework that provides production-quality
switch/router functionality. It is the open source version of the Cisco VPP
technology, a high-performance packet-processing stack that can run on
commodity CPUs.
The benefits of VPP are its high performance, proven technology, modularity,
flexibility, and rich feature set.
The VPP platform is built on a packet-processing graph. This modular approach
allows anyone to plug in new graph nodes, which makes extensibility rather
simple, and the plugins can be customized for specific purposes.
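
As an illustration of how a graph node is added, the sketch below shows
roughly how a node can be declared against the VPP source tree using the
VLIB_REGISTER_NODE macro. The node name ("my-sample") and the choice of
"ip4-lookup" as its next node are assumptions made for this example, not
part of any particular VTF plugin; treat it as a sketch to be built within
the VPP tree rather than a complete node implementation.

    #include <vlib/vlib.h>
    #include <vnet/vnet.h>

    /* Node function: receives a frame (a vector of buffer indices) and
     * processes the entire vector before handing it to the next node. */
    static uword
    my_sample_node_fn (vlib_main_t *vm, vlib_node_runtime_t *node,
                       vlib_frame_t *frame)
    {
      /* Per-packet work (parsing, lookups, rewrites) would go here. */
      return frame->n_vectors;
    }

    /* Registers the node with the packet-processing graph and wires its
     * single next arc to the existing "ip4-lookup" node. */
    VLIB_REGISTER_NODE (my_sample_node) = {
      .function = my_sample_node_fn,
      .name = "my-sample",
      .vector_size = sizeof (u32),
      .n_next_nodes = 1,
      .next_nodes = { [0] = "ip4-lookup" },
    };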
The VPP platform grabs all available packets from the RX rings to form a
vector of packets. The packet-processing graph is then applied, node by node
(including plugins), to the entire packet vector. Graph nodes are small,
modular, and loosely coupled, which makes it easy to include new graph nodes
and rewire existing ones.
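
The toy C program below is not VPP code; it only illustrates the
vector-processing pattern described above: a batch of packets is pulled off
a ring to form a vector, and each graph node runs over the whole vector
before the next node starts.

    #include <stddef.h>
    #include <stdio.h>

    #define VEC_MAX 256

    /* Toy packet: just an id and a byte length (illustrative only). */
    typedef struct { int id; size_t len; } packet_t;

    /* A graph node processes the entire vector before the next node runs. */
    typedef size_t (*node_fn) (packet_t *pkts, size_t n);

    static size_t
    ethernet_input (packet_t *pkts, size_t n)
    {
      (void) pkts;   /* L2 parsing for all n packets would happen here */
      return n;
    }

    static size_t
    ip4_lookup (packet_t *pkts, size_t n)
    {
      (void) pkts;   /* FIB lookup for all n packets would happen here */
      return n;
    }

    int
    main (void)
    {
      packet_t vec[VEC_MAX];
      size_t n = 4;              /* pretend 4 packets came off an RX ring */
      for (size_t i = 0; i < n; i++)
        vec[i] = (packet_t) { .id = (int) i, .len = 64 };

      /* The "graph": every node runs over the whole vector before the next. */
      node_fn graph[] = { ethernet_input, ip4_lookup };
      for (size_t g = 0; g < sizeof (graph) / sizeof (graph[0]); g++)
        n = graph[g] (vec, n);

      printf ("forwarded %zu packets\n", n);
      return 0;
    }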
A plugin can introduce new graph nodes or rearrange the packet-processing
graph. You can also build a plugin independently of the VPP source tree and
treat it as an independent component. A plugin can be installed by adding it
to a plugin directory.
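
To make a plugin loadable in this way, the shared object only needs to carry
a plugin registration. The sketch below shows the VLIB_PLUGIN_REGISTER macro
that VPP looks for when it scans its plugin directory at startup; the
description string here is an arbitrary example.

    #include <vlib/vlib.h>
    #include <vnet/plugin/plugin.h>
    #include <vpp/app/version.h>

    /* Marks this shared object as a VPP plugin. VPP scans its plugin
     * directory at startup and loads every .so carrying this symbol. */
    VLIB_PLUGIN_REGISTER () = {
      .version = VPP_BUILD_VER,
      .description = "Sample out-of-tree plugin",
    };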
VTF uses a remote plugin that binds into VPP through the VPFA (VPF agent).
The VPFA interacts with the VPP application using a low-level API, and
exposes a NETCONF/YANG-based API so that remote devices can program the VTF
through the VPFA.
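
The sketch below is purely illustrative of this split. The function names
and the message contents are hypothetical (they are not the actual VPFA API
or its YANG models); it only shows the pattern of a northbound NETCONF/YANG
request being translated into a southbound low-level call toward VPP.

    #include <stdio.h>

    /* Stub standing in for a low-level VPP API call (hypothetical name). */
    static int
    vpp_api_create_vhost_user_if (const char *sock_path)
    {
      printf ("programming VPP: vhost-user socket %s\n", sock_path);
      return 0;
    }

    /* Stub standing in for a NETCONF <edit-config> handler (hypothetical):
     * the agent receives a request from a remote controller and translates
     * it into one or more low-level calls toward VPP. */
    static int
    handle_edit_config (const char *xml_payload)
    {
      (void) xml_payload;            /* parsing elided in this sketch */
      return vpp_api_create_vhost_user_if ("/var/run/vpp/sock0.sock");
    }

    int
    main (void)
    {
      return handle_edit_config ("<interface>...</interface>");
    }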
vhost is a solution that allows a user space process to share a number of
virtqueues directly with a kernel driver. The transport mechanism in this
case is the ability of the kernel side to access the user space application
memory, along with a number of ioeventfds and irqfds that serve as the kick
mechanism. A QEMU guest still uses an emulated PCI device, as the control
plane is handled by QEMU. However, once a virtqueue has been set up, QEMU
uses the vhost API to pass direct control of that virtqueue to a kernel
driver.
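
A minimal sketch of the kick mechanism, assuming a Linux host with
/dev/vhost-net available: user space hands the vhost driver an eventfd via
the VHOST_SET_VRING_KICK ioctl. A complete setup (guest memory table, ring
addresses, an irqfd via VHOST_SET_VRING_CALL) is elided here for brevity.

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/eventfd.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/vhost.h>

    int
    main (void)
    {
      int vhost_fd = open ("/dev/vhost-net", O_RDWR);
      if (vhost_fd < 0) { perror ("open /dev/vhost-net"); exit (1); }

      /* Claim this vhost instance for the calling process. */
      if (ioctl (vhost_fd, VHOST_SET_OWNER, NULL) < 0)
        { perror ("VHOST_SET_OWNER"); exit (1); }

      /* The eventfd the guest-facing side signals when new buffers are
       * posted; the kernel driver services the virtqueue when it fires. */
      int kick_fd = eventfd (0, EFD_NONBLOCK);
      if (kick_fd < 0) { perror ("eventfd"); exit (1); }

      struct vhost_vring_file kick = { .index = 0, .fd = kick_fd };
      if (ioctl (vhost_fd, VHOST_SET_VRING_KICK, &kick) < 0)
        { perror ("VHOST_SET_VRING_KICK"); exit (1); }

      /* Memory table, ring addresses, and irqfd setup omitted. */
      close (kick_fd);
      close (vhost_fd);
      return 0;
    }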
In this model, the vhost_net driver passes guest network traffic to a TUN
device directly from the kernel side, improving performance significantly.
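
Continuing the previous sketch (same assumptions, and still omitting the
full ring setup that must happen before the backend can actually start),
attaching a TAP device as the backend of a virtqueue is done with the
VHOST_NET_SET_BACKEND ioctl. The interface name below is hypothetical.

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/if.h>
    #include <linux/if_tun.h>
    #include <linux/vhost.h>

    /* 'vhost_fd' is assumed to be the /dev/vhost-net descriptor from the
     * previous sketch, fully configured with memory table and rings. */
    static int
    attach_tap_backend (int vhost_fd)
    {
      int tap_fd = open ("/dev/net/tun", O_RDWR);
      if (tap_fd < 0) { perror ("open /dev/net/tun"); return -1; }

      struct ifreq ifr;
      memset (&ifr, 0, sizeof (ifr));
      ifr.ifr_flags = IFF_TAP | IFF_NO_PI;
      strncpy (ifr.ifr_name, "vtf-tap0", IFNAMSIZ - 1); /* hypothetical */
      if (ioctl (tap_fd, TUNSETIFF, &ifr) < 0)
        { perror ("TUNSETIFF"); return -1; }

      /* Hand the TAP fd to vhost_net as the backend of virtqueue 0, so
       * guest traffic moves to/from the TAP device inside the kernel. */
      struct vhost_vring_file backend = { .index = 0, .fd = tap_fd };
      if (ioctl (vhost_fd, VHOST_NET_SET_BACKEND, &backend) < 0)
        { perror ("VHOST_NET_SET_BACKEND"); return -1; }
      return 0;
    }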
In the above implementation, the guest NFV application writes packets
directly into its TX rings, which are shared through a common vhost socket
as the RX rings on the VPP side. VPP grabs these packets from the RX ring
buffer and forwards them using the vector graph it maintains.