Monday, January 1, 2018

Acceleration Techniques for Data-Intensive VNFs


Recently I have been investigating acceleration techniques from a virtualization point of view, and how they can satisfy the need for speed of data-intensive VNFs.

Although virtualization brings flexibility, resource management and scalability, it adds a resource overhead: the compute resources that must be dedicated to running the hypervisor itself.

Virtualization also introduces Open vSwitch (OVS), the component responsible for network bridging and routing in the virtualization domain.

Historically, the OpenStack framework was built for IT and web applications. It was later adopted by telecom vendors as the telco cloud framework to implement the NFV shift in the telecom industry.

OVS is not a carrier-grade component and represents a bottleneck for data-intensive VNFs.

Understanding the data path in a virtualization environment

Taking VMware technology as a reference, the figure below represents the data path in a virtualization environment.



It is important to mention that the speed of a network is determined by its slowest path.

The data path in a virtualized environment relies on many compute resources:

·       Network interfaces (NICs)

·       Processors (CPUs)

·       Memory

·       Buses

The network path consists of:

    • The physical NIC (pNIC)
    • A process that transports traffic from the pNIC to the VM (the Rx thread)
    • A process that sends traffic from the VM to the network (the Tx thread)
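
To put the challenge in numbers: at a 10 Gbit/s line rate with 64-byte frames, a NIC delivers roughly 14.88 million packets per second, which leaves a per-packet budget of only about 67 ns, i.e. a few hundred CPU cycles, for the entire software path between the pNIC and the VM.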

This representation describes the network path in a VMware environment. To accelerate network traffic in a VMware environment, you basically have to dedicate a physical core to the VMXNET3 process. For the moment this is the only option, pending more options from VMware in the next release of its ESXi hypervisor. Of course, SR-IOV can be an option, since it bypasses the hypervisor layer, but that is something VMware architects do not recommend at all.

In an OpenStack environment, OVS is the bottleneck component for data-intensive VNFs.

Standard OVS is built from three main components:

  • ovs-vswitchd: a user-space daemon that implements the switch logic
  • a kernel module (the fast path) that processes received frames based on a lookup table in kernel space
  • ovsdb-server: a database server that ovs-vswitchd queries to obtain its configuration
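
The split between the kernel fast path and the user-space daemon is the key to how OVS performs, and to where it breaks down. The following self-contained C sketch is a simplified model of that split (illustrative code only, not actual OVS code; the flow key and the switching decision are stand-ins): the first packet of a flow misses the fast-path table and triggers an upcall to the daemon, which installs an entry so that subsequent packets of the flow stay in the fast path.

    /* Simplified model of the OVS datapath split described above. */
    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    #define TABLE_SIZE 256

    struct flow_key   { unsigned src, dst; };   /* stand-in for a real 5-tuple */
    struct flow_entry { bool used; struct flow_key key; int out_port; };

    static struct flow_entry table[TABLE_SIZE]; /* models the kernel flow table */

    static unsigned hash(struct flow_key k)
    {
        return (k.src * 31u + k.dst) % TABLE_SIZE;
    }

    /* Models ovs-vswitchd: the full switch logic runs once per flow, then
     * the decision is cached in the fast-path table. */
    static int upcall(struct flow_key k)
    {
        int out_port = (int)(k.dst % 4);        /* stand-in switching decision */
        struct flow_entry *e = &table[hash(k)];
        *e = (struct flow_entry){ true, k, out_port };
        return out_port;
    }

    /* Models the kernel module: one table lookup per received frame. */
    static int forward(struct flow_key k)
    {
        struct flow_entry *e = &table[hash(k)];
        if (e->used && memcmp(&e->key, &k, sizeof k) == 0)
            return e->out_port;                 /* hit: stay in the fast path */
        return upcall(k);                       /* miss: slow path, then cached */
    }

    int main(void)
    {
        struct flow_key k = { 10, 42 };
        printf("first packet  -> port %d (via upcall)\n", forward(k));
        printf("second packet -> port %d (cache hit)\n", forward(k));
        return 0;
    }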

OVS has two kinds of ports:

  • Outbound ports, which are connected to the physical NICs on the host using kernel device drivers
  • Inbound ports, which are connected to VMs. The VM guest operating system (OS) is presented with vNICs using the well-known VirtIO paravirtualized network driver (a simplified model of which is sketched below).
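
To give a feel for what a paravirtualized driver such as VirtIO does, here is a much-simplified model, not the real vring layout or API: the guest and the hypervisor share a ring of packet descriptors in memory, so no hardware NIC has to be emulated register by register. (Single producer, single consumer; a real implementation also needs memory barriers, omitted here.)

    #include <stdint.h>
    #include <stdio.h>

    #define RING_SIZE 8                      /* power of two */

    struct desc { void *addr; uint32_t len; };

    static struct desc ring[RING_SIZE];
    static volatile uint32_t head, tail;     /* producer / consumer indices */

    /* Guest side: post a packet buffer for the host to transmit. */
    static int guest_tx(void *pkt, uint32_t len)
    {
        if (head - tail == RING_SIZE)
            return -1;                       /* ring full: back-pressure */
        ring[head % RING_SIZE] = (struct desc){ pkt, len };
        head++;                              /* publish the descriptor */
        return 0;
    }

    /* Host side: drain posted descriptors and "transmit" them. */
    static void host_poll(void)
    {
        while (tail != head) {
            struct desc d = ring[tail % RING_SIZE];
            printf("host: sending %u-byte packet at %p\n", (unsigned)d.len, d.addr);
            tail++;                          /* return the slot to the guest */
        }
    }

    int main(void)
    {
        char pkt[64] = { 0 };
        guest_tx(pkt, sizeof pkt);
        guest_tx(pkt, sizeof pkt);
        host_poll();
        return 0;
    }
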
OVS was never designed with NFV in mind and does not meet some of the requirements we are starting to see from VNFs.

There are several acceleration techniques in a virtualized environment:

·       PCI pass-through

·       SR-IOV (Single Root I/O Virtualization)

·       OVS-DPDK

·       SmartNIC



Each technology has its own advantages and use cases.

SR-IOV is the most deployed technology in production today.
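
As an illustration, here is a minimal sketch of how SR-IOV virtual functions (VFs) are created through the standard Linux sysfs interface; the PF name eth0 and the VF count are assumptions, the NIC driver must support SR-IOV, and it must run as root. Each resulting VF appears as its own PCI function that can be passed through to a VM as if it were a dedicated NIC.

    /* Hypothetical sketch: request 4 VFs on a physical NIC via sysfs.
     * Equivalent to: echo 4 > /sys/class/net/eth0/device/sriov_numvfs */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        const char *path = "/sys/class/net/eth0/device/sriov_numvfs";

        int fd = open(path, O_WRONLY);
        if (fd < 0) {
            perror("open sriov_numvfs");
            return 1;
        }
        /* Writing N asks the PF driver to create N virtual functions. */
        if (write(fd, "4", 1) != 1) {
            perror("write sriov_numvfs");
            close(fd);
            return 1;
        }
        close(fd);
        puts("requested 4 VFs on eth0");
        return 0;
    }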

The main drawback of SR-IOV is that it bypasses the virtualization layer. This pushes VNF and VIM vendors to move development and deployment toward OVS-DPDK.
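
OVS-DPDK moves the fast path out of the kernel and into user space, where dedicated cores busy-poll the NIC. The sketch below shows the shape of a DPDK poll-mode loop as a minimal echo forwarder; port 0 is assumed to be a DPDK-bound NIC, and port/queue configuration (rte_eth_dev_configure, queue setup, rte_eth_dev_start) is omitted for brevity, so treat this as the shape of the loop rather than a complete application.

    /* The point: the core busy-polls the NIC in user space, with no
     * interrupts and no kernel network stack in the data path. */
    #include <stdio.h>
    #include <rte_eal.h>
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define BURST_SIZE 32

    int main(int argc, char **argv)
    {
        /* Set up hugepages, pinned cores and PCI devices. */
        if (rte_eal_init(argc, argv) < 0) {
            fprintf(stderr, "EAL init failed\n");
            return 1;
        }

        uint16_t port_id = 0;                /* assumed DPDK-bound port */
        struct rte_mbuf *bufs[BURST_SIZE];

        for (;;) {
            /* Busy-poll the RX queue: returns immediately either way. */
            uint16_t nb_rx = rte_eth_rx_burst(port_id, 0, bufs, BURST_SIZE);

            /* Send the burst straight back out of the same port. */
            uint16_t nb_tx = rte_eth_tx_burst(port_id, 0, bufs, nb_rx);

            /* Drop whatever the TX queue could not absorb. */
            for (uint16_t i = nb_tx; i < nb_rx; i++)
                rte_pktmbuf_free(bufs[i]);
        }
        return 0;
    }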

Below is a summary of the main techniques.

And the table below summarizes the pros and cons of each of the three technologies:







