Virtual Ethernet Port Aggregator (VEPA) moves switching out of the server back to the physical network and makes all virtual machine traffic visible to the external network switch, freeing up server resources to support virtual machines.
This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter’s approach.
Virtual Ethernet Port Aggregator (VEPA), part of the emerging IEEE 802.1Qbg standardization effort, is designed to reduce the complexities associated with highly virtualized deployments. Examining the traditional virtual switch approach and contrasting it with VEPA provides some insight into the deployment challenges and considerations around virtualization and networking.
When multiple virtual machines (each consisting of an operating system and applications) sit on a hypervisor on a physical host, the VMs communicate with each other and with the outside world through a virtual switch, or vswitch. The vswitch is, in effect, a Layer 2 switch, typically running as software within the hypervisor. Nearly every hypervisor has a virtual switch built in, though its capabilities vary from hypervisor to hypervisor.
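On Linux, the in-kernel bridge offers a minimal illustration of the vswitch idea. The sketch below is illustrative only; the interface names (eth0, vm0-tap, br0) are placeholders, and the commands require root privileges:

```shell
# Create a software bridge that acts as a Layer 2 vswitch
ip link add name br0 type bridge
ip link set br0 up

# Attach the physical uplink and a VM's tap interface to the bridge
# (eth0 and vm0-tap are placeholder names for this sketch)
ip link set eth0 master br0
ip link set vm0-tap master br0

# The bridge now forwards frames between the VM and the outside
# world, learning MAC addresses like a physical Layer 2 switch
bridge fdb show br br0
```

Production hypervisor vswitches add features on top of this basic model (VLAN tagging, per-port policy, distributed management), but the forwarding principle is the same.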
The virtual switch moves networking into the server realm, bringing with it the need to re-test, re-qualify and re-deploy traditional network-based tools and solutions for the virtualized environment. This change can in some ways actually serve as an impediment to the rapid scaling and adoption of virtualization.
For example, traditional network and server operations teams have been distinct, each with its own processes, tools and realm of control. With networking moving into the server by way of the virtual switch, a simple task such as provisioning virtual machines can now require additional coordination between these teams to ensure that the virtual switch configuration stays consistent with the physical network configuration.
Troubleshooting and monitoring inter-VM communications becomes a challenge because inter-VM traffic is never visible to the physical network. And as the number of virtual machines on a single server scales from 8 to 12 VMs today to, say, 32 to 64 in the near future, the need to secure virtual machines within the server from each other, and from external threats, becomes a serious consideration.
From firewalls to IDS/IPS, traditional network-based appliances and tools need to be re-developed, re-certified and re-deployed for these highly virtualized environments to ensure that inter-VM communication via the vswitch meets compliance and security requirements.
The lack of standards around virtual switches further compounds this challenge, adding to the overhead of training, interoperability and management across different hypervisor technologies. In effect, the physical server is becoming a full-blown network environment with its own set of interoperability, deployment, certification and test considerations.
Interestingly, this trend runs counter to one of the premises of virtualization which is to efficiently utilize excess capacity within the servers for applications. The “network within the server” now risks becoming a significant consumer of that very same excess capacity, taxing CPU and memory resources.
The VEPA approach
Several solutions have been proposed to address these challenges. Among them, VEPA has emerged as a promising alternative to the virtual switch, both in the standards track and among a broad set of industry vendors.
A VEPA in effect takes all the traffic generated from virtual machines on a server and moves it out to the external network switch. The external network switch in turn provides connectivity between the virtual machines on the same physical server as well as to the rest of the infrastructure.
This is accomplished by incorporating a new forwarding mode on the physical switch which allows traffic to “hairpin” back out the same port it came in on, to facilitate inter-VM communication on the same server.
The “hairpin” mode (or “reflective-relay” as it is also called) reflects a single copy of the packet back to the destination or target virtual machine on the server as and when needed. For broadcast or multicast traffic, the VEPA provides packet replication to each VM locally on the server.
Traditionally, this “hairpin” mode behavior was not supported by most network switches due to the possibility of causing loops and broadcast storms in a non-virtualized world. However, many network vendors are beginning to support this behavior to address virtual machine switching, using a simple software or firmware upgrade.
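While 802.1Qbg targets physical switches, the Linux bridge exposes the same reflective-relay behavior as a per-port flag, which is useful for experimenting with the concept. A hypothetical sketch, assuming a bridge port named vm0-tap (a placeholder) and root privileges:

```shell
# Enable hairpin (reflective-relay) mode on a bridge port so that
# frames received on the port may be forwarded back out the same port
bridge link set dev vm0-tap hairpin on

# Verify the flag: "hairpin on" should appear among the port's
# detailed attributes
bridge -d link show dev vm0-tap
```

Without this flag, a standards-compliant bridge would drop such frames to prevent loops, which is exactly the behavior the article describes for traditional, non-virtualized switches.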
This behavior is also being standardized as part of the IEEE working group 802.1Qbg. A VEPA can be implemented on the server either in software as a thin layer in the hypervisor, or can be implemented in hardware in NIC cards, in which case it can be used in conjunction with PCIe I/O virtualization technologies such as SR-IOV. An example of a software based VEPA implementation is available in the Linux KVM hypervisor.
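In the Linux KVM stack, the software VEPA mentioned above is exposed through the macvtap driver: creating an interface in VEPA mode forces all VM traffic, including VM-to-VM traffic on the same host, out to the adjacent switch. A sketch, assuming eth0 is the physical uplink (a placeholder name) and root privileges:

```shell
# Create a macvtap interface in VEPA mode on top of the physical NIC.
# In this mode, even traffic between two local VMs is sent to the
# external switch, which must hairpin it back.
ip link add link eth0 name macvtap0 type macvtap mode vepa
ip link set macvtap0 up

# The VM is then attached to the corresponding character device
# (/dev/tapN, where N is the interface index shown here)
ip link show macvtap0
```

The same driver also offers a "bridge" mode that switches local traffic in software, which makes the trade-off between the two approaches easy to compare on one host.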
A VEPA in effect moves switching out of the server and back into the physical network and makes all virtual machine traffic visible to the external network switch. By moving virtual machine switching back into the physical network, a VEPA based approach makes existing network tools and processes work consistently across both virtualized and non-virtualized environments as well as across hypervisor technologies.
Network-based appliances such as firewalls and IDS/IPS, as well as mature network switch functionality like Access Control Lists (ACLs), Quality of Service (QoS), and port mirroring, all become immediately available for VM traffic and inter-VM switching, thus reducing or eliminating the need to qualify, test and deploy costly new virtual network appliances.
Additionally, a VEPA brings network administrative control back to the network administrator, providing a single point of control for provisioning, monitoring, and troubleshooting all virtual machine related networking functions.
Offloading the network functions from the server to the network switch also frees up server resources for applications, while providing wire-speed switching between both virtualized and non-virtualized servers, from 1Gbps to 10Gbps to 40Gbps and, eventually, 100Gbps.
As a consequence, the VEPA based approach has the promise of being able to scale up virtualization deployments, reduce complexity and cost, and speed up the adoption of virtualization.
Despite the advantages of a VEPA-based approach, there are select environments where switching inter-VM traffic within the server may still be desirable. For example, a physical server may be heavily loaded with virtual machines that communicate extensively with each other, making it preferable to keep inter-VM traffic within the server to minimize latency.
In such scenarios, one possible approach may be to bypass the hypervisor based software virtual switch and leverage the switching capabilities that newer NICs are providing in hardware, based on upcoming I/O virtualization capabilities such as SR-IOV. Still, the operational complexity of such an approach along with the security and cost considerations need to be carefully weighed before fully operationalizing the “network within the server” model.
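On Linux, NIC-level switching via SR-IOV typically starts with carving the physical NIC into virtual functions through the kernel's sysfs interface. A hedged sketch; the device name and VF count are illustrative, and an SR-IOV-capable NIC, driver and root privileges are assumed:

```shell
# Ask the driver to create 4 virtual functions (VFs) on the
# physical NIC (path and count are illustrative)
echo 4 > /sys/class/net/eth0/device/sriov_numvfs

# Each VF appears as its own PCIe function that can be passed
# through to a VM; traffic between VMs on the same NIC can then
# be switched in the NIC's embedded hardware switch
lspci | grep -i "Virtual Function"
```

This is the "network within the server" model in hardware form, and the operational caveats in the paragraph above apply to it just as they do to the software vswitch.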
With server virtualization gaining broad adoption, complexities of switching traffic between virtual machines both within a server and across servers are increasing. A VEPA based approach to inter-VM switching provides an interesting and attractive alternative to the traditional virtual switch based approach. Standards efforts are underway to provide the capabilities needed in the network and server infrastructure to support VEPA.
About Extreme Networks: Extreme Networks provides converged Ethernet network infrastructures that support data, voice and video for enterprises and service providers. The company's network solutions feature high performance, high availability and scalable switching solutions that enable organizations to address real-world communications challenges and opportunities.