Over the past 15 years the computing industry has been revolutionized by the hypervisor as virtualization technology has grown to become nearly ubiquitous in enterprise server deployments. While the hypervisor was focused on virtualizing the server, the process of doing so meant that it would have to aggregate the network connectivity of multiple virtual machines that shared a single network interface card, and thus the virtual switch was born.
Prior to the growth of the hypervisor, networking companies had decided that simply punting packets around was too trivial a role for the network, which had long been dismissed as simple "plumbing". Network visionaries from companies like Cisco Systems envisioned a world where the network solved all sorts of problems on behalf of servers - why should a server have to worry about information it doesn't have when it is connected to a network that is connected to ... everything! The network's central location and its connectedness to every device promised a greater and more lucrative role for network technology, keeping it closer to the minds of check-signing executives: servers could simply punt their packets blindly into the network, which would peer into each packet like a seer into a crystal ball and automatically do whatever the application needed.
Thus began the movement of the dumb network switch toward a robust appliance that could offer services to the application, from out-of-band security to quality of service to a future that could include numerous value-added services, namespace networking and beyond. While the vision invokes sugar plums and fairy tales, the reality of how it is implemented ... doesn't. To deliver on this vision the network would have to grow beyond its packet-punting roots to become capable of delivering application policy - and in practice this has meant simply using access lists to filter and mark packets based primarily on packet header information.
In perhaps the most common example of application services delivered by the network switch, an access list can ensure an application communicates only on the network ports it is supposed to, offering an additional layer of security - with the key benefit that the enforcement point sits in the network, so a compromised host could do nothing to weaken the barrier standing between a hacker and the wealth of corporate assets. The same access list could also be used to identify the network performance requirements for that application - and then why not other services too?
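To make the model concrete, here is a minimal Python sketch of first-match access-list evaluation as switches typically perform it; the rule set, addresses, and field names are hypothetical and purely for illustration.

```python
# Sketch of switch-style ACL evaluation: rules match on packet header
# fields, the first match wins, and an implicit deny follows the list.
# Rules and field names below are hypothetical, for illustration only.

ACL = [
    # (src_ip, dst_ip, protocol, dst_port, action)
    ("10.0.1.5", "10.0.2.10", "tcp", 1433, "permit"),  # app -> database
    ("10.0.1.5", "any",       "tcp", 443,  "permit"),  # app -> HTTPS
]

def evaluate(packet: dict) -> str:
    """Return 'permit' or 'deny' for a packet; first matching rule wins."""
    for src, dst, proto, port, action in ACL:
        if (src in ("any", packet["src_ip"])
                and dst in ("any", packet["dst_ip"])
                and proto == packet["protocol"]
                and port == packet["dst_port"]):
            return action
    return "deny"  # implicit deny, as on most switch ACLs

print(evaluate({"src_ip": "10.0.1.5", "dst_ip": "10.0.2.10",
                "protocol": "tcp", "dst_port": 1433}))  # permit
print(evaluate({"src_ip": "10.0.1.5", "dst_ip": "10.0.2.10",
                "protocol": "tcp", "dst_port": 22}))    # deny
```

Note that every field the rules match on comes from the packet itself - which is exactly the weakness discussed below once the trusted switchport is out of the picture.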
Network visionaries planned for a world where every new application would live on a server connected to a switchport, and this handoff would be the linchpin for the network to learn about the application and deliver value-added services. The traditional best practice for deploying a network "properly" is that when it comes time to deploy a new application, the network administrator learns what network characteristics the application needs and crafts an access list and QoS policy customized for it. In my experience I have rarely, if ever, seen an organization (outside of limited, highly specialized deployments) successfully implement and manage this vision of a custom set of access lists for every single application. In most organizations the cumbersome nature of this model means it is delivered only to the most sensitive and public-facing applications, and even then it is generally applied at points of network aggregation or via firewalls at network access points rather than at the ingress switch - a change that severely limits the utopian vision of the application-centric network. Some newer implementations propose templates to make this model more manageable, but in my opinion it is still a fundamentally broken model: skilled workers manually craft static information into a template that can be automatically applied, which simplifies the management of this function but does nothing to improve the underlying methodology and its inherent weaknesses and limitations.
The introduction of the virtual switch posed a fundamental challenge to the network industry by driving a vswitch-shaped wedge between the application and the physical network - and 15 years later the industry has still not settled on norms for access-layer network policy. In the network incumbents' vision of a network-centric world, the access switch - the first switch that directly connects to a server - is a critical component. For the vision to work, that very first switch would take the time to learn about the application and change bits in each data packet so that other network devices could easily identify and enforce the needed services. With the introduction of the hypervisor, the seemingly innocuous vswitch stole the critical role of 'first switch' away from the physical network.
Once a hypervisor switch was in place, it fundamentally changed the network's ability to provide application policies the way it had once intended. One key example is the inherent trust relationship between a physical server and the network. If an application runs directly on a non-virtualized server, network policy can be associated with the switchport the server's NIC is physically plugged into - one thing a hacker controlling a compromised host could never change. With the virtual switch in place, the physical network must instead apply its policies based on information inside the packets sent by the virtual machine, such as the source IP or MAC address, rather than the trusted switchport - a critical blow, as a hacker controlling a compromised host can easily change these identifiers and thereby derail network security.
When hypervisors were new, enterprise network engineers saw this as no great loss: nobody had ever really been able to implement per-application services and network security anyway, and virtualization seemed conducive to focusing security policy around network aggregation and access points. As for QoS - who cares, that is what they make bigger pipes for, right? But the vswitch stood directly in the way of the networking incumbents' utopian vision which, though rarely used in practice, was the hook used to lure customer executives into paying exorbitant margins for networking equipment.
While methods to let the network bypass the hypervisor have been proposed, there is one primary reason the virtual switch has effectively trumped these alternatives: the cloud. While this battle played out around the network access layer, each leading vendor was also fighting another, bigger battle - the fight over who will become dominant in the cloud. This focused on one key problem: if a specific piece of hardware is needed to deliver critical functionality, then a virtual machine using that function can only be moved to an identical hardware environment - an ideal lock-in tactic for vendors, but a restriction that application developers and IT consumers clearly do not want. The resulting compromise allowed complete access to the virtualization suite's APIs, enabling those with a more hardware-centric vision to develop third-party extensions to the hypervisor.
Another newer key attribute of hypervisor switches is the ability to tunnel directly between hypervisors, providing the makings of a completely virtualized network that can move with the application if it is developed without hardware contingencies, as VMware is doing with its Networking and Security suite. This is becoming a critical capability as applications are increasingly no longer defined by a single server but are instead composed of different services that spread a single application across multiple servers over a network fabric. That takes the portability requirement to a new level: migrating the virtual machine alone is not sufficient for newer distributed applications. Network virtualization with technologies like VXLAN/NVGRE/NVO3, however, can enable a distributed application cluster with specific network requirements to become as easily portable as a single virtual machine.
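As a small illustration of how lightweight this encapsulation is, here is a Python sketch that builds the 8-byte VXLAN header defined in RFC 7348 and prepends it to an inner Ethernet frame; the outer UDP/IP headers a real hypervisor would add, and the frame contents, are omitted as out of scope.

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header from RFC 7348:
    1 byte of flags (the I bit, 0x08, marks a valid VNI), 3 reserved
    bytes, then the 24-bit VNI and a final reserved byte."""
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    return struct.pack("!II", 0x08 << 24, vni << 8)

def encapsulate(vni: int, inner_frame: bytes) -> bytes:
    """Prepend the VXLAN header to an inner Ethernet frame.
    (The outer UDP/IP headers the hypervisor adds are omitted here.)"""
    return vxlan_header(vni) + inner_frame

hdr = vxlan_header(5000)            # a hypothetical tenant segment ID
print(len(hdr))                     # 8 bytes of overhead per packet
print(int.from_bytes(hdr[4:7], "big"))  # the VNI travels in the header
```

The 24-bit VNI is the point: it gives roughly 16 million isolated segments that travel with the packet between hypervisors, independent of the physical network underneath.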
These factors, along with a host of other features attractive to application and platform developers, have now essentially cemented the place of the virtual switch in the server access layer, and the battle has shifted to which vendor's hypervisor switch a customer will choose. In one corner we find VMware, fresh off its acquisition of Nicira; in the other we find Cisco, which has doubled down on its Nexus 1000v.
The real key to deciding between these two contenders should have little to do with the virtual switch itself and instead focus on each solution's required hardware and the suite of value-added services each vendor's ecosystem will provide. VMware's vDS ensures access and compatibility with a host of vCloud and NSX capabilities, whereas the Nexus 1000v limits access to vCloud features but provides access and integration to the alternative features and services being built by Cisco. While this story is still playing out, it appears to me that VMware will focus on its vision of the software-defined data center, using general-purpose hardware with new software techniques to deliver a highly portable solution. As Cisco builds out its vision, I anticipate a greater focus on leveraging Cisco network hardware, building on earlier investments like OTV - and potentially even Insieme.
While settling on the role of the virtual switch was an important step, it was very much a first step. It can simply enable software like the Nexus 1000v to do the same things, in essentially the same way, that Cisco was trying to do before the rise of virtualization: a collection of customized per-application access lists that now have the added benefit of template-based provisioning. Alternatively, it may open the doorway to entirely fresh approaches to networking challenges - perhaps leveraging capabilities from applications, from other domains of infrastructure, or from a combination of the two, all coming together in a more cohesive and integrated way - something VMware seems particularly well positioned to drive. As evidence of a fresh approach we need look no further than the very basis of VMware's standard vswitch: it is notably not a learning switch. It has no need to dynamically learn MAC addresses because the virtualization system already knows exactly which hosts will be connected to the vswitch; it already knows VM addresses and a whole host of other information about the application and the environment in which it lives. The key goal of the application-centric vision of networking is for the network to know more about the application. Without the vswitch, the network had to examine data packets to learn about the application; the virtualization suite instead brings a wealth of application metadata and platform-level capabilities that offer a new avenue for application/network integration far beyond what was possible in earlier models, bringing fascinating and powerful possibilities within arm's reach.
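The contrast between the two forwarding models can be sketched in a few lines of Python; the MAC addresses and port numbers below are made up for illustration, and both classes are simplified caricatures rather than any vendor's actual implementation.

```python
# A classic learning switch builds its table from observed traffic and
# floods unknown destinations; a hypervisor vswitch can be pre-populated
# because the platform already knows every VM's MAC and virtual port.

class LearningSwitch:
    def __init__(self):
        self.table = {}  # MAC -> port, built from observed traffic

    def forward(self, src_mac, dst_mac, in_port):
        self.table[src_mac] = in_port            # learn from the source
        return self.table.get(dst_mac, "FLOOD")  # flood if unknown

class StaticVSwitch:
    """The platform provisions the table up front: nothing to learn,
    nothing to flood, and a spoofed source MAC teaches it nothing."""
    def __init__(self, vm_table):
        self.table = dict(vm_table)  # supplied by the virtualization suite

    def forward(self, src_mac, dst_mac, in_port):
        return self.table.get(dst_mac, "DROP")

ls = LearningSwitch()
print(ls.forward("aa:aa", "bb:bb", 1))  # FLOOD: dst not yet learned
vs = StaticVSwitch({"aa:aa": 1, "bb:bb": 2})
print(vs.forward("aa:aa", "bb:bb", 1))  # 2: known from platform metadata
```

The static table is a small example of the larger point: the platform's prior knowledge of its VMs replaces inference from traffic.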
The battle around network virtualization is still in its infancy, but it will become a key battleground that shapes enterprise IT strategy from the physical equipment all the way into the cloud. I will go into more depth on some of the underlying technologies in a follow-on post, but it is still early, and I will be excited to watch these emerging technologies as they mature and hopefully grow to offer applications and networks fundamentally new types of services. Game on.