An interesting occurrence at this year’s Open Networking Summit was around the definition of Software-Defined Networking (SDN). At the opening session, ONS chair Guru Parulkar highlighted something he called "SDN washing" – when networking vendors essentially take their existing technologies and try to re-label them as SDN products in an attempt to cash in on some of the hype and popularity that has arisen around OpenFlow and SDN. Since the first Open Networking Summit in 2011, the industry has seen no shortage of SDN washing; it was most noticeable early on, when the term SDN first became common. At that time, the initial response from most vendors seemed to be an attempt to demonstrate that each individually had a stranglehold on all things innovative, and that the academics pushing this new OpenFlow stuff really didn’t understand networking. According to most vendors at the time, ‘real SDN’ was whatever each vendor happened to have already been working on. But the folks at the ONF held firm on their definition, and as time has passed, vendors have accelerated their support for newer, more open, and more innovative efforts.
Dr. Parulkar laid out a clear definition of SDN and described the tangible benefits to the industry, network operators, and consumers if open, standard frameworks win out over proprietary, vendor-driven offerings.
And after the opening remarks, almost every presenter made a point of giving a definition of SDN that was nearly identical to the ONF’s – that is, until VMware’s Bruce Davie took the stage. Dr. Davie’s distinction was a bit unexpected; he did not contest the ONF’s definition of SDN, but instead proclaimed that SDN and network virtualization (via hypervisor overlays) are two entirely separate things. What Dr. Davie referred to as ‘network virtualization’ involves recent advancements that enable server hypervisors to establish tunnels to each other, thereby creating a separate virtual network that is largely independent of the physical network on which it happens to reside. All leading virtualization and cloud software providers are now incorporating hypervisor tunneling into their solutions, and this effort is similar to, but independent of, the advancement of OpenFlow and SDN in physical networking hardware.
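To make the overlay idea concrete, here is a minimal sketch of the kind of encapsulation these hypervisor tunnels use – modeled on the VXLAN frame format, though this is an illustrative toy, not any vendor’s implementation. The key point: the VM’s Ethernet frame is wrapped with a small header carrying a virtual network identifier (VNI), and the physical network only ever sees the outer headers.

```python
import struct

VXLAN_FLAGS = 0x08000000  # "I" flag set: the 24-bit VNI field is valid

def vxlan_encapsulate(vni: int, inner_frame: bytes) -> bytes:
    """Prepend an 8-byte VXLAN-style header to an inner Ethernet frame.

    In a real deployment the result rides inside a UDP datagram between
    two hypervisor tunnel endpoints (VTEPs); the switches in the middle
    forward it like any other IP traffic.
    """
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI is a 24-bit identifier")
    # 32 bits of flags/reserved, then 24-bit VNI + 8 reserved bits
    header = struct.pack("!II", VXLAN_FLAGS, vni << 8)
    return header + inner_frame

def vxlan_decapsulate(packet: bytes) -> tuple:
    """Strip the header, returning (vni, inner_frame)."""
    flags, vni_field = struct.unpack("!II", packet[:8])
    if not flags & VXLAN_FLAGS:
        raise ValueError("VNI flag not set")
    return vni_field >> 8, packet[8:]

# Two VMs on the same virtual network (VNI 5001) can exchange frames
# even though their hosts sit anywhere on the physical IP fabric.
frame = b"\x00" * 12 + b"\x08\x00" + b"payload"   # toy Ethernet frame
vni, inner = vxlan_decapsulate(vxlan_encapsulate(5001, frame))
assert (vni, inner) == (5001, frame)
```

Because the VNI travels with every frame, tenant separation no longer depends on what the physical switches are configured to do.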
While some may see this distinction as splitting hairs, it is a significant one – and for enterprises in particular, a little controversy to drive the point home may be worthwhile. SDN was never supposed to be just about OpenFlow, but ‘SDN washing’ has occurred in many cases where switch vendors appeared to want to slow the pace of innovation in SDN to prolong the sales of previous-generation product lines. This caused an increased focus on OpenFlow specifically as THE example of SDN. ONS is no longer a little academic conference at Stanford; it now has tremendous global visibility. Its popularity has risen so quickly that it has gained widespread attention in enterprise IT, even though many of the topics discussed at the conference are still in development, well before the technologies are ready for enterprise consumption.
These factors have contributed to increasing confusion among enterprises as to when, how, and where SDN benefits will be delivered. But for the enterprise, one thing is absolutely clear: the first wave of SDN value in the enterprise data center will be delivered via network virtualization overlays. If you are in enterprise IT and are looking to take advantage of the compelling benefits of SDN, don’t wait to hear what is happening with OpenFlow hardware or the next wave of physical switches. If you haven’t started planning an overlay network for your virtualized environment, my advice would be to do it today… heck, if you could rewind time and start planning this last year, you should. The majority of the value enterprises can get from SDN today can be delivered through hypervisor overlay networks. I would go a step further and say the future evolution of SDN technologies cannot be delivered effectively until enterprises first roll out hypervisor overlay networks. Why? Because hypervisor overlay networks will fundamentally change the role of the physical network and the value it provides.
When VMware and server virtualization technologies started becoming popular over a decade ago, the traditional method of providing network services fundamentally changed. For servers, there was an immediate positive impact, but in the network, policy and service configuration was never successfully adapted and is still optimized for the same methods used in the client-server era. The result has been a hodge-podge of competing new ideas and a lot of bubble-gum-and-band-aid approaches to providing network services to virtual servers.
Prior to server virtualization, there was no such thing as vMotion. Once a server was configured it remained relatively static, making it easy to associate network policy with a physical port on a physical switch. And from the beginning of the client-server era, this was how networks were configured: a network engineer figured out the static application profile of the server, then put a static network configuration on the physical switch port(s) the server was connected to. The network had no understanding of what was actually connected to a port; it simply applied the configured policy to whatever traffic arrived through it. The physical switch a server directly connects to, known as the ‘access layer’ (or access switch), has always had a special purpose – it was THE location at which value-added network services were applied. The access switch can change bits in each packet to classify how other networking devices in the path between a server and a client should treat the traffic. This pattern placed added significance on the access switch – much of the value networking vendors provide to keep prices and margins high on networking switches revolves around services applied at this interface.
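A toy model makes the port-centric approach clear (the port names and policy fields here are hypothetical, not any switch vendor’s API): policy lives on the physical port, and the switch applies it blindly to whatever traffic shows up there.

```python
# Toy model of classic access-layer behavior: policy is keyed by the
# physical port, not by what is actually plugged into it.
PORT_POLICY = {
    "Gi1/0/1": {"vlan": 10, "dscp": 46},  # engineer pre-configured this port for a voice server
    "Gi1/0/2": {"vlan": 20, "dscp": 0},
}

def apply_ingress_policy(port: str, packet: dict) -> dict:
    """Tag and mark a packet according to the static policy on its ingress port.

    The switch has no idea which server (or VM) sent the packet; it only
    knows which port the packet arrived on.
    """
    policy = PORT_POLICY.get(port, {"vlan": 1, "dscp": 0})  # default: VLAN 1, best effort
    marked = dict(packet)
    marked["vlan"] = policy["vlan"]
    marked["dscp"] = policy["dscp"]
    return marked

marked = apply_ingress_policy("Gi1/0/1", {"src": "10.0.10.5"})
assert marked["dscp"] == 46  # expedited treatment, regardless of the actual sender
```

This model works fine as long as exactly one static workload sits behind each port – which is precisely the assumption virtualization breaks.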
However, when server virtualization is used, it fundamentally changes the relationship between the server and the physical access switch. First, virtualization increased the number of applications running on a physical server. In the past, it was common to see a 1:1 application-to-server ratio; today, it is common to see 20 different virtual machines running on a single physical server. Because physical networking devices are, even today, still optimized to provide network services based on a physical switch port, traditional switches are incapable of intelligently distinguishing which traffic is being sent by which VM.
Next, and perhaps more importantly, virtual machines are conducive to dynamic migration. So not only is there now a large number of logical servers connected to a single physical switch port, but these servers also do not remain static – another thing traditional networking devices are completely incapable of dealing with. Today, to move a virtual machine, the network team needs to open a trouble ticket and manually move the configuration associated with that virtual machine to the switch ports of the new physical server. And all of that gets REALLY complicated: if we are moving the virtual machine to a physical server that already hosts 19 other VMs, the policy being moved must be rationalized with the policy that already exists on that switch port – which may not even be possible.
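Here is a sketch of what that rationalization problem looks like, reduced to just VLAN membership (the port structure and function names are illustrative assumptions, not a real switch API). Moving a VM means adding its VLAN requirement to the destination port, pruning it from the source port – and sometimes discovering the destination port’s existing policy simply cannot accommodate it.

```python
# Hypothetical model of "moving the network config with the VM" while
# policy is still tied to physical switch ports.
# ports maps port name -> {"mode": "access"|"trunk", "vlans": set, "vms": {vm: vlan}}

def migrate_vm_policy(ports: dict, vm: str, vm_vlan: int,
                      src_port: str, dst_port: str) -> None:
    """Move a VM's VLAN requirement from one physical port to another."""
    dst = ports[dst_port]
    if dst["mode"] == "access" and dst["vlans"] != {vm_vlan}:
        # An access port carries exactly one VLAN; the VMs already behind it
        # may need a different one -- rationalization may be impossible.
        raise ValueError(f"{dst_port} carries VLANs {dst['vlans']}, VM needs {vm_vlan}")
    dst["vlans"].add(vm_vlan)
    dst["vms"][vm] = vm_vlan

    src = ports[src_port]
    del src["vms"][vm]
    if vm_vlan not in src["vms"].values():
        src["vlans"].discard(vm_vlan)  # prune only if no remaining VM needs it

ports = {
    "Gi1/0/1": {"mode": "trunk", "vlans": {10}, "vms": {"vm-a": 10}},
    "Gi1/0/2": {"mode": "trunk", "vlans": {20}, "vms": {"vm-b": 20}},
}
migrate_vm_policy(ports, "vm-a", 10, "Gi1/0/1", "Gi1/0/2")
assert ports["Gi1/0/2"]["vlans"] == {10, 20}
assert ports["Gi1/0/1"]["vlans"] == set()
```

Even this stripped-down version needs conflict detection; real policy (ACLs, QoS marking, security settings) only makes the merge harder.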
The result, in my experience, has been a significant proliferation of one-off approaches to application deployments, combined with a significant dumbing down of network service provisioning in virtualized environments. If you look in a textbook today, the best practices for configuring the data center network are identical to what they have been since the beginning of the client-server era. When preparing to deploy an application: figure out what network services the application needs, configure an access list to identify the different traffic classes, and then mark or block each packet so it gets the appropriate treatment. However, due to the immense complexity and the critical need for agility, very few enterprise IT departments even attempt this for the vast majority of applications. Server virtualization brought with it massive improvements in speed, agility, operational streamlining and beyond… but the network hasn’t been ready for that change.
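The textbook workflow above can be sketched as a first-match rule list – an ACL-style classifier that marks or blocks each packet. The rules and DSCP values here are made-up examples, not a recommended policy.

```python
# Classify-and-mark, textbook style: an ordered rule list where the
# first matching rule decides whether to mark or block the packet.
RULES = [
    # (protocol, dst_port, action) -- first match wins, like a real ACL
    ("udp", 5060, {"mark": 46}),    # voice signaling -> expedited forwarding
    ("tcp", 443,  {"mark": 26}),    # web app traffic -> assured forwarding
    ("tcp", 23,   {"block": True}), # telnet -> drop
]

def classify(packet: dict):
    """Return the packet with a DSCP mark applied, or None if blocked."""
    for proto, port, action in RULES:
        if packet["proto"] == proto and packet["dst_port"] == port:
            if action.get("block"):
                return None
            return {**packet, "dscp": action["mark"]}
    return {**packet, "dscp": 0}  # implicit default: best effort

assert classify({"proto": "tcp", "dst_port": 443})["dscp"] == 26
assert classify({"proto": "tcp", "dst_port": 23}) is None
```

Simple on paper – but multiply it by hundreds of applications, each needing its own rules on the right physical ports, and it is easy to see why most IT shops skip it.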
As a result, in most environments the network provides only a small fraction of the value-added services used to justify the cost premiums of many network switching platforms. Instead, traditional network vendors have stood in the way of a solution to this problem by continuing to prop up legacy ideals of how physical networks should be configured and what functions they should provide. In practice, most value-added network services have been dumbed down to simple VLAN-based segmentation between traffic classes – and even for this simple requirement, traditional networking still fails miserably, as switches from leading vendors are generally still unable to dynamically update VLAN configurations to support an automated virtual machine migration.
Server virtualization did, however, come with a solution to this problem: the hypervisor switch – a place where network service policies can be configured easily during application provisioning and remain associated with the virtual machine when it moves. But this approach has been mired in political battles and obfuscated by years of vendor attempts at alternative approaches aimed at unnaturally keeping all network services within physical networking devices. Whether that made sense or not was secondary to leading networking vendors, whose primary objective was making their own products as relevant and as margin-rich as possible. To be fair, especially in the early days of server virtualization, most were focused on stuffing as many VMs as possible into a server and did not want to share CPU and memory with networking requirements. While there is truth to this, it is a solvable dilemma that has, for practical purposes, already been solved. But there has been no shortage of competing methodologies that have added to the confusion around delivering network services in virtualized environments, and today there is still competing messaging around whether to use hypervisor switches, specialized NICs, or specialized methods to provision per-VM policy in the physical access switch.
Today, however, there should be no more confusion around this issue: it is very clear that the hypervisor switch is the natural and best fit for provisioning application-related network policy and services. Hypervisor switches are rapidly evolving to better support the few remaining access-layer capabilities they don’t yet handle well. And while there may still be competing ideas, one primary factor has put the nail in the coffin of competing methodologies: the cloud.
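Contrast the hypervisor-switch model with the port-keyed approach described earlier: here policy is attached to the VM’s virtual port, so it travels with the VM automatically. This is a conceptual sketch (class names and host names are hypothetical, not VMware’s or any vendor’s API), but it captures why migration stops requiring a trouble ticket.

```python
# Sketch of the hypervisor-switch model: network policy is keyed by the
# VM, not by a physical switch port, so it follows the VM when it moves.
class HypervisorSwitch:
    def __init__(self, host: str):
        self.host = host
        self.vports = {}  # vm name -> policy dict

    def attach(self, vm: str, policy: dict) -> None:
        self.vports[vm] = policy

    def detach(self, vm: str) -> dict:
        return self.vports.pop(vm)

def live_migrate(vm: str, src: HypervisorSwitch, dst: HypervisorSwitch) -> None:
    """Moving the VM moves its network policy with it -- no manual
    reconciliation against whatever already sits on the destination."""
    dst.attach(vm, src.detach(vm))

hv1, hv2 = HypervisorSwitch("host-01"), HypervisorSwitch("host-02")
hv1.attach("web-vm-7", {"vlan": 10, "dscp": 26, "acl": ["permit tcp any eq 443"]})
live_migrate("web-vm-7", hv1, hv2)
assert "web-vm-7" in hv2.vports and "web-vm-7" not in hv1.vports
```

Because each VM carries its own policy, the rationalization problem from the physical-port model simply disappears: twenty VMs on one host are twenty independent virtual ports.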
Another thing we saw a lot of at the Open Networking Summit was the rising maturity of Infrastructure-as-a-Service (IaaS), which is rapidly growing to support complex enterprise requirements and native layer-2 connectivity options that can make cloud-based infrastructure act just like infrastructure on premises. The new wave of enterprise-ready IaaS services now becoming available is attractive not only from a cost standpoint; in most cases these services offer far more agile management tools and unique capabilities that most enterprise IT environments cannot yet deliver. It is abundantly clear that enterprise IT needs to start delivering services comparable to cloud providers’ at a low price point, and to have an intelligent hybrid cloud strategy so that IT can provide governance and control of cloud services for the enterprise.