Martin Casado and his team have been making waves in the SDN community again, this time by highlighting the evolving demands that cloud software is placing on infrastructure. Nicira's recent proposal for the Open vSwitch Database Management Protocol (OVSDB) suggested that OpenFlow may be too low-level, and that cloud software needs a higher-level interface when interacting with physical infrastructure.
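For readers who haven't looked at it, OVSDB is a JSON-RPC based protocol for managing a switch's configuration database rather than its individual flow tables. Here is a rough, hypothetical sketch (building a request message only, not a working client; the table and row values are illustrative) of what "higher-level" means in practice: the caller states an intent like "create a bridge named br0" instead of programming flow entries one at a time:

```python
import json

def make_transact(db, *operations):
    """Build an OVSDB-style 'transact' JSON-RPC request envelope."""
    return json.dumps({
        "method": "transact",
        "params": [db, *operations],  # database name, then one or more operations
        "id": 0,
    })

# Intent: create a bridge named br0 -- how the switch realizes this is its business.
request = make_transact(
    "Open_vSwitch",
    {"op": "insert", "table": "Bridge", "row": {"name": "br0"}},
)
print(request)
```

Contrast this with OpenFlow, where the controller would instead install individual match/action flow entries on the datapath.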
There is no need to try to hide the elephant in the room here. The immediate knee-jerk reaction has been concern over what this would mean for a still-fragile industry, where support for OpenFlow and SDN only gained critical mass this past year. Yet at today's pace of competition, the entire industry has already jumped into the deep end, and it has been irreversibly changed by the momentum SDN has gained.
My take: I think the Nicira team is dead on in defining what they need from infrastructure with OVSDB ... but I also don't think this has much to do with OpenFlow, or with any method of directly controlling the physical infrastructure.
What keeps coming back to me on this issue is the presentation from Prof. Scott Shenker at the initial Open Networking Summit, "The Future of Networking, and the Past of Protocols" - still the #1 presentation I recommend for anyone interested in learning about SDN. Dr. Shenker bypassed the transient technical concerns and went straight to the heart of the issue, observing that in networking we have "built an artifact, not a discipline", and noting further that other fields within computer science are taught from scientific principles, while networking still largely consists of "a big bag of protocols".
The main point I took from Dr. Shenker's presentation is that the field of networking had deviated from the best that computer science has to offer - we lost our way. And I think modern trends make it increasingly clear why this has become such an important issue: capitalism and globalization have intensified industry competition to the point that we are bringing net new computer science from theory to discovery to application at a pace unprecedented in history. This is not a temporary phenomenon; it is the new norm. That being the case, we as an industry need to decide whether we will bring new innovation to light with proper scientific discipline and structure, or let it become a train wreck and try to sort things out later.
To answer that question, we must consider why the cloud exists in its current form today: at the dawn of web 2.0, industry leaders had no company they could buy software from that could handle internet-scale demand ... so they turned largely to academia. Today we can see that this disciplined approach to innovation yielded a fundamentally new paradigm for development, one that lets us continuously co-evolve products jointly with customers while gathering disciplined, quantitative metrics to drive the sometimes subconscious desires people have for the evolution of the computing experience.
So for me, the entire SDN movement is at its core about bringing networking development in line with the best that science can offer - not teaching a 'big bag of protocols', but creating disciplined principles that let networking provide a rock-solid foundation for the future of computing.
And it is through this lens that we must consider how SDN should continue to evolve. I feel somewhat different about this today than I would have even a year ago, primarily because a year ago I was focused on driving SDN into the networking industry and not paying as close attention to the other silos that make up infrastructure. But the evolution of networking hasn't happened at a random point in time - it has happened, for a reason, at a time when the whole of infrastructure and software is evolving in ways that are just as significant as those the SDN movement has brought to networking.
2012 was a seminal year for many things, but if one thing rises above the others for me, it is the finality and completeness with which the industry now views the 'software defined data center' (SDDC). While private cloud solutions had up to this point inexorably entangled cloud software with physical infrastructure, the industry's cohesive vision of the SDDC is now an autonomous unit: a complete data center that can accommodate thousands of applications, each with unique topology requirements, contained in a single logical container, completely abstracted from hardware and capable of unfettered mobility. The vision is now complete, and it completely abstracts software from hardware. And if there is one thing the SDN movement has cemented for me, it is the critical importance of layering and abstraction.
Most of us are familiar at some level with the story of how the mainframe era came to an end and ushered in x86 and the open computing revolution. A core component of that story is how a clean layer of abstraction can free innovation from the crippling limitations and complexity caused by the lack of well-defined interfaces. This clean abstraction enabled hardware and software to evolve independently of each other, creating the paradigm that fueled the growth of modern computing into the epicenter of the entire global economy.
This analogy has also been core to the SDN movement since its beginnings, and it has led many to speculate about what will become to networking what Microsoft or Linux became to the x86 ecosystem. However, SDN is not evolving in a vacuum, and the analogy can only be taken so far. Rather than asking simply what will become the operating system of the network, we need to ask what will become the operating system of the cloud - and even in the context of 'the cloud', analogies paint only a partial picture, because much of what we are developing today is fundamentally new. One of those new components is the concept of a master hardware abstraction layer ... a function that, in our analogy, was provided by a computer operating system. The cloud is different, yet the fundamental principle of clean abstraction hasn't changed one bit. Today, with years of highly modular software development and service-oriented architectures now in hindsight, my view is that while many new abstractions are manifesting in different areas, the interface between the whole of infrastructure and the whole of software needs a clean layer of abstraction - one that will ideally emanate from a coordinated set of converged infrastructure APIs. And this means, firmly, that the software layer can only make requests for the infrastructure to provide certain capabilities. It can be very specific about what it wants infrastructure to do ... but it is completely up to the infrastructure how it fulfills the request, whether with OpenFlow or an entirely different control protocol. It is critical that we maintain a clean abstraction here.
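To make that principle concrete, here is a minimal, hypothetical sketch (the class and method names are my own invention, not any real API): the software layer states what it needs through a single interface, and each infrastructure implementation is free to fulfill the request with whatever control mechanism it chooses:

```python
from abc import ABC, abstractmethod

class NetworkProvider(ABC):
    """The clean abstraction: software asks WHAT, the provider decides HOW."""

    @abstractmethod
    def connect(self, endpoint_a: str, endpoint_b: str, min_mbps: int) -> str:
        """Satisfy a connectivity request; the mechanism is the provider's choice."""

class OpenFlowProvider(NetworkProvider):
    def connect(self, endpoint_a, endpoint_b, min_mbps):
        # A real provider might push OpenFlow flow entries to its switches here.
        return f"openflow path {endpoint_a}<->{endpoint_b} @ {min_mbps} Mbps"

class VendorNativeProvider(NetworkProvider):
    def connect(self, endpoint_a, endpoint_b, min_mbps):
        # A different provider may use an entirely different control protocol.
        return f"vendor path {endpoint_a}<->{endpoint_b} @ {min_mbps} Mbps"

def provision(provider: NetworkProvider) -> str:
    # The software layer is identical regardless of how the provider works inside.
    return provider.connect("vm-web", "vm-db", min_mbps=100)

result_a = provision(OpenFlowProvider())
result_b = provision(VendorNativeProvider())
print(result_a)
print(result_b)
```

The point of the sketch is the boundary, not the code: swapping the provider changes nothing above the abstraction line.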
Why a common converged infrastructure API rather than completely separate APIs for each silo? Some have speculated that compute, storage and networking infrastructure APIs would evolve independently from one another, but I find this highly unlikely. While there will be separate interfaces and opportunities for best-of-breed and heterogeneous infrastructure stacks, the completeness of the SDDC means that the traditional lines dividing the silos of compute, network and storage - while they will always exist - will begin to blur and fade. The SDDC means a common team of developers working together across traditional boundaries to solve the vexing challenges of the new private cloud. For enterprise IT administrators, a common interface and common toolset has emerged to manage across all of these silos, and these new intelligent tools will free masses of engineers from lower-level technical details, letting them focus on cross-domain synergies and business-centric architectural features and paving the way for a new set of best practices to emerge for the new private cloud.
At the same time, on the infrastructure side, Dell's acquisition of Gale Technologies shone a light on a new vision of how the whole of physical infrastructure could be managed in a continuous lifecycle model - one that mirrors in infrastructure the benefits that 'software-defined' methodologies have offered to software. While still in its infancy, this vision promises an operational model that brings a highly automated, agile, continuous lifecycle to hardware, one that can evolve symbiotically with software, much as a well-designed abstraction empowered x86 and open computing to evolve and grow.
What does this mean for the SDN Movement?
Nothing to fear, everything to gain. For those who worry about what new and increasingly specialized variants of SDN methodologies could mean for this new and still fragile movement: have no fear. SDN is new, and most of us still haven't fully contemplated its tremendous breadth. SDN and OpenFlow do not, and never have, represented one specific thing; rather, they represent an entire paradigm that can evolve the very beating heart of the global economy - how we connect with each other. This means the entire end-to-end enterprise network and all its components are just the tip of the iceberg. The SDN paradigm that OpenFlow represents will grow to encompass communications from sensor networks to satellites to wireless base stations and optical networks and on and on - not only every communication device that exists today, but the masses of new technologies and connected devices emerging across the globe.
I don't doubt that those behind the SDN movement have always known this. So when I see the control protocol for hypervisor networking separating from the control protocol for physical infrastructure, or new signalling methods emerging for software to communicate its needs to infrastructure, to me this is a positive sign that cloud networking and cloud computing are continuing to grow and evolve ... and I expect many more variants of OpenFlow and SDN to emerge. Whether we call them different versions of OpenFlow or something else, it is clear that the control protocol used for hypervisor networking will have different needs from physical data center networking, which will have different needs from campus networking, service provider MAN, WAN, optical, wireless, and on and on.
Why? SDN is only here because at the dawn of the internet, the network had to evolve to support systems that could handle internet scale ... and it evolved through the growth of Linux, open source, and the software systems that power the internet and the cloud today. The very concept of SDN emerged from the growth of the cloud. It is more than a word or a phrase: SDN is the logic and methodology of cloud computing applied to networking. SDN IS cloud networking ... and so as long as the 'cloud era' persists, SDN is here to stay.
So that's my take - but what do you think? Leave your comments and thoughts below and let's hash this out together ... or reach out to me on Twitter @afewell. All good insight comes from open discussion and community effort, so please chime in!