Martin Casado is not your average executive, nor is he your run-of-the-mill computer scientist. First, his graduate research led to the creation of OpenFlow and the transformational changes it is driving across every domain of networking. Not content with one major revolution, Casado then led his team to revolutionize networking in the hypervisor with Open vSwitch.
While networking is not as center-stage as cloud computing, from my view Mr. Casado looks a lot like the Larry Page or Sergey Brin of the networking industry. As 'the cloud' takes shape, we have seen a new guard of technology-savvy executives and business-savvy technologists who 'get' the cloud and are laying the groundwork for the new era. Equal parts strategist and scientist, Casado has had a greater impact on the direction the networking industry will take in the cloud era than anyone else I can think of.
While there may have been some debate before, VMware's Nicira acquisition solidifies the new prominence of hypervisor networking. Long a neglected area, this fall it will become the centerpiece of enterprise infrastructure as Microsoft gears up to launch a major attack on VMware with Windows Server 2012.
And Microsoft ain't joking with the network virtualization stack it will include in the upcoming launch. I had the chance to preview it and have been duly impressed with its ability to virtualize complex topologies and provide a robust framework for virtual service integration. Both of these software titans are now poised to give networking software and automation a LONG-overdue facelift. By this time next year we will all be up to our necks in advanced hypervisor networking ... it's going to hit the industry like a bat outta hell.
But how will the hypervisor network take shape? How will it affect the future of L4-7 services in the Data Center? How will it affect the future of the physical network? These are themes of my conversation with Martin below.
Art Fewell: In many enterprises today, hypervisor networking hasn't been a central point of focus. It seems VMware has been hesitant to take on Cisco in delivering the rich network services the access layer is uniquely positioned to provide. But while they haven't been aggressive, the way I see it, if hypervisor vendors want it, the hypervisor network is their space to take. And when I saw this acquisition I thought ... 'okay, VMware is really serious about this space.'
Martin Casado: I think it's very clear that we're entering a world with two types of networks. You've got the physical network, which is really solving the problem of how you move packets between point A and point B in a complex graph, whether that graph is a fabric or it's a backbone. And that's a physical networking problem that requires boxes and wires and routing protocols. And this is something that the traditional vendors are fantastic at.
But now we've got this virtual network, which is its own layer. It's a layer that sits at the edge of the network and provides what looks like a physical network, but with all the operational properties of a VM. You can create them dynamically, you can put them anywhere, you can snapshot them, and you can rewind them.
And Nicira is a leader, of course, in the virtual networking space, and VMware has been pioneering the virtual networking space in its own environments. So a marriage like this allows us to provide unified solutions for multiple hypervisors for the virtual networking portion. But of course it's not solving the physical networking problem. You still have to build out physical networks, but it is introducing this new virtual concept.
Art Fewell: It definitely seems now that it makes the most sense to execute at the hypervisor layer. OVS has already been critical. I think David Ward actually pointed this out really well in his presentation at the first ONS, talking about how for years application developers have been using different tricks to work around the network. I always think of things like Microsoft Lync's codecs and their ability to dynamically adjust to network conditions. They're out there constantly trying to anticipate and guess what the behavior of the network is going to be. That's one of many examples of how applications have almost been forced into a bubble-gum-and-Band-Aid patchwork because the network hasn't been providing these services. Not that it's not good technique, but in an ideal world it would be nice if the applications could just say, "Hey, Network, can you tell me what your condition is?" or "Can you reserve resources for me?"
Martin Casado: Yeah, I actually think the real Nirvana, the Shangri-La, is for the applications to be totally oblivious to the network. If I could have my wish, I would be like, applications want to communicate and they'll pop up and they'll start communicating. And if there is available bandwidth, it will be consumed.
Today we kind of have the worst of both worlds. The network is often partitioned or has bottlenecks - and a lot of these are imposed by choke points that are put in place because we have to configure the networks by hand, and we configure them at these choke points. Then we filter traffic through those choke points with the operations we've configured. So because we have this substandard fabric, we have networks that are over-subscribed; we have issues with them; and the information isn't available for the applications to get at. So I think we will see the industry moving from worst towards best.
The worst is where we are right now, where an application just has to guess by probing. A little better than that is to get more information from the network, so the application actually has some real visibility. But the best is when applications don't worry about it at all - they just worry about communicating, without the network somehow degrading their performance.
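The middle stage Casado describes - applications querying the network instead of probing it - can be sketched in a few lines. Everything below is hypothetical: there is no standard network-visibility API today, so the class, method, and field names are invented purely to illustrate the idea of a codec picking a bitrate from reported conditions rather than guessing.

```python
# Hypothetical sketch: an application asks a controller's northbound API
# for current path conditions instead of probing and guessing.
from dataclasses import dataclass


@dataclass
class PathConditions:
    available_bw_mbps: float  # estimated spare bandwidth on the path
    rtt_ms: float             # current round-trip latency
    loss_rate: float          # observed packet-loss fraction


class NetworkInfoAPI:
    """Stand-in for a controller's visibility API (invented for illustration)."""

    def query_path(self, src: str, dst: str) -> PathConditions:
        # A real controller would compute this from its topology and
        # telemetry; here we return fixed illustrative numbers.
        return PathConditions(available_bw_mbps=850.0, rtt_ms=0.4, loss_rate=0.0)


api = NetworkInfoAPI()
cond = api.query_path("10.0.0.5", "10.0.1.9")

# The codec derives its bitrate from reported conditions: 80 percent of the
# spare bandwidth, capped at the codec's 1000 Mbps maximum.
bitrate_mbps = min(cond.available_bw_mbps * 0.8, 1000.0)
```

In the "best" world Casado wishes for, even this query disappears: the fabric has enough cross-sectional bandwidth that the application simply sends.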
I think the way we get to this perfect place is if you remove all of the manual configuration state and all of the policy state in the networks, and you actually build good fabrics. You've seen me write about this, and I know you have written about this as well. You look at the problem, you build up your physical network in a way that is redundant and doesn't have choke points, and then you have many fewer problems to worry about in the physical network. And you should have pretty much full cross-sectional bandwidth no matter where the communication goes.
Art Fewell: I think it makes a tremendous amount of sense, and I really enjoyed your paper on the separation of the virtual and physical networks. It's very apparent, and I think it's going to really help the physical network evolve where it needs to. When it was trying to grapple with every application/network challenge, especially through the cookie-cutter approach the whole traditional industry moves in, it just didn't seem like the traditional approach would ever get where it needs to. Another challenge is that the private cloud is going to increase network demands by orders of magnitude. We've seen the networking industry's one-application-at-a-time approach to QoS, and it's about time our approach to quality of service got modernized. When I look at what's happening with private cloud, we have a cloud controller that wants to move and optimize workloads to create the maximum possible resource utilization. And to deliver that type of cloud elasticity, your controlling platform has to have insight into each application's network and performance needs, and also into each physical host's I/O utilization -- so the controller can optimize resource utilization.
In the new private cloud we may end up with tier-1 applications on the same physical server as a tier-4 application. And maybe the priority-one application has very low sensitivity to network latency, while the priority-four application is very sensitive. You have all these combinations in these environments, and it really says to me that we need something like a QoS model for every single application. And I really don't see how that would even remotely be operationally feasible with the legacy traditional approach to network services.
Martin Casado: This is exactly right. If you look again at the way things are done today, it makes it impossible to build an efficient cloud. If you think about the physical network, because of things like VLAN placement you are limited in where you can place workloads. So even without thinking about the application at all, there are limits on where you can place a VM because of capacity issues or VLAN placement issues. And then on top of that, if you add constraints based on the application - for example, if you've got something that is tightly clustered and you want low latency, requiring physical proximity and/or high bandwidth - you're solving a very difficult constraint satisfaction problem.
You've got all these very difficult constraints when you're doing placement. So one of two things happens ... either you have a very inefficient cluster, where you're building out a bunch of physical networks that are grossly under-utilized, or you're not going to be able to sufficiently address the constraints - you're not going to be able to actually get optimality within the application. And the great thing about network virtualization is that with an optimal physical network, it's one big fabric. You can place these things where you want, and you do the distribution at the edge to ensure that the application can be optimized for whatever it needs to do. So that's what we're trying to do: we're trying to move from this very Balkanized view of the world, where you do have placement constraints and you do have configuration requirements, to one where you can actually treat the physical network as a pool of capacity.
Art Fewell: Let me ask you about one of the impacts I see becoming increasingly apparent. If you look at hypervisors in the enterprise, what I typically see is that VMware came in and brought its ability to do agile deployment. Now, in the Cisco world it was always the best practice to say: I have a new application, so I'm going to send the networking team in. They're going to go through the manual and figure out what ports it needs to operate on and what IP addresses and hosts it needs to communicate with, and they're going to write access lists for security and performance - best practices applied on an application-by-application basis. Now, I don't know how many enterprises ever actually did that for the majority of applications, but it seems like the message today is pretty much the same thing, slightly modernized ... instead of doing the same old thing for each application, now they do it for templates. Either way, the old 'best practices' do not seem like they could ever empower the future vision of the software-defined data center. It's really going to have to evolve to the state where applications dynamically interact with the orchestration tools and communicate their requirements through an API. So there's an application saying, "Here, Network, here are my security requirements and my performance requirements. Can I reserve these resources?" Instead of a very static, manual thing, it becomes a dynamic, application-driven interaction.
Martin Casado: Exactly right.
Art Fewell: In my experience, in most cases you have VMware customers that are using the standard vSwitch and not upgrading to the full vDS. A lot of them haven't been that eager to upgrade to Enterprise Plus licensing in the past. You've had Cisco coming in pushing the traditional Cisco best practices for network services, but from what I've seen, VMware administrators typically keep it as light as possible to avoid having to open network change tickets. And then you've got Cisco saying, 'Hey, you have to preserve the traditional access layer. You need VN-Tag, or you need the Nexus 1000V.' But again, I haven't seen 'preserve the traditional access layer' being very popular with hypervisor admins. With the introduction of VXLAN, and now the acquisition of Nicira, I see the hypervisor market really emerging as the new darling of data center networking. Now, the industry has long been speculating about operational silos and what roles will emerge in the new data center. Cisco has tried to raise the influence and control of networking teams, but it seems clear that the centerpiece is really the server and the application. I see server and hypervisor administrators as the team that is going to largely take control of that functionality, as a way to get past the slowness that came from having to coordinate a lot of different silos in application development, deployment and maintenance. Do you see the market evolving along those lines, with the hypervisor becoming very distinct from the traditional networking field, with different players, consumers, administrators and so on?
Martin Casado: Let me first talk about the notion of the hypervisor being the access layer to the network. When I first got into this, which was five years ago, it wasn't clear at all whether the access layer - the first-hop intelligence - was going to be on x86 or on a switching ASIC. Five years later, it's almost certain that it's going to happen on x86. Both the technology and the economics make more sense that way. From the technology standpoint, if you have two VMs communicating with each other, the memory copy that you do on the x86 is going to be far faster than any sort of DMA through a DMA engine into a switching path. Also, switching ASICs are basically limited, and you don't need really high aggregation on the server. So it doesn't make sense.
If you look at it from the perspective of a single server, you can do pretty much everything that you need to for virtual networking without any sort of special ASIC offload. And the reason is that, from the perspective of a single server, you don't have to have the cross-sectional bandwidth of 48 10-Gig ports. So it's almost certain to me, and I strongly believe this, that the first hop of networking intelligence is going to be on x86 and the vSwitch. And whether that vSwitch is owned by Cisco or by Microsoft or VMware or Red Hat or whomever, it is almost certainly going to be x86.
The second thing is, let's assume that what I just said is true - that as networking evolves, x86 is now the first hop of network intelligence. Then you've got the interesting question of who, within the customer environment, owns and controls this new network. You can see a couple of ways this could go - I don't think we know the answer yet. In one world, the networking guys focus on building good physical networks, and they don't focus on all of the stuff that happens on the server. So say you have a virtualized data center where you can do virtual networking. The networking guys build great physical fabrics out of whatever gear they want, and the goal of those fabrics is to be very quick and very simple to build out. Then the virtual networking piece becomes a piece of software, part of application provisioning. So just as you said, when your application comes up, it has whatever interaction it needs at the virtual networking layer and everything is totally automated. You don't require a human being in the loop. That's one way this could play out.
Another way it could play out in the field is if the networking guys actually have some interaction with, or some purview over, the x86. For example, they could be part of determining what a virtual network looks like and what sort of security policies it should get by default when a VM spins up, and what kinds of technologies should be used to integrate these virtual networks with the physical network. And I think this kind of scoping out of territories is still very much under discussion and playing out as we go.
And to give you two bits of color on that: we have accounts in which the cloud teams actually take over the network entirely. They dictate what the physical hardware looks like, and of course they control all the software. And we have other accounts in which the networking guys specify very precisely what policies and technologies are used in the virtual networks. So right now you see things across the board, and it's anybody's guess where this will converge.
Art Fewell: A lot of enterprises have been focusing heavily on virtualizing traditional enterprise applications and may have had limited insight into what's been happening behind the scenes with XaaS development. For a lot of newer web apps, many enterprises are using ASPs, or they're using XaaS or what have you. And from what I have seen when I visit networking departments, lots of times there isn't much awareness of what is happening with the latest cloud applications, especially the web-based, customer-facing applications that are more strategic to the business. There is often not a great deal of awareness of how much application development has morphed with modern distributed computing. And while we often still think, "I'm the network guy - I can go up there with a sniffer to help them debug at the packet level," the reality is that there are thousands of developers who are now much more capable of debugging complex application streams over the fabric. Given that newer distributed applications send much more complex communications over the fabric than in the past, I don't think the skill set of the average networking guy is right for network-level troubleshooting and analysis, because today it requires extremely deep knowledge of the inner workings of an application.
Martin Casado: This is a very important point. This is an area where you will start to see virtual networking shine. Like you said, if you go and look at a packet today, that packet tells you very little about what's going on. You don't really know who sent it. You don't really know where it's going. You don't have any higher-level semantics. You just have IP addresses and ports, which are effectively meaningless end-to-end. A port doesn't necessarily mean an application. An IP address collected yesterday could have been reassigned to a new host. MAC addresses can be overlapping. So it's very difficult to reconstruct something meaningful from a packet trace.
When you have virtual networking solutions like what we've been doing at Nicira, or what VMware is working on, all of the information that you need to reconstruct what's going on is already maintained by the system, because you have to maintain it in order to build a virtual networking solution. So imagine, if you will, that you have a virtual networking system in place, and it's collecting all of this debugging information and storing it in a database somewhere. Then you can take a packet trace, and while you're looking at it, you can correlate it with this database, and it will tell you the stuff you're actually interested in.
It will say, "This packet was sent at this time, from this VM to this VM. The policy of the virtual network at the time looked like this." And then you can even ask questions about when that VM came and went, and who was logged on to it. So we need to move away from the very low-level problem of looking at packet headers to high-level questions such as: Who actually sent this? Where was it going? What did the network look like at the time this happened? If you'll bear an analogy: in programming, the lowest-level thing you can do is look through memory to see what the program is doing, but from raw memory addresses it is very difficult to reconstruct which part of the program sits at which address and what is going on. If you use a debugger, though, it will tell you the symbol and reconstruct the context. In the same way, we'll now have enough information to reconstruct the context around these low-level packet traces.
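The correlation Casado describes can be sketched concretely. The snippet below is purely illustrative - the VM names, addresses, and the in-memory "database" are invented - but it shows why a time-indexed record of address assignments turns a meaningless IP in a capture into an answer to "who actually sent this?":

```python
# Illustrative sketch: resolve an IP address from a packet trace to the VM
# that held it at capture time, using metadata a virtual-networking system
# maintains anyway. All records here are hypothetical.
from datetime import datetime

# Which VM held which IP over which interval. Note the same IP is
# reassigned to a different VM later - exactly the ambiguity that makes
# raw packet traces hard to interpret.
vm_leases = [
    {"vm": "web-01", "ip": "10.0.0.5",
     "start": datetime(2012, 8, 1, 9, 0), "end": datetime(2012, 8, 1, 17, 0)},
    {"vm": "db-01", "ip": "10.0.0.5",
     "start": datetime(2012, 8, 1, 17, 0), "end": datetime(2012, 8, 2, 9, 0)},
]


def who_had_ip(ip: str, when: datetime) -> str:
    """Return the VM that held `ip` at time `when`, or 'unknown'."""
    for lease in vm_leases:
        if lease["ip"] == ip and lease["start"] <= when < lease["end"]:
            return lease["vm"]
    return "unknown"


# The same address in two captures means two different VMs.
morning = who_had_ip("10.0.0.5", datetime(2012, 8, 1, 12, 0))   # "web-01"
evening = who_had_ip("10.0.0.5", datetime(2012, 8, 1, 20, 0))   # "db-01"
```

A real system would join captures against this kind of store at scale, and also record policy versions and login sessions, but the lookup above is the essential move: from header fields to context.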
And so I'm not sure I would agree that network operators aren't capable of doing this type of debugging; I just think the tool sets haven't evolved enough to help them do it. What we're going to see is a proliferation of tools into which you can feed network-level captures, and they're going to spit out high-level and very interesting events. This is all part of the virtual networking revolution.
Art Fewell: I definitely anticipate so. And I think one of the big things, with VMware being the acquirer, is that we can anticipate a lot of these tools emerging as part of VMware's own toolset and partner ecosystem. I find that to be a good thing for the industry. Now, OVS is not necessarily dependent on OpenFlow - it doesn't have to be. What do you think about how OpenFlow will play into the hypervisor network and the future of OVS?
Martin Casado: Let me talk about Open vSwitch first. We are absolutely committed to continuing to develop Open vSwitch, and even to accelerating that development. We will have as many guys on it, maybe even more, and we're going to continue to port it to many different platforms. It's going to continue to be 100 percent open and a solution for anybody to use. We're very committed to that; VMware is committed to that. As for OpenFlow support in Open vSwitch, right now there is a tremendous amount of work going on to support OpenFlow 1.3, and this is coming from numerous organizations, including Nicira. You can expect that to be complete in the next couple of months. Open vSwitch will be available, it will be ported to multiple platforms, it will support OpenFlow 1.3, and it will be open. That is for sure.
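For readers who haven't touched Open vSwitch, the model under discussion looks roughly like this at the command line. The bridge name, controller address, and flow rule below are made up for the example, and the `-O OpenFlow13` flag assumes an OVS release with the OpenFlow 1.3 support described above:

```shell
# Create a bridge and attach a physical NIC to it.
ovs-vsctl add-br br0
ovs-vsctl add-port br0 eth1

# Hand control of br0 to an external OpenFlow controller
# (the address here is illustrative).
ovs-vsctl set-controller br0 tcp:192.0.2.10:6633

# Enable OpenFlow 1.3 on the bridge alongside the default 1.0.
ovs-vsctl set bridge br0 protocols=OpenFlow10,OpenFlow13

# Install a flow by hand: IP traffic for 10.0.0.5 goes out port 2.
ovs-ofctl -O OpenFlow13 add-flow br0 "priority=100,ip,nw_dst=10.0.0.5,actions=output:2"

# Inspect the resulting flow table.
ovs-ofctl -O OpenFlow13 dump-flows br0
```

In production the controller, not an operator, installs the flows; the point of the sketch is that the vSwitch exposes a programmable forwarding table on x86, which is exactly the "first hop on x86" argument Casado makes above.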
Now, in my experience, OpenFlow isn't 100 percent suitable right now for the full virtual networking problem. There are things required of it that it wasn't built to do. When OpenFlow was created, it was focused on controlling hardware forwarding pipelines. Nicira includes the founding team that created OpenFlow, and the protocol really hasn't moved far enough from those roots to be a hundred percent applicable to networking within the hypervisor. So my guess is that, going forward, there will have to be some extensions to OpenFlow, or some new protocol better suited for soft switching at the edge. At this point I can't really speculate on what that will look like, but I do believe something like it will arrive.
Art Fewell: I definitely agree with what you said in your paper, and I have a follow-up. The real significance here - something I think is fairly obvious - is that Wall Street pushes everybody to grow, grow, grow. So Cisco didn't have much choice but to try to make its strength in networking the centerpiece of the private cloud. But the way I view it, from the perspective of the private cloud, networking is a component, the same as CPU resources are a component, and so on.
So when I think about what we're trying to do from the private cloud perspective, we want to look at CPU, storage, memory and network I/O, and drive utilization of each as high as possible to maximize efficiency. To me, that really speaks to this: within the fabric that connects a cloud container together, if we want to maximize utilization, a tight coupling is going to have to emerge with the application space.
A lot of people have said that hypervisor networking takes physical switches and dumbs them down. There is kind of a commodity aspect, because I would anticipate VMware setting its own standards for integration with the physical fabric. But it seems to me that the requirements to really optimize the private cloud are in some ways going to be more demanding than anything we've seen in mainstream networking - ever. Real-time resource reservation, real-time flow steering, and all of the features we would need to sustain 70 or 80 percent efficiency levels, or whatever the target would be. So it really seems to me that the physical fabric inside a cloud container - as distinct from the fabric connecting different cloud containers together - is going to have to become very tightly integrated with the hypervisor network over time. It's not a pure, simple commodity thing.
Martin Casado: Exactly. And this is a very important point that you are hitting on. People look at these ventures and immediately think, 'oh, this is about commoditization.' That is not the case at all. The physical network doesn't go away, and in fact the demands on it are going to become very rigorous. Because, like you said, you're going to be placing workloads in different places, and you're going to have different constraints that have to be enforced by the physical fabric. So for the traditional hardware vendors there is a lot of room to really innovate on creating good, differentiated, high-speed fabrics. And what is nice is that they can actually focus on building a fabric, instead of also building something that has to handle all of the configuration needed for provisioning - that's going to go into the software layer. We don't know exactly how the technologies will evolve going forward, but there is definitely still going to be differentiated hardware. I think the technology is going to morph to allow a lot of the provisioning to happen in software at the edge, and the fabric to do the forwarding.
Art Fewell: I anticipate that in the coming years we will see a significant increase in software-based L4-7 services, and that over time improved software techniques and CPU improvements will allow many hardware-centric services to migrate to software. How do you think hypervisor L4-7 network services will evolve?
Martin Casado: I think there are two things going on here. First is the migration from hardware to software, but that has been happening slowly over time anyway. Many middleboxes today have minimal hardware offload (say, SSL), with most of the function implemented on x86. Where we will see a bigger change is that the services must now become distributed. The software-defined datacenter means that any workload can be placed anywhere. In such an environment, you don't want to funnel traffic through a choke point; rather, you keep all of the aggregate bandwidth of the underlying network by distributing the services throughout it.
Art Fewell: For network services that still benefit from hardware, today these services are normally provided by appliances in the stovepipe model, which doesn't seem sustainable for SDN. I have seen different proposals for how hypervisor networks could gain hardware support, including ideas that put advanced hardware on the ingress physical switch or on the NIC. Do you have any thoughts on this, or on if and how networking silicon will play a role in the future of hypervisor networks?
Martin Casado: I truly believe that the datacenter of the future will have two types of hardware: a) the x86 CPU, and b) the switching ASIC. The switching ASIC is responsible for providing switching between large numbers of ports. If you need to move packets between 2,000 southbound 10G ports without oversubscription, you need a switching ASIC. However, over time most other functions are likely to be consumed by software running on a general-purpose processor.
Art Fewell: Growth in hypervisor networking could really accelerate things that would probably have taken forever in traditional networking. One that comes to mind is namespace networking, with projects like Serval ... do you see any of these types of innovations - ones that could fundamentally shift the networking stack - coming anytime soon, or sooner than they would have otherwise?
Martin Casado: I believe just having the first-hop switch on x86 in the hypervisor is enough to change the industry. Not only does it provide the development cycle of software for new features, but it gives the network richer semantics about the end state than we've ever had before.