The recent Open Networking User Group (ONUG) meeting in New York City attracted 400 participants, some of whom attended in-depth tutorial sessions about software-defined networking (SDN) on day one, and others who stayed for the members-only closed-door sessions on day two (vendors and press excluded). Network World Editor in Chief John Dix caught up with Nick Lippis, who co-founded the SDN user group with representatives from Fidelity Investments, for his assessment of what was learned.
What was your takeaway from the recent meeting?
There were a bunch, but one of the big ones I walked away with is that, when we met last February, there was a lot of discussion about physical switches being controlled by controllers using OpenFlow. Now the thinking has shifted to overlay networks and white boxes and Linux automation. When I asked the audience how many are implementing the OpenFlow-based approach on physical switches, I think only one person raised their hand. So that is a major shift in terms of how this community is starting to wrap their minds around which technologies they’re focusing on.
And there was a shift in timeframes, too. We have some real big stakes in the ground around piloting now and into 2014, with deployment in 2015. So it only makes sense that we’re seeing all the vendor announcements this year, and those will ramp during 2014 as the number of pilots increases. Then, as you transition from pilots to deployments, market share will start to be locked in. So I think we’re at an acceleration point.
A lot of the folks I talked to say they’re not going to do a kind of hybrid approach, a little bit here and a little bit there. They’re going to get the pilots done as soon as they feel they have the skill sets, then they’re going to go for some pretty big deployments.
Does the shift you mention – the lack of focus on OpenFlow now, for example – represent a potential stumbling block for the movement? There was, after all, so much effort on that front.
The OpenFlow piece on the hypervisor side is alive and well. And there are other protocols that are going to be really important for open networking, like VXLAN and OVSDB. But all the activity has shifted into the virtualization domain.
Keep in mind that, besides the technology integration, the movement involves organizational integration with the rise of DevOps. We’re going to start seeing DevOps have a large and significant influence over network equipment purchases and how networks are designed, because companies want the automation benefits of SDN. Fundamental automation is what is really driving all of this. That’s one of the resounding takeaways, because the SDN use cases we identified are all about automation, every single one of them.
So you posted a number of potential use cases and attendees got to vote for the ones they thought were most pressing/important. What did you find?
The top three were integrated Layer 4-7 network services, virtual network overlays and branch office wide area networks. And all three have to do with automation.
Regarding the first one, integrated Layer 4-7 network services, the only reason we have appliances like load balancers, VPNs, firewalls and IPSes is to make up for the inadequacies of the TCP/IP protocol. So you have all these separate boxes, separate management systems and separate vendors you have to contend with. And frankly, IT doesn’t want any of it. They’re done with appliances. They want the functions integrated into an overarching overlay strategy so when you fire up a workload you can easily add the various network services you need, whether it’s load balancing or firewalling, and you have one common management system.
We don’t want to deal with how to chain appliances together. We want them integrated and we want to get rid of all of the operational burden that’s required to manage and maintain them. That’s what the first one is really all about.
OK. And the second?
It’s all about open overlays and choice. A good example of that is what OpenStack is offering with Neutron, where the Modular Layer 2 (ML2) plug-in can be anything. It can be VXLAN, it can be GRE, it can be STT (even though that’s proprietary). You use the underlay, the signaling and routing you already have, to support a service that’s overlaid on top of it.
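To make that pluggability concrete, the ML2 plug-in is driven by a simple configuration file that lists which encapsulation "type drivers" are enabled. The fragment below is a hypothetical ml2_conf.ini excerpt; the section and option names are the stock upstream ones, but the specific VNI range is an assumed example:

```ini
[ml2]
# Any registered type driver can back a tenant network.
type_drivers = vxlan,gre,vlan
tenant_network_types = vxlan

[ml2_type_vxlan]
# Assumed example range of VXLAN network identifiers (VNIs)
# available for tenant overlay networks.
vni_ranges = 1:1000
```

Swapping the overlay technology is a matter of changing the driver list rather than replacing the whole networking stack, which is the openness the use case is asking for.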
And the service that’s been talked about a lot of late has been Virtual Machine-to-Virtual Machine, but there’s the optical overlay and there’s also the wide area network overlay as well. So what this use case says is, “Overlays are good. We want them. We want them to be open, and we want choice in how we deploy them.”
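Under the hood, a VXLAN-style overlay is mechanically simple: each tenant Ethernet frame is wrapped in an 8-byte VXLAN header (carrying a 24-bit virtual network identifier, or VNI) and shipped over ordinary UDP/IP. Here is a minimal Python sketch of that header as defined in RFC 7348; the function names are illustrative, not from any particular library:

```python
import struct

# "I" flag (bit 3 of byte 0): indicates the VNI field is valid (RFC 7348).
VXLAN_FLAG_VALID_VNI = 0x08

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header that prefixes each tunneled frame."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI is a 24-bit value")
    # Byte 0: flags; bytes 1-3: reserved; bytes 4-6: VNI; byte 7: reserved.
    return struct.pack("!I", VXLAN_FLAG_VALID_VNI << 24) + struct.pack("!I", vni << 8)

def parse_vni(header: bytes) -> int:
    """Recover the 24-bit VNI from a VXLAN header."""
    return struct.unpack("!I", header[4:8])[0] >> 8
```

Because the outer wrapper is just Ethernet + IP + UDP + this header (about 50 bytes of overhead), the overlay rides on whatever routed underlay you already operate, which is exactly the point the use case makes.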
How does that differ from the vision that VMware puts forward with its Nicira technology?
That’s primarily focused on the VM-to-VM piece, and it’s a closed, monolithic stack. There’s nothing that’s open there. You can’t swap software modules into and out of that stack.
And the third one is about branch office support?
Yeah. At the branch office what the industry has done is stack up appliances for everything from unified communications to WAN optimization, firewalls, routers, wireless network controllers, etc., so if you have 10,000 offices and four appliances apiece, you have 40,000 appliances spread all over the place. So what this one says is, “We want to integrate all those branch appliances in software and be able to control the bandwidth to those branch offices a lot more effectively and efficiently.” I was pleasantly surprised to see that become one of the top three, because it’s a big issue. It’s a huge amount of cost, and it’s a huge amount of operational burden as well.
So the white box option didn’t make the top three, the idea of using software to press generic x86 machines into different infrastructure roles?
We focused on the top three, but there were a bunch of others that were just as interesting. We had gotten feedback from the vendor community saying, “We’re not allowed in the room, so what good is it if you guys just talk to yourselves and don’t provide us any input about what we should do?” So we did the use cases to provide some guidance for the vendor community to prioritize their R&D investment.
The white box piece didn’t make it to the top three, but it doesn’t mean it’s not important. It just means it’s a little further out and the industry is still in learning mode. Some people think a white box is just a cheap box from Taiwan. Others think of it as a top-of-rack switch that has an OpenFlow interface. Others think it’s a switch you can basically buy from anybody and then load an operating system and application on top of that. The latter is where the industry is finding the most interest.
What’s up next for ONUG? When is the next meeting and what do you hope to achieve between now and then?
We will be announcing spring and fall 2014 plans shortly.