
Google's software-defined/OpenFlow backbone drives WAN links to 100% utilization

A Q&A with a principal engineer on the motivations for going with OpenFlow, the lessons learned, and what's next

By John Dix, Network World
June 07, 2012 01:46 PM ET

Network World - Google, an early backer of software-defined networking and OpenFlow, shared some details at the recent Open Networking Summit about how the company is using the technology to link 12 worldwide data centers over 10G links. Network World Editor in Chief John Dix caught up with Google Principal Engineer Amin Vahdat to learn more.

Why did you guys set out down the OpenFlow path? What problem were you trying to solve?

We have a substantial investment in our wide-area network and we continuously want to run it more efficiently. Efficiency here also means improved availability and fault tolerance. The biggest advantage is being able to get better utilization of our existing lines. The state of the art in the industry is to run your lines at 30% to 40% utilization, and we're able to run our wide-area lines at close to 100% utilization, just through careful traffic engineering and prioritization. In other words, we can protect the high-priority traffic in the case of failures by sacrificing elastic traffic that doesn't have any strict deadline for delivery. We can also route around failed links using non-shortest-path forwarding, again with a global view of network topology and dynamically changing communication characteristics.
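To make the prioritization idea concrete, here is a minimal sketch in Python of priority-based bandwidth allocation on a single WAN link. It illustrates the technique Vahdat describes, not Google's implementation; the flow names, priority classes, and capacity numbers are hypothetical.

```python
# Hypothetical sketch: admit guaranteed traffic classes first, then let
# elastic "no deadline" traffic soak up whatever capacity remains,
# driving the link toward 100% utilization.

def allocate(link_capacity_gbps, demands):
    """demands: list of (flow_name, priority, demand_gbps);
    a lower priority number means more important traffic."""
    remaining = link_capacity_gbps
    allocations = {}
    for name, _prio, demand in sorted(demands, key=lambda d: d[1]):
        granted = min(demand, remaining)  # most important traffic is served first
        allocations[name] = granted
        remaining -= granted
    return allocations, remaining

demands = [
    ("user-facing", 0, 4.0),   # strict-deadline traffic
    ("replication", 1, 3.0),   # important but deferrable
    ("bulk-copy",   2, 10.0),  # elastic: no delivery deadline
]
alloc, spare = allocate(10.0, demands)
print(alloc)   # {'user-facing': 4.0, 'replication': 3.0, 'bulk-copy': 3.0}
print(spare)   # 0.0 -- the link runs full
```

Because the elastic class is sized to absorb whatever is left over, the link can run full without putting the deadline traffic at risk.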

RELATED: Google shares lessons learned as early software-defined network adopter

Standard network protocols try to approximate an understanding of global network conditions based on local communication. In other words, everybody broadcasts their view of the local network state to everybody else. This means if you want to effect any global policy using standard protocols you're essentially out of luck. There is no central control plane that you can tap into. So what OpenFlow gives us is a logically centralized control plane that has a global view of the entire network fabric and can make calculations and determinations based on that global state.
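As a sketch of what that global view buys you, the toy controller below (hypothetical topology and site names, not Google's system) holds the entire fabric in one data structure and computes paths directly against it. When a link fails, the controller just edits its model and recomputes; there is no flooding or distributed convergence.

```python
import heapq

def shortest_path(topology, src, dst):
    """Dijkstra over a global view: topology maps node -> {neighbor: cost}."""
    pq = [(0, src, [src])]
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, w in topology.get(node, {}).items():
            if nbr not in seen:
                heapq.heappush(pq, (cost + w, nbr, path + [nbr]))
    return None

topology = {
    "dc-us":   {"dc-eu": 1, "dc-asia": 4},
    "dc-eu":   {"dc-us": 1, "dc-asia": 1},
    "dc-asia": {"dc-us": 4, "dc-eu": 1},
}
print(shortest_path(topology, "dc-us", "dc-asia"))  # (2, ['dc-us', 'dc-eu', 'dc-asia'])

# The controller learns the dc-us <-> dc-eu link failed: update the
# global model and recompute, falling back to the non-shortest path.
del topology["dc-us"]["dc-eu"]
del topology["dc-eu"]["dc-us"]
print(shortest_path(topology, "dc-us", "dc-asia"))  # (4, ['dc-us', 'dc-asia'])
```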

One hundred percent utilization is incredible. And you can do that without fear of catastrophe?

Right, because we can differentiate traffic. In other words, we are very careful to make sure that, in the face of catastrophe, the traffic that is impacted is the relatively less important traffic.
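Continuing the hypothetical allocation sketch above, this is what that differentiation looks like when a failure shrinks available capacity: the same priority-ordered allocation is recomputed against the reduced number, so the elastic class takes the hit first.

```python
# Reuses the hypothetical allocate() and demands from the sketch above.
# The link degrades from 10G to 8G; only the elastic bulk-copy class
# loses bandwidth, while the deadline traffic keeps its full allocation.
alloc, _ = allocate(8.0, demands)
print(alloc)  # {'user-facing': 4.0, 'replication': 3.0, 'bulk-copy': 1.0}
```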

Is control of the network completely removed from the routing hardware and shifted to servers?

You used an interesting word -- completely. There are going to be some vestiges of control left on the device itself, but maybe for simplicity's sake let's say it's completely removed. We've shifted it from running on an embedded processor in individual switches -- and that embedded processor is usually two or three generations old; if you opened up a brand-new switch today it wouldn't surprise me if you found an 8-year-old PowerPC processor -- to a server, which could be the latest-generation multicore processor. So getting a 10X performance improvement is easy, and even more than that isn't hard.

I understand you built your own gear for this network?

We built our own networking gear because when we started the project two years ago there was no gear that could support OpenFlow.
