At this week’s Open Networking Summit, Google spoke publicly for the first time about its custom data center network. For nearly a decade, we’ve been hearing, reading and writing about how Google was building its own switches and writing its own software to handle the tremendous traffic load on its search engine and applications because vendor offerings were either not up to the task, too expensive, or both.
This week we found out how they did it. In a keynote presentation at ONS, Amin Vahdat, Google Fellow and Technical Lead for Networking, described the company’s data center network architecture, capabilities and capacity for a rapt audience thirsting for information on software-defined networking implementations and experiences.
Vahdat summarized his talk here and offered use of the architecture to external developers through the Google Cloud Platform.
To summarize Vahdat’s summary:
- The network is arranged around a Clos topology, in which a collection of small, inexpensive switches is grouped into a much larger logical switch.
- Google uses an internally written centralized software control stack to manage thousands of switches within the data center and treat them as one large fabric.
- The company’s current generation Jupiter fabrics are designed to deliver more than 1 Petabit-per-second of total bisection bandwidth, enough for 100,000 servers to exchange information at 10Gbps each, or enough to read the entire scanned contents of the Library of Congress in less than 1/10th of a second.
- Over the past decade, Google has increased the capacity of a single data center network more than 100x.
- And in building its own software and hardware, Google relies less on standard Internet protocols and more on custom protocols tailored to its own data centers — and, perhaps eventually, to others’.
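The petabit figure above is easy to verify, and the Clos bullet can be made concrete with the textbook sizing rule for a three-stage folded Clos ("fat-tree") network. A quick sketch of both — note that the switch port count used below is illustrative, not a parameter Google disclosed:

```python
GBPS = 10**9  # bits per second in a gigabit

# Jupiter's stated aggregate: 100,000 servers exchanging data at 10 Gbps each.
servers = 100_000
per_server_gbps = 10
total_bps = servers * per_server_gbps * GBPS
print(total_bps / 10**15)  # -> 1.0, i.e. 1 petabit per second of bisection bandwidth


def fat_tree_hosts(k: int) -> int:
    """Hosts supported at full bisection bandwidth by a three-stage
    folded Clos built entirely from identical k-port switches
    (the classic k**3 / 4 result)."""
    return k**3 // 4


# Example: 48-port switches -- a common commodity size, chosen here
# purely for illustration.
print(fat_tree_hosts(48))  # -> 27648 hosts
```

The point of the arithmetic is the one Vahdat makes: aggregate capacity comes from the topology and the control software, not from any single big switch.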
States Vahdat in his blog:
Our network control stack has more in common with Google’s distributed computing architectures than traditional router-centric Internet protocols.
Perhaps vendors snubbed by Google these past 10 years can learn something about data center network product development from the hyperscale company. A key test might be how attractive the architecture proves to external developers.
But then, is it the Google data center network architecture that attracts them? Or is it Google itself…
The degree to which the industry benefits may hinge on how much companies like Google and Microsoft share — not only their experiences, but actual code, through open source and other means. Cloud operators and enterprise users are being pressed at ONS this week not only to use open source for their SDNs, but to contribute to the open source SDN community as well.
But as Microsoft Azure CTO Mark Russinovich said at ONS this week, that decision is not an easy one – it comes down to weighing the costs and benefits to the contributor, the benefit to the community, and what constitutes “secret sauce” intellectual property vs. shareable development.