Today, the buzz in networking is all around software-defined networks -- and nothing could make Arista Networks CEO Jayshree Ullal happier. Ullal spent 15 years at Cisco, where she ran the network giant's core switching and data center businesses, before joining Arista, which was founded by Sun Microsystems co-founder and Chief System Architect Andy Bechtolsheim and David Cheriton, a Stanford University professor of computer science and electrical engineering (and fellow Cisco alumnus). Ullal says Arista's data center switches were born to support SDN and provide both the power and flexibility required for today's highly virtualized corporate and cloud data centers. In this installment of the IDG Enterprise CEO Interview Series, Ullal spoke with Chief Content Officer John Gallant about the reality and hype around SDN, and why the data center requires a different network than your father's general-purpose Cisco net. She also explored how her work at Cisco shaped Arista's strategy, and shared insights on how Arista's partnerships with VMware and Cloudera are making it easier to move to cloud and embrace big data, respectively.
"Our top five differentiators are all tied to our software."
— Jayshree Ullal, CEO, Arista Networks
There are a lot of networking alternatives out there. Why should someone buy from Arista?
Arista saw three disruptions in the market: a hardware disruption; a software disruption; and a customer buying disruption, which in my mind is the most important thing. You can invent all you want on the technology side, but you have to see the customers changing their market position.
The hardware technology disruption was that in the 1990s, the only way to build any kind of high-speed networking was through your own in-house ASICs [application-specific integrated circuits] and specialty chips. That's not true anymore. There are now three to five merchant vendors available, whether it's Intel, Broadcom or others, supplying us much of the silicon. Their chips are sometimes an order of magnitude better in power, footprint, density, latency, performance and scale. Arista was able to take advantage of that disruption in hardware.
The second is software. We were very inspired by Cisco's software focus on the enterprise side and Juniper's on the service provider side, and we saw that we could build a purpose-built, modern operating system just for the data center and the cloud. We didn't try to do it for general-purpose networking. We really focused on our mission, which is high-performance applications for the data center and cloud. It's called the Extensible Operating System (EOS), and no other networking operating system is as modern, as self-healing and resilient, or as designed for the cloud.
And the third, speaking of that, is the cloud itself. The enterprise market is shifting. Every CIO is being asked for a strategy on what they are doing with the cloud in terms of applications and infrastructure. Whether it's a private cloud, a public cloud or a hybrid cloud, these are becoming an important piece of the strategy. As Amazon innovated on the application side, you can think of Arista as really providing that market disruption on the networking side.
Explain the cloud angle in a little more depth. What were you setting out to do to support or enable cloud?
More and more people are outsourcing to modern applications -- whether it's Salesforce.com or Amazon itself. [They're supporting] high-performance computing, or high-frequency trading or, increasingly now, big data and network virtualization. The network infrastructure needs to adapt. It cannot be so monolithic. It cannot be one physical port equals one VLAN equals one network switch. It really needs to be much more massive in scale. A typical enterprise network is a 10,000-node, three-tier network, and we were able to build a much flatter, fatter topology at Layer 2 and 3, using what we call the leaf-spine architecture that can scale to 50,000 to 100,000 nodes. That was our first premise.
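The scaling advantage of the flatter two-tier topology Ullal describes can be seen with simple port arithmetic. The sketch below is a back-of-the-envelope illustration of leaf-spine fan-out, not Arista's actual design tooling; the port counts are hypothetical examples.

```python
def leaf_spine_capacity(leaf_ports, spine_ports, uplinks_per_leaf):
    """Estimate server-facing capacity of a two-tier leaf-spine fabric.

    Each leaf dedicates `uplinks_per_leaf` ports to the spine layer
    (one link to each of `uplinks_per_leaf` spines); the rest face
    servers. Each spine port terminates one leaf, so the spine port
    count caps the number of leaves in the fabric.
    """
    server_ports_per_leaf = leaf_ports - uplinks_per_leaf
    max_leaves = spine_ports  # one uplink from every leaf to every spine
    return max_leaves * server_ports_per_leaf

# Example: 64-port leaf switches with 4 spine uplinks, 128-port spines.
print(leaf_spine_capacity(64, 128, 4))  # 7680 server-facing ports
```

Because every server is the same two hops from every other server, capacity grows by adding leaves and spines rather than by adding tiers, which is how such fabrics reach the tens of thousands of nodes mentioned above.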
The second [thing we focused on] was application delays. Don't build a network as a cost center, but really build it as a profit center by addressing the applications themselves. We entered the high-frequency trading market early on to understand their trading algorithms and map them to the latency requirements. That became an instance of a high-performance financial cloud, where they started building the network for that application separate from the enterprise network.
In Silicon Valley there are a large number of Web 2.0 providers, whether they're search engines or social networks, and the kind of scale they build is just unbelievable. It's 100,000 nodes, and increasingly, one machine, one physical server, is not one node -- it's 20 virtual machines, which means you could be enabling 100,000 physical nodes but you are really enabling 1 million virtual nodes. There's huge virtual machine sprawl and physical sprawl. The CPU at one point wasn't being fully utilized. But now, with the new multi-core CPUs, the pressure is back on the network. That's why, whether it's a private or a public cloud, the Web 2.0 companies are moving massively to high-density 10G, 40G and 100G [networks] that are requiring a new type of architecture and new software as well.
What are the things that make you different than a general-purpose networking company like Cisco?
At the highest level I would say our software, our EOS. It's open, it's built out of straight Linux. But then we added what we call multi-processing, state-oriented software that allows you to do the kind of things that you could only do in mainframes and servers. It's funny how hardware changes every 18 months in networking, but software doesn't change for decades and has remained monolithic for so long. Our top five differentiators are all tied to our software.
The first is that we build, without using any proprietary components, active/active networks that can scale to 50,000 and 100,000 nodes. Other companies try to do that with proprietary technologies. You may be aware of Juniper's QFabric or Cisco's FabricPath and OTV [Overlay Transport Virtualization]. We are able to do it in a standards-based fashion, and every one of our networks interoperates with Cisco routers, Juniper switches, NetScreen firewalls, you name it.
The second is, because of the software, we were able to bring to the data center and cloud what we call self-healing resilience. Usually, redundancy and resilience means buy two of everything and connect them in case one fails. It's great for the vendor to sell two of everything. But we were able to do it right in our software. Look at software agents today and how they interact. In a traditional network operating system, the agents talk to each other with something called IPC, inter-process communication, so if you have a memory leak in one piece of software, it spreads. Think of the cloud where you have, like we described, 100,000 of these nodes -- the multiplier effect of failure is huge with inter-process communication. Arista instead chose a publish/subscribe model using a built-in database, SYSDB, where the state of every software agent is stored. Because that code is machine-generated, not human-written, it's the most resilient piece of code. Let's say you have a failure. We automatically track the failure and contain it. Then we repair it -- we actually spin up a new agent. There's no maintenance window, so the network manager doesn't even have to know.
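A minimal sketch of the publish/subscribe idea Ullal describes follows. SYSDB itself is proprietary; the class and path names here are purely illustrative. The point is that agents never message each other directly -- they publish state to a central store, so a crashed agent can be respawned and recover its last published state.

```python
class StateDB:
    """Toy central state store in the spirit of a pub/sub system database."""

    def __init__(self):
        self.state = {}        # path -> last published value
        self.subscribers = {}  # path -> list of callbacks

    def publish(self, path, value):
        """An agent writes its state; interested agents are notified."""
        self.state[path] = value
        for callback in self.subscribers.get(path, []):
            callback(path, value)  # notification, not direct agent-to-agent IPC

    def subscribe(self, path, callback):
        self.subscribers.setdefault(path, []).append(callback)

    def snapshot(self, path):
        """A respawned agent reads back its last known state."""
        return self.state.get(path)

db = StateDB()
db.subscribe("agents/routing/status", lambda p, v: print(f"{p} -> {v}"))
db.publish("agents/routing/status", "up")

# If the routing agent crashes, the failure is contained to that agent;
# a fresh instance restores itself from the stored state:
print(db.snapshot("agents/routing/status"))  # up
```

The design choice is that the database, not the mesh of agent conversations, holds the truth -- so one leaking or crashing agent cannot cascade through its peers.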
The third [differentiator] is that we are open and programmable. You hear a lot of talk about SDN these days, and one has to separate the hype from the reality. The essence of SDN to me is, first of all, build open interfaces: allow your customers to write their applications to our APIs at the northbound level, and at the southbound level make your devices programmable. We didn't call it SDN back when we developed this, we called it EOS. The "extensible" in EOS refers to the operating system being very programmable. Every aspect of our software, whether it's at the hardware plane, the device plane or the software plane, can be programmed. That's a huge advantage. We find ourselves in a fortunate position that as the SDN market evolves, our network is already open, programmable and SDN-ready.
The fourth one is big data analysis. Data analysis and traffic visibility is becoming a real weakness because, as you know, we can all talk about improving price, performance and CAPEX, but the biggest cost center in networks is OPEX. There are three ways to solve OPEX issues: stop buying gear, outsource your gear or make your technology do better work. We believe technology to solve the problem is far better than outsourcing or throwing people at the problem. We call this "from A to Z analysis": we can do automation, we can do zero-touch provisioning, we can do a whole suite of functions here. Data is coming at such amazing speeds, structured and unstructured -- how do you sort out what's relevant, how do you monitor, how do you tap, how do you do real-time captures at 10 gigabits and beyond when the data is moving so fast? We're not just building enterprise features. Cisco's done that really well for the last two decades; that's their market. And yet if you look at the way servers are sold today, only half of them are going into an enterprise application. The other half, which are high-performance computing and Web, are going into cloud applications. They don't require traditional enterprise features. Just like mainframes moved to client/server, enterprises are moving to more HPC and Web, and those features are much more about reducing OPEX and improving orchestration, traffic visibility and data analysis.
The fifth and final differentiator is network virtualization. What VMware did to servers with server virtualization, we believe, jointly working with VMware, we can do with network virtualization. VM sprawl has created network sprawl. Arista and VMware, together with a number of other vendors -- Broadcom, Cisco, etc. -- defined what is to me one of the most breakthrough specifications in our industry: VXLAN, Virtual Extensible LAN. The VLAN, as a unit, is something we all grew up with and invented back in the '90s. It's been with us way too long. VLAN boundaries have plagued the deployment of virtualization because you're limited to roughly 4,000 VLANs, and you've got many more virtual machines than that. So therefore, you've had a VI admin manage one network, the virtual network, and the command-line or Cisco admin manage the physical network. These two worlds need to come together. Arista, working particularly closely with VMware, has been able to bridge that gap between the physical network and the virtual network using VXLAN. VXLAN all of a sudden opens up the boundary from roughly 4,000 to 16 million possible segments. So we're very excited with the technology we demonstrated at [the VMworld conference].
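The jump in segment count comes straight from the header field widths: an 802.1Q VLAN tag carries a 12-bit VLAN ID, while VXLAN's network identifier (VNI) field is 24 bits. A quick check of the arithmetic:

```python
VLAN_ID_BITS = 12    # 802.1Q VLAN ID field width
VXLAN_VNI_BITS = 24  # VXLAN Network Identifier (VNI) field width

vlan_segments = 2 ** VLAN_ID_BITS
vxlan_segments = 2 ** VXLAN_VNI_BITS

print(vlan_segments)   # 4096 possible VLANs
print(vxlan_segments)  # 16777216 (~16 million) possible VXLAN segments
print(vxlan_segments // vlan_segments)  # 4096x more isolated segments
```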
Is it deployed now in the market?
Very early. We are one of the first to come out with it. We showed it in August 2012, and we showed interoperability with VMware, EMC and F5. We shipped a product based on it, the Arista 7150, in November.
Say I'm a big Cisco installation today. When would I talk to Arista? What's the need that opens the door?
It could be project-based or it could be a strategy. When it's project-based, it's usually that you're deploying high-frequency trading or you need a high-performance compute solution, where InfiniBand and Ethernet typically get reviewed. Sometimes InfiniBand gets chosen because the supercomputer guys really like it, and other times it's high-density 10G Ethernet. Another application is big data. Storage is no longer just a Fibre Channel SAN -- you will start needing 10G storage for iSCSI, or more and more Hadoop clusters with direct-attached storage. That becomes another very interesting Arista project. Virtualization, the VM sprawl, is another. One more we're starting to see more of is huge media rendering, and video applications that are pushing the envelope of bandwidth. Where the application intersects the network is the common theme through all the projects.