Juniper switching boss talks technology challenges, Cisco Nexus 6000

Juniper's Jonathan Davidson says Virtual Chassis QFX ToRs and Microfabric pods interconnected with the new EX9200 will drive simplicity, automation


If we look at how customers have evolved over time, and talk through where we were with solving the biggest problems first: we came out with a 128-node system, and then last summer we launched the 16-node Microfabric. What we have found is that customers' thinking about what they call failure domains has evolved over the past five years. If you went to many customers five years ago, they would say, just give me a bigger and bigger switch. Many customers are still comfortable with 6,000 10G ports in a single domain. But there are certain customers who want a smaller failure domain. So they will purchase multiple Microfabrics for a single data center and then connect those Microfabrics together with the next layer of switching. Before the 9200 was available, one customer had multiple Microfabrics connected together through a [Juniper] MX [router]. They decided to collapse the core and data center edge together into one environment. Now we expect the 9200 to sit at that layer and offer that interconnect between multiple Microfabric pods. It wouldn't be an Interconnect per se, but it would be a switching layer between the two pods. You could also connect the Microfabrics together directly, if you wanted to. But we think most customers will likely have a second layer of switching on top.

Why wouldn't a 3000-G play that role?

It comes back to failure domains. Some customers simply want pod sizes of up to 768 10G ports. That's about their comfort level with a single failure domain. In the traditional two-tier architecture, that would be the aggregation box [supporting] up to 16 40G links going down. With Microfabric, it's all one level, but their comfort level is still defined by the number of 10G ports. So it comes down to: how many applications can I risk losing connectivity to at a given point in time? There's no technical reason why; it just comes down to their belief structure.
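For context, here is how the 768-port figure can fall out of the two-tier math in a small back-of-envelope sketch. Only the "up to 16 40G links going down" detail comes from the answer above; the 48-port top-of-rack switch and the single 40G uplink per ToR are illustrative assumptions, not Juniper specs.

```python
# Back-of-envelope pod sizing for the two-tier design described above.
# Assumptions (illustrative, not vendor specs): 48 x 10G server-facing
# ports per top-of-rack switch, one 40G uplink from each ToR to the
# aggregation box.
ports_per_tor = 48       # 10G server-facing ports per ToR (assumed)
uplinks_per_tor = 1      # 40G uplinks per ToR (assumed)
agg_40g_links = 16       # "up to 16 40G links going down" (from the interview)

tors = agg_40g_links // uplinks_per_tor
server_ports = tors * ports_per_tor            # 16 * 48 = 768, the pod size cited
oversub = (server_ports * 10) / (tors * uplinks_per_tor * 40)
print(f"{tors} ToRs -> {server_ports} x 10G ports, {oversub:.0f}:1 oversubscription")
# -> 16 ToRs -> 768 x 10G ports, 12:1 oversubscription
```

Under these assumptions the pod's failure domain is exactly the 768 server ports hanging off one aggregation box, which is the comfort threshold Davidson describes.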

You can use the G fabric as the interconnect and go all the way up to 6,000 ports of 10G. I can go from port 1 to port 5,560 with the same latency that I can go from port 1 to port 3. That's really compelling for them, because if I have multiple Microfabrics and I go through that second level of switching hierarchy, my latency is going to change. If I'm the network operations team, I can't guarantee the latency between all applications inside of my data center. That's really what customers ask themselves in determining whether they want the Microfabric or the G.
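A small sketch of that reasoning: within one fabric every port pair crosses the same number of switching stages, while traffic between two pods adds a layer (and therefore latency) at the interconnect. The hop counts and per-stage delay below are illustrative assumptions, not measured Juniper figures.

```python
# Illustrative hop-count model for the latency argument above.
HOP_NS = 1_000  # assumed per-switching-stage latency, in nanoseconds

def path_latency_ns(same_fabric: bool) -> int:
    # Inside one fabric, any port pair crosses the same stages (say 3),
    # so port 1 -> port 3 costs the same as port 1 -> port 5,560.
    intra_hops = 3
    # Crossing between pods traverses the second switching layer on top.
    extra_hops = 0 if same_fabric else 2
    return (intra_hops + extra_hops) * HOP_NS

print(path_latency_ns(True))   # any-to-any within one fabric: 3000 ns
print(path_latency_ns(False))  # pod-to-pod via the interconnect: 5000 ns
```

The point is not the absolute numbers but the non-uniformity: once some paths cross the second tier, the operations team can no longer quote one latency figure for all application pairs.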

We solved the biggest problem first, and since we launched the Microfabric we've seen significant traction in that particular space. The Microfabric actually fits the size of most customers' complete data centers. The majority of data centers today have fewer than 1,500 gig ports. You might then ask: do I need to buy a 6,000-port thing that I know I'll never scale to? Or am I OK with one or two Microfabrics?

So at first release, QFabric was a solution looking for a problem.

No, it was a solution for the largest of customers who really wanted any-to-any connectivity between a very large number of ports. Traction on the G continues to be very, very strong.

How's demand for single-tier?

I would say that demand for a single-tier solution and a fabric-based solution ... Customers don't think from a single-tier perspective, they think from an attributes perspective. What are the attributes I care about? I care about simplicity. Can you give me investment protection? I might want to go to a virtualized infrastructure in a year or two. I may want to go to an overlay infrastructure in a year or two, or three, or five. We want to make sure our fabric technologies give customers the best underlay for the overlay, and the best underlay for a virtualized environment. We have to make sure our customers have the greatest experience from an attribute perspective. So it's all selling; it depends upon which attributes the customer cares about more. As we take a simplified approach to our architectures and our building blocks -- Virtual Chassis on the QFX 3500 and 3600 -- you're going to have a clear and consistent path to flatter networks as time goes on.

On the Path to Flat, is single-tier ever applicable in the campus?

What we hear from our customers around campus is specifically around similar types of issues. They're not saying "I have 1000% growth in East/West traffic every three months." That's not the problem. But they do care about simplicity. And they do care about automation. As campus customers start to see some of the same issues, I do think some of them will start to move over. Hence the EX9200's applicability in the campus as well. So being able to take applications and services and run them on a common core platform matters. And if you think about access, the enterprise already has a wireless LAN SDN-type of solution. So what we want to do over time is actually bring those two elements together, which we talked about in our launch a few weeks ago. We see that as the first step toward making the campus environment a simpler place to do networking and network automation.

Where does your "Simply Connected" EX portfolio fit into all of this?

All EX platforms run Junos. Wherever we can put OpenFlow on these platforms, we absolutely will. The reason I qualify it with "wherever we can" is simply because we want to make sure we message to our customers appropriately. That said, we have publicly stated which platforms will have OpenFlow by the end of this year, and we've had OpenFlow out in demo version for well over a year. We have OpenFlow in a production network running 100G through our MXes. That same OpenFlow code carries over because it's Junos; it will run across the EX portfolio as well as the QFX portfolio at the same time. The team is hard at work making that happen, and it's simply a matter of time, not of will.
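As an illustration of the controller-side programmability that OpenFlow support enables, here is a minimal sketch of an OpenFlow 1.3 application using the open-source Ryu controller framework (Ryu is not mentioned in the interview; it is just one controller you could point such a switch at). It installs a table-miss rule on any connecting switch so unmatched packets are punted to the controller; nothing in it is Juniper-specific.

```python
# Minimal OpenFlow 1.3 app for the Ryu controller framework.
# Run with: ryu-manager this_file.py
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class TableMissApp(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        datapath = ev.msg.datapath
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser

        # Match everything at the lowest priority, so any real flow
        # entries installed later take precedence over this rule.
        match = parser.OFPMatch()
        actions = [parser.OFPActionOutput(ofproto.OFPP_CONTROLLER,
                                          ofproto.OFPCML_NO_BUFFER)]
        inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS,
                                             actions)]
        mod = parser.OFPFlowMod(datapath=datapath, priority=0,
                                match=match, instructions=inst)
        datapath.send_msg(mod)
```

Because the flow-programming protocol is standardized, the same controller application works whether the datapath underneath is an EX, a QFX, or an MX, which is the portability point Davidson is making about a single Junos-based OpenFlow implementation.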

Why not converge the programmability and logical scale of the EX9200 with the low latency, single-tier characteristics of QFabric in one platform?

All of [the programmability] of Junos Virtual Control is applicable to both. Over time, you shouldn't be surprised if you start to see a simplification of how things are going to go, a simplification of building blocks, a simplification of architectures, and a simplification of where we're heading. So, simplification is key.

The EX9200 is targeted at Cisco's Nexus 7000 "M," QFabric at the Nexus 7000 "F" -- what's targeted at the Nexus 6000?

We believe it's focused primarily on a very specific market in the financial sector, which predominantly cares about latency. When you look at customers who care more about simplicity, automation -- what can I see inside of the network? -- then you have to make other trade-offs inside of the silicon. I can keep memory on-chip, or I can put those tables outside the chip. Off-chip, my tables can be much, much bigger -- more logical scale: number of VLANs, number of routes, number of other things. But it means my chip is going to be a little bit slower, because I have to go off-chip, get what I need, and then come back onto the chip. In order to go down that low-latency path at the aggregation layer, you have basically said, "I am not going to care about large logical scale." There are also trade-offs from a visibility and reporting perspective, because you're not going off-chip and everything is on-chip.

So knowing what I know from their data sheets, and from what they're doing from a latency perspective, it's all on-chip, which means they've had to make some pretty tough choices around how much logical scale that box is going to be able to do. Customers who are in very large virtualized environments are going to run out of logical scale. I'm not saying that's the case with that platform; I'm telling you the trade-offs you have to make from a silicon perspective. We fundamentally believe that, in the kinds of environments the majority of data centers have today, customers want large amounts of logical scale, because of how VLANs are deployed today and because of the tight packing of virtual machines on servers. So the fundamental belief is that that aggregation box, the 6000, will be targeted at customers who care only about latency. There are other trade-offs they had to make to go into that market.
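The trade-off Davidson describes can be captured in a toy model. All of the numbers below are made-up assumptions for illustration, not vendor data-sheet figures: on-chip SRAM lookups are fast but capacity-limited, while off-chip table memory allows far more logical scale at a per-lookup latency cost.

```python
# Illustrative model of the on-chip vs. off-chip table trade-off.
# Every number here is an assumption for illustration only.
from dataclasses import dataclass

@dataclass
class SwitchSilicon:
    name: str
    table_entries: int   # logical scale: MAC/route/VLAN table capacity
    lookup_ns: float     # per-table-lookup latency contribution
    base_ns: float       # serialization, queuing, and other fixed costs

    def port_to_port_ns(self, lookups: int = 2) -> float:
        # e.g. one lookup on ingress and one on egress
        return self.base_ns + lookups * self.lookup_ns

on_chip = SwitchSilicon("all-on-chip (latency-optimized)",
                        table_entries=32_000, lookup_ns=5, base_ns=500)
off_chip = SwitchSilicon("off-chip tables (scale-optimized)",
                         table_entries=1_000_000, lookup_ns=50, base_ns=500)

for chip in (on_chip, off_chip):
    print(f"{chip.name}: {chip.table_entries:,} entries, "
          f"~{chip.port_to_port_ns():.0f} ns port-to-port")
```

The shape of the result is the argument itself: the latency-optimized design wins on nanoseconds but caps the number of VLANs, routes, and MAC entries, which is where densely virtualized data centers run out of headroom first.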

Is there any concern that your MX router customers will demand EX9200 prices since that switch is based on the MX?

These are two fundamentally different products. They certainly have some common components -- power supplies, fans, some of the other technology is similar from a DNA perspective -- but if you were to fire up an MX and look at the features and functions on one versus the other, they are vastly different. So it's not a one-to-one replacement of products. The MX does not have a lot of the Layer 2 features that are on the EX9200. If you fire up an EX9200, it's a switch. There are a number of features on the EX9200 that are just not on [the MX], because the MX is not a switch.

Will QFabric eventually be based on custom silicon, either new or re-purposed?

We are continuing to invest in both hardware and software for QFabric. And I'm looking forward to talking to you again later this year about all of the things we have coming on that platform.

Copyright © 2013 IDG Communications, Inc.
