
Juniper switching boss talks technology challenges, Cisco Nexus 6000

Juniper's Jonathan Davidson says Virtual Chassis QFX ToRs and Microfabric pods interconnected with the new EX9200 will drive simplicity, automation

By , Network World
April 23, 2013 02:25 PM ET
Jonathan Davidson

Network World - Jonathan Davidson took over Juniper campus and data center switching when the two previously separate business units were combined following the departure of founding engineer R.K. Anand. Davidson has a service provider routing background at Juniper and Cisco, which is no coincidence: after five years in switching, Juniper has been unable to mirror the success it had in its first five years in service provider routing. It did, however, start from zero and surpass at least six incumbent vendors to reach the No. 3 position in the market, and the company counts more than 20,000 switching customers cultivated through organic development, Davidson notes. As Juniper moves forward, facing a forklift upgrade of its EX core switch base and recovering from an initial misfire with the QFabric data center switch, it is focusing on customer demands for simplification, agility and automation. Davidson discussed recent and future developments in Juniper enterprise switching.

Why did Juniper combine the data center and campus units?

When you're fundamentally trying to change an industry that hadn't changed in 15 years or more, you need to make sure you have a high-performance team together and that it isn't distracted. So we created a business unit targeted at fundamentally disrupting the data center space, and that was our QFabric solution. But once you have that product out in the market, you get to a point where you want to find more synergies between these different organizations. We wanted to make sure we were able to leverage the best of the EX product portfolio as well as the innovation we saw, and continue to see, in the QFabric portfolio. In bringing them together, we are able to leverage the best of both and really enable our customers to have more choice.

But aren't the needs of the campus and data center drastically different?

If you look at the fundamental building blocks for technology and how we view things, I'm going to have to switch a Layer 2 packet whether I am in the data center or the campus. So why have two different stacks of technology that do almost the same thing? You're right that there are unique requirements for both; those determine how you package the systems together. Whether traffic runs east-west or north-south depends more on the construct of the system than on the underlying technology. Many customers use the same core switching platform for both their data center and campus environments. That's why customers have embraced our Virtual Chassis technology: they'll use the same Virtual Chassis in the campus and the data center.

So will EX and QFabric eventually share the same ASIC and code base?

Whether you use an EX platform or a QFabric platform, it's running Junos. It's about simplifying operations for our customers, and that can happen across any of the architectures, platforms or products our customers decide to go with. Looking out five to 10 years, we call it the Path to Flat: we truly believe almost every data center is going to be a flat data center, and we've translated flat to mean fabric. Any-to-any connectivity in the data center is important, and if you truly have a flat network, you can have deterministic latency. In simplifying the Path to Flat, one of the things we're going to do is bring these two technologies together. One of the things we're going to start talking to our customers about pretty shortly (we haven't gone broadly with this yet) is taking the Virtual Chassis technology that tens of thousands of customers have deployed and putting it onto our QFX top-of-rack switches.

What that means is I can start with a QFX ToR, have QFX at the top-of-rack and aggregation layers, and run that entire thing as a Virtual Chassis-based network. If I decide I want to go even more flat, I don't need to throw any boxes out and I don't need to re-cable; I simply change the software and the configuration and add the QFabric Director. Then I have a completely flat network with a centralized point of management, and I am able to grow from a few dozen 10G ports up to 6,000 10G ports without ripping and replacing any portion of my network.
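As a rough illustration of the starting point Davidson describes, here is a minimal, hypothetical Junos-style sketch of a preprovisioned Virtual Chassis built from QFX top-of-rack switches. The member serial numbers are placeholders, and the exact statements supported vary by QFX model and Junos release.

    virtual-chassis {
        preprovisioned;
        member 0 {
            role routing-engine;
            serial-number ABC0001;
        }
        member 1 {
            role routing-engine;
            serial-number ABC0002;
        }
        member 2 {
            role line-card;
            serial-number ABC0003;
        }
    }

In a sketch like this, two members act as routing engines (master and backup) while additional top-of-rack switches join as line cards; the migration Davidson outlines would then layer the QFabric Director on top as the single point of management.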

If you have our Microfabric [the QFabric 3000-M Interconnect], you are able to go from zero to 768 10G ports; the QFX can act as the interconnect as well, and you can grow with that. We think the interconnect is a critical component, and the [Broadcom] silicon family we're using today will be able to continue into the future. We will use the most advantageous silicon for our customers. What's important to them is simplicity; at the end of the day, 98% of our customers don't care what silicon is in the platform. They want to make sure we're meeting their requirements, that it's simple for them to use, and that they get the right price and performance.

What about Virtual Chassis for the QFabric Interconnect?

We'll be talking more about that at the end of the year.

[ THE BIG PICTURE: Juniper CEO Johnson talks software, the company's recent challenges and key future directions ]

What's selling more or in greater demand: the QFabric 3000-G Interconnect or the 3000-M?

One of the things we have found is that Juniper always tackles the hardest problems first, and I don't think it always gets the credit for doing that. Solving the hardest problems isn't necessarily solving the sexy problems. When we went out to fundamentally change the way data center networks have been built for the past two decades, we came out with our QFabric single-tier solution, and we decided to come out with a solution that scaled to more than 6,000 10G ports in a single fabric. We could easily have come out with the smaller fabric first. But when you start to look at the logical scale issues, the issues that have to do with keeping 128 nodes all in sync at the same time ... if you solved the small problem first you would have run into scaling incrementalism over time, and it would have taken us much, much longer to get to the scale that's necessary. That's one of the fundamental reasons we haven't seen any other vendor in our space come out with anything that looks remotely like this. The problem we solved was a hard one.

Multichassis is pretty hard to do. Think of QFabric as a 128-node multichassis system that acts as a common, single fabric. That's the scale of the problem we solved, and when you look at what QFabric actually did, all of its components and how it fits together, I'd call it SDN Version 1. You have an external director controlling the various nodes; you have an interconnect it can control as well; and you can provision everything through a single point of management, with an out-of-band control plane. When we started building this there was no term called SDN. We solved the problem internally with all open, standards-based protocols; we use BGP to communicate inside of the fabric. SDN Version 2 from Juniper is going to be a combination of SDN Version 1 plus some of the things Bob Muglia mentioned around 6-4-1, and obviously the Contrail controller is going to be a big portion of how all of this fits together into what I call SDN Version 2.
