Cisco's data center guru talks direction, strategy

John McCool addresses everything from market shifts to computing paradigms and service integration.

As a 13-year Cisco veteran, John McCool, senior vice president and general manager of Cisco's Data Center Switching and Services Group, has seen a boatload of change. He is responsible for the strategy, engineering and marketing of Cisco's family of enterprise Ethernet switching solutions, including the Catalyst series, the Nexus data center switches and the MDS storage area network line. Network World Editor in Chief John Dix and Managing Editor Jim Duffy recently got McCool on the phone to find out what he sees coming down the pike.

You face renewed competition on many fronts and the core technologies continue to evolve. How do you see the market changing?

It has always been a highly competitive market, and we have tried to set the pace through innovation focused on convergence of solutions over IP and the development of services on top of that. That strategy has not wavered at all. For example, we integrated layer 3 technologies with layer 2 and showed the value to the marketplace. And we showed how we could integrate TDM networks with voice over IP and led in that architectural innovation. What has changed are the frontiers. There is the convergence of compute and storage transport over IP. There is the focus on the integration of wired and wireless technologies over switching fabrics, and the integration of security that comes with that. So, there are constantly new ways to apply innovation.

Data center network brawl

HP and 3Com have stepped up their competitive efforts, both of them pushing a value story, and Juniper is making more noise in the data center. What kind of market pressure is all of this putting on you guys?

We've seen competitors ebb and flow in terms of their focus on this marketplace. Certainly we see a lot of competition that is strictly focused on price, and while that is one important part of competition, I would submit that value has a lot to do with your ability to simplify customer operations.

Look at the longevity of our 6500. We have had multiple product transitions within that platform. Customers running large networks have all three generations of that product, have a consistent set of operations across them, and have the ability to integrate services, and that brings tremendous value.

How sensitive are customers to pricing, particularly in this economic environment?

Companies crunched for capital begin to look at near-term capex and sometimes lose focus on long-term opex. We've seen more focus on the low end of our product lines, the Nexus 2000 and the Nexus 4000, but as things start to pick up we begin to see more of the normal pattern, where people look at the network broadly and at integrated systems that are going to drive their productivity and growth.

How do you see the evolution of virtualization and the emergence of this cloud stuff changing data center network design?

If you look at what we did with our Unified Computing System (UCS) and Nexus, we placed a big bet about four years ago that virtualization was going to be a very significant market transition, and I think we called it right. It would have been easy to just continue the evolution of our 6500 family, but we felt strongly that we needed to look at how compute evolves to deal with virtualization, and how the network fabric and compute become very integral.

Now we're seeing customers looking at Nexus as a way to scale virtual machines beyond the single server. They are starting to think: how can I scale this to a rack of servers, to multiple racks of servers, and ultimately move virtual machines from data center to data center? Our own internal IT group is thinking about this technology as a way to migrate applications across a broad infrastructure of network, compute and storage, so they can take equipment out of operation without scheduling maintenance. That is a huge win.

The networks have to be designed to deal with virtual machine mobility in a fundamental way. So, how does my network policy migrate with those virtual machines? That is an architectural challenge we have taken up with our Nexus 1000V, which lets the policy move with the virtual machine. All the [access control lists] and control points that our network administrators have come to trust in their physical designs can now be applied at the virtual machine level.
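For illustration, the kind of policy profile McCool describes might look something like this in NX-OS-style syntax on the Nexus 1000V; the profile name, VLAN and ACL name here are hypothetical placeholders, not a configuration from Cisco:

    port-profile type vethernet WEB-TIER
      vmware port-group
      switchport mode access
      switchport access vlan 100
      ip port access-group WEB-ACL in
      no shutdown
      state enabled

Because the profile binds to the virtual interface rather than to a physical switch port, the VLAN and ACL assignments follow the virtual machine when it moves between hosts.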

But you have to think about how services connect to these virtual machines. We are just starting to see the first wave of this, but I think this is going to be a huge trend for the next three to five years.

You mentioned UCS. Give us the elevator pitch, why should Cisco be one of my compute suppliers?

A lot of people scratched their heads and looked at this as yet another entry into the blade market. But I come back to virtualization being a very fundamental shift. We think the existing blade market did a nice job on what I would call mechanical innovations. Improving power, improving cooling, reducing cabling, etc. But there was an opportunity to take that a step further.

As you see the explosion of multi-core processors, the only way to take advantage of them effectively, without rewriting a lot of applications, is through virtualization. But the challenge becomes: what is the architecture of the I/O, the connection of those virtual machines on those servers to the network? That is basically a network problem, and that is where we have provided some foundational innovation. The investment we made in Fibre Channel over Ethernet to converge the fabric with an industry-standard approach was a key component of UCS. So, it fits into this entire network-based data center architecture that we have come up with.
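One concrete consequence of that convergence: a full-size Fibre Channel frame does not fit in a standard Ethernet payload, which is why FCoE fabrics run larger "baby jumbo" frames. A rough back-of-the-envelope check in Python (our own sketch, using frame sizes from the Fibre Channel and FC-BB-5 specifications, with the encapsulation overhead approximated):

    FCOE_ETHERTYPE = 0x8906        # EtherType assigned to FCoE

    fc_frame_max = 24 + 2112 + 4   # FC header + max data field + CRC, bytes
    fcoe_encap = 14 + 4            # FCoE header incl. SOF + EOF/reserved (approx.)
    needed = fc_frame_max + fcoe_encap

    print(f"Largest encapsulated FC frame: {needed} bytes")        # 2158
    print(f"Fits a standard 1500-byte payload? {needed <= 1500}")  # False
    # Hence the roughly 2500-byte "baby jumbo" MTU on FCoE links, plus
    # lossless Ethernet (priority flow control), since dropped frames
    # would break Fibre Channel's delivery assumptions.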

Isn't it harder for you to enter computing than it is for the big computing guys to add networking?

It really depends on where you think the puck is going and whether there is an innovation vector involved. If you believe that computing is Intel-based, commodity, white-label stuff, you even question the value existing vendors bring. And maybe part of the market does go that direction. But if you look at what happened with blades in the last three to five years, people began to innovate in terms of the system architecture and, while we think that was a step in the right direction, we don't think they went far enough.

We believe you can design systems and products that work better together, that are based on industry standards, that provide value to the customer and diminish the total cost of ownership, diminish the need for integration services at the product level, and allow the customer to spend service money on services that help integrate those products and systems into business processes. They don't want to spend time and money just integrating 20 blades. They expect that to work out of the box. So that is the opportunity for Cisco.

I would contend that there are huge areas of networking that haven't even been scratched by the system vendors coming into this space. Where is your BGP support? What is your IPv6 strategy? How are you dealing with MPLS, VPLS? Oh, by the way, there are new standards coming at layer 2 to deal with virtual machine mobility. This is a complicated space, and our customers' networks range from modern networks built in the last three years to legacy networks built 10 or 15 years ago, and they expect technologies they can bring in, adapt, and migrate to over some period of time.

You say standards-based, yet when we asked HP about UCS recently they called it "a closed architecture with proprietary compute technologies."

Let's face it, all blade systems have been closed. IBM, HP, you buy the blade from that vendor and put it into their rack, right? And if you look at something like the HP BladeSystem c-Class with Virtual Connect, even the network connection has been proprietary, and now you see HP respond with industry-standard FCoE out of the ProCurve division. So what is the right architectural approach? It is posing a quandary for their customers.

Our architecture is based on a unified fabric. With a UCS system, you could take white-label servers, connect them over FCoE, and have a consistent network-based architecture built on industry standards, IP and Ethernet. I don't think the system suppliers are used to competing in an open market. They have a model that requires certification of storage technologies and applications over their closed systems. IP has always been based on open systems. You plug a NAS device or a camera into IP and you expect it to work, right, whether it is my switch or someone else's. This is a fundamental shift for the system suppliers.

Are there any plans to support third-party blades within the UCS enclosure?

Nothing I can announce today.

HP bought EDS. Dell just bought Perot Systems. What is Cisco doing to ramp up its professional services capabilities for the data center transformation opportunity?

The question here is, is this vertical integration of services the appropriate thing for customers? What we hear from customers is a concern that they might not be getting the best technologies. We believe in moving through partners that can provide a great deal of focus on specific customers, specific verticals, be very intimate with their business needs, and take our products and leverage them to drive a higher degree of business value.

Are you significantly increasing your investment or staff in professional services?

We have a great deal of focus around Nexus and UCS in terms of that migration, and a lot of investments, not just in our own services, but partner enablement, which I think is a fundamentally different approach.

It's hard to talk about all this without bringing up cloud computing, and John Chambers has called security in cloud computing a nightmare. Do you have any broad architectural initiatives under way to address this?

Absolutely. We look at new challenges that come with transitions as an opportunity to innovate. In the physical world, what people do is bind their network configuration to their security solution. In clouds, the binding of security policy has to be much more dynamic and able to recognize application mobility, and we think that fundamentally fits a network type of approach. If you think about routing, this is a distributed configuration type of problem. This challenge around mobility and security lends itself to a network-based approach.

So we will see an architecture strategy from Cisco specifically for securing that cloud and virtualized environment?

Absolutely. Not only the challenges of securing it, but balancing load across data centers and making the network aware of application mobility. It is a huge architectural challenge. And tying this back to the question about competing on price: unless you are investing R&D in these new challenges, you won't be able to sustain yourself in the switching business. We've seen several of these transitions over the years.

How about on the disaster recovery side? Do cloud computing and a mobile VM environment make that more challenging?

If it can all be made to work, the technologies offer tremendous advantages. A lot of [disaster-recovery] environments today are designed for pure failover, and the state of both data centers has to be maintained. Networks have to be configured the same, servers configured the same, so applications can just boot over. With virtualization, the hardware is abstracted in both data centers, so it could be much simpler.

The advantage of a virtualized environment is from the application perspective. If you are looking out from the server, things look the same. The view of your network, the view of your storage resources look consistent on old and new technologies. The challenge has been that, if you are looking from the network in, you have no visibility into these virtual machines. You can't see them. You see the physical port. What we have done with the Nexus architecture is to provide transparency at the virtual machine level, and that is fundamental if you are going to move into this new paradigm.

Regarding another data center topic, 10G, where would you say we are in the migration at this point?

In the data center the adoption of 10 gig between server access switches and either the data center core or distribution switches is happening in full force. As people start to focus on collaboration technologies, multimedia and video moving to the desktop, we are beginning to see links between access switches and the campus core or distribution switches migrate from gigabit Ethernet to 10G. I wager the 10G market is going to pick up.

We are also starting to see interest in the new 10-gig-to-the-server technologies, and that is another area where we'll see a lot of growth in the next two to three years.

Do you have a rule of thumb about how many ports of Gig-E you need before it makes sense to upgrade to 10 gig?

There is probably a cabling cost trade-off somewhere around four Gig-E ports. The other dynamic I would point to is the collapsing of block-based Fibre Channel networks onto 10G using Fibre Channel over Ethernet (FCoE). This whole concept of unified fabric is important not only from a cost perspective but from an architectural perspective, and what I mean by that is, if you have unified I/O in your data center, any application anywhere can support file-based or block-based storage, intrinsically.
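To see how such a rule of thumb might fall out, here is a hypothetical break-even calculation in Python; the per-port and cabling prices are invented placeholders, not Cisco list prices:

    # Break-even sketch: when does one 10G server link undercut N
    # aggregated GbE links? All prices are invented placeholders.
    GE_LINK = 120 + 20     # assumed GbE port cost + cable cost
    TENG_LINK = 480 + 60   # assumed 10G port cost + cable cost

    def cheaper_on_10g(n_ge_links: int) -> bool:
        """True when one 10G link costs less than n GbE links."""
        return TENG_LINK < n_ge_links * GE_LINK

    for n in range(1, 7):
        print(n, cheaper_on_10g(n))
    # With these placeholder numbers the crossover lands at four GbE
    # links, consistent with the rule of thumb above.

Folding the cost of separate Fibre Channel host bus adapters into the same 10G link via FCoE, as McCool notes, shifts the break-even point further in 10G's favor.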
