As a 13-year Cisco veteran, John McCool, senior vice president and general manager of Cisco's Data Center Switching and Services Group, has seen a boatload of change. He is responsible for the strategy, engineering and marketing of Cisco's family of enterprise Ethernet switching solutions, including the Catalyst series, the Nexus data center switches and the MDS storage area network line. Network World Editor in Chief John Dix and Managing Editor Jim Duffy recently got McCool on the phone to find out what he sees coming down the pike.
You face renewed competition on many fronts and the core technologies continue to evolve. How do you see the market changing?
It has always been a highly competitive market, and we have tried to set the pace through innovation focused on convergence of solutions over IP and the development of services on top of that. That strategy has not wavered at all. For example, we integrated layer 3 technologies with layer 2 and showed the value to the marketplace. And we showed how we could integrate TDM networks with voice over IP and led in that architectural innovation. What has changed are the frontiers. There is the convergence of compute and storage transport over IP. There is the focus on the integration of wired and wireless technologies over switching fabrics, and the integration of security that comes with that. So, there are constantly new ways to apply innovation.
HP and 3Com have stepped up their competitive efforts, both pushing a value story, and Juniper is making more noise in the data center. What kind of market pressure is all of this putting on you guys?
We've seen competitors ebb and flow in terms of their focus on this marketplace. Certainly we see a lot of competition that is strictly focused on price, and while that is one important part of competition, I would submit that value has a lot to do with your ability to simplify customer operations.
Look at the longevity of our 6500. We have had multiple product transitions within that platform. Customers running large networks have all three generations of that product, have a consistent set of operations across them, have the ability to integrate services, and that brings tremendous value.

How sensitive are customers to pricing, particularly in this economic environment?

Companies crunched for capital begin to look at near-term capex and sometimes lose focus on long-term opex. We've seen more focus on the low end of our product lines, the Nexus 2000 and the Nexus 4000, but as things start to pick up we begin to see more of the normal pattern, where people are looking at the network broadly and looking at integrated systems that are going to drive their productivity and growth.

How do you see the evolution of virtualization and the emergence of this cloud stuff changing data center network design?
If you look what we did with our Unified Computing System (UCS) and Nexus, we placed a big bet about four years ago that virtualization was going to be a very significant market transition and I think we called it right. It would have been easy to just continue the evolution of our 6500 family, but we felt strongly that we needed to look at how compute evolves to deal with virtualization, and how the network fabric and compute become very integral.
Now we're seeing customers looking at Nexus as a way to scale virtual machines beyond the single server. They are starting to think, how can I scale this to a rack of servers, to multiple racks of servers, and ultimately, move virtual machines from data center to data center. Our own internal IT group is thinking about this technology as a way to migrate applications across a broad infrastructure – network, compute, storage – so they can take equipment out of operation without scheduling maintenance windows. That is a huge win.
The networks have to be designed to deal with virtual machine mobility in a fundamental way. So, how does my network policy migrate with those virtual machines? That is an architectural challenge that we have taken up with our Nexus 1000V. It can move with the virtual machine. All the [access control lists] and control points that our network administrators have come to trust in their physical designs now can be applied at the virtual machine level.
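The idea of network policy that travels with a virtual machine can be sketched conceptually. The following Python toy is not Cisco code; the class names, profile name and ACL string are invented for illustration. It models the key point: the policy is bound to the VM itself rather than to a physical switch port, so it follows the VM across hosts.

```python
# Conceptual sketch (not Cisco code) of policy following a VM, as with
# the Nexus 1000V: the port profile is attached to the VM, not the port.

class PortProfile:
    """A named bundle of policy, e.g. a VLAN and ACLs (illustrative only)."""
    def __init__(self, name, vlan, acls):
        self.name, self.vlan, self.acls = name, vlan, acls

class VirtualMachine:
    def __init__(self, name, profile):
        self.name = name
        self.profile = profile   # policy travels with the VM object
        self.host = None

def migrate(vm, new_host):
    """Move a VM to another host; its profile (and ACLs) move with it."""
    vm.host = new_host
    return vm.profile            # the same policy applies on the new host

web_policy = PortProfile("web", vlan=10, acls=["permit tcp any any eq 80"])
vm = VirtualMachine("web-01", web_policy)
migrate(vm, "host-A")
profile_after = migrate(vm, "host-B")
assert profile_after is web_policy   # policy unchanged across migration
```

In the physical-port model, by contrast, the ACLs would be keyed by switch port and would be left behind when the VM moved.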
But you have to think about how services connect to these virtual machines. We are just starting to see the first wave of this, but I think this is going to be a huge trend for the next three to five years.
You mentioned UCS. Give us the elevator pitch, why should Cisco be one of my compute suppliers?
A lot of people scratched their heads and looked at this as yet another entry into the blade market. But I come back to virtualization being a very fundamental shift. We think the existing blade market did a nice job on what I would call mechanical innovations. Improving power, improving cooling, reducing cabling, etc. But there was an opportunity to take that a step further.
As you see the explosion of multi-core processors, the only way to take advantage of them effectively, without rewriting a lot of applications, is through virtualization. But the challenge becomes, what is the architecture of the I/O, the connection of those virtual machines on those servers to the network? That is basically a network problem, and it is where we have provided some foundational innovation. The investment we made in Fibre Channel over Ethernet to converge the fabric with an industry-standard approach was a key component of UCS. So, it fits into this entire network-based data center architecture that we have come up with.
Isn't it harder for you to enter computing than it is for the big computing guys to add networking?
It really depends on where you think the puck is going and if there is an innovation vector involved. If you believe that computing is Intel-based, commodity, white label things, you even question the value existing vendors bring. And maybe part of the market does go that direction. But if you look at what happened with blades in the last three to five years, people began to innovate in terms of the system architecture and, while we think that was a step in the right direction, we don't think they went far enough.
We believe you can design systems and products that work better together, that are based on industry standards, that provide value to the customer and diminish the total cost of ownership, diminish the need for integration services at the product level, and allow the customer to spend service money on services that help integrate those products and systems into business processes. They don't want to spend time and money just integrating 20 blades. They expect that to work out of the box. So that is the opportunity for Cisco.
I would contend that there are huge areas of networking that haven't even been scratched by the system vendors coming into this space. Where is your BGP support? What is your IPv6 strategy? How are you dealing with MPLS and VPLS? Oh, by the way, there are new standards coming at layer 2 to deal with virtual machine mobility. This is a complicated space, and our customers' networks range from modern ones built in the last three years to legacy networks built 10 or 15 years ago, and they expect technologies they can bring in, adapt, and migrate to over some period of time.
You say standards-based, but when we asked HP about UCS recently they called it "a closed architecture with proprietary compute technologies."
Let's face it, all blade systems have been closed. IBM, HP, you buy the blade from that vendor and put it into their rack, right? And if you look at something like the HP c-Class with Virtual Connect, even the network connection has been proprietary, and now you see HP respond with industry-standard FCoE out of the ProCurve division. So what is the right architectural approach? It is posing a quandary for their customers.
Our architecture is based on a unified fabric. With UCS, you could take white label servers, connect them over FCoE, and have a consistent network-based architecture built on industry standards, IP and Ethernet. I don't think the system suppliers are used to competing in an open market. They have a model that requires certification of storage technologies and applications over their closed systems. IP has always been based on open systems. You plug a NAS device or a camera into an IP network and you expect it to work, right, whether it is my switch or someone else's? This is a fundamental shift for the system suppliers.
Are there any plans to support third-party blades within the UCS enclosure?
Nothing I can announce today.
The question here is, is this vertical integration of services the appropriate thing for customers? What we hear from customers is a concern that they might not be getting the best technologies. We believe in moving through partners that can provide a great deal of focus on specific customers, specific verticals, be very intimate with their business needs, and take our products and leverage them to drive a higher degree of business value.
Are you significantly increasing your investment or staff in professional services?
We have a great deal of focus around Nexus and UCS in terms of that migration, and a lot of investments, not just in our own services, but partner enablement, which I think is a fundamentally different approach.
It's hard to talk about all this without bringing up cloud computing, and John Chambers has called security in cloud computing a nightmare. Do you have any broad architectural initiatives under way to address this?
Absolutely. We look at new challenges that come with transitions as an opportunity to innovate. In the physical world, what people do is bind their network configuration to their security solution. In clouds the binding of security policy has to be much more dynamic, be able to recognize application mobility, and we think that fundamentally fits into a network type of approach. If you think about routing, this is a distributed configuration type of problem. This challenge around mobility and security lends itself to a network-based approach.
So we will see an architecture strategy from Cisco specifically for securing that cloud and virtualized environment?
Absolutely. Not only the challenges of securing it, but balancing load across data centers and making the network aware of application mobility. It is a huge architectural challenge. And tying this back to the question about competing on price, unless you are investing R&D on these new challenges, you won't be able to sustain yourself in the switching business. We've seen multiple of these transitions over the years.
How about on the disaster recovery side, does cloud computing and a mobile VM environment make that more challenging?
If it can all be made to work, the technologies offer tremendous advantages. A lot of [disaster-recovery] environments today are designed for pure failover, and the state of both data centers has to be maintained. Networks have to be configured the same, servers configured the same, so applications can just boot over. With virtualization, the hardware is abstracted in both data centers, so it could be much simpler.
The advantage of a virtualized environment is from the application perspective. If you are looking out from the server, things look the same. The view of your network, the view of your storage resources look consistent on old and new technologies. The challenge has been that, if you are looking from the network in, you have no visibility into these virtual machines. You can't see them. You see the physical port. What we have done with the Nexus architecture is to provide transparency at the virtual machine level, and that is fundamental if you are going to move into this new paradigm.
Regarding another data center topic, 10G, where would you say we are in the migration at this point?
In the data center the adoption of 10 gig between server access switches and either the data center core or distribution switches is happening in full force. As people start to focus on collaboration technologies, multimedia and video moving to the desktop, we are beginning to see links between access switches and the campus core or distribution switches migrate from gigabit Ethernet to 10G. I wager the 10G market is going to pick up.
We are also starting to see interest in the new 10 gig to the server technologies, and that is another area where we'll see a lot of growth in the next two to three years.
Do you have a rule of thumb about how many ports of Gig-E you need before it makes sense to upgrade to 10 gig?
There is probably a cabling cost trade-off that kicks in somewhere around four Gig-E ports. The other dynamic I would point to is the collapsing of block-based Fibre Channel networks onto 10G using Fibre Channel over Ethernet (FCoE). This whole concept of unified fabric is important, not only from a cost perspective but from an architectural perspective, and what I mean by that is, if you have unified I/O in your data center, any application anywhere can support file-based storage or block-based storage, intrinsically.
If you converge your I/O at the compute resource you can save about 8% of the power in a data center. That might not sound like a lot, but customers tell us the network consumes only 15% to 20% of the power in any given center, so it is a significant reduction. In our own environment, when we moved to unified I/O, we eliminated about 4,800 cables. The reduction of waste, and the inherent reliability gain from reducing that cabling, is very significant.
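McCool's percentages can be sanity-checked with some back-of-the-envelope arithmetic. In this sketch, the 8% savings and the roughly 15% to 20% network share come from his remarks; the 1 MW total is an assumed example figure, not from the interview.

```python
# Back-of-the-envelope check of the quoted figures. The 8% savings and the
# ~18% network share come from the interview; the 1 MW total is assumed.
total_kw = 1000.0        # assumed 1 MW data center
network_share = 0.18     # network: ~15-20% of total power
io_savings = 0.08        # converged I/O saves ~8% of total power

network_kw = total_kw * network_share   # power drawn by the network
saved_kw = total_kw * io_savings        # power saved by converging I/O

# Relative to the network's own draw, the saving is large, which is
# the point McCool is making:
fraction_of_network = saved_kw / network_kw
print(f"Saved {saved_kw:.0f} kW, about {fraction_of_network:.0%} of network power")
```

Under these assumed numbers, an 8-point saving against an 18% network share amounts to nearly half the network's own power draw, which is why a figure that "might not sound like a lot" is significant.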
How do you see FCoE ramping up?
The standard was ratified in June and industry support has been accelerating. There are two legs to this adoption journey. The first leg gets you 75% to 80% of the benefit by converging Fibre Channel over Ethernet between the server and the access switch. Our architecture connects servers to our Nexus 5000 access switch, which in turn splits the Fibre Channel traffic to the traditional SAN devices through our MDS, and the IP traffic through our Nexus 7000. So the majority of the benefit - the cabling benefit I just spoke of, the power reduction - all of it is in that first connection from the server to the switch.
The second leg is for the many customers that will take some time before they upgrade their arrays and their disk storage from native Fibre Channel to FCoE. This architecture allows them to have a staged migration strategy, but gets huge benefits on day one.
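The first-leg design McCool describes, with one converged link from the server that the access switch splits into SAN and LAN paths, can be illustrated with a toy dispatcher. This is a conceptual Python sketch, not a protocol implementation; the frame fields and device names are invented for illustration.

```python
# Toy illustration (not a real protocol implementation) of first-leg FCoE:
# the server sends all traffic over one converged Ethernet link, and the
# access switch (the Nexus 5000 in Cisco's architecture) separates
# FCoE-encapsulated storage frames from ordinary IP traffic.

def split_at_access_switch(frames):
    """Send storage frames toward the SAN, everything else toward the LAN."""
    san, lan = [], []
    for frame in frames:
        (san if frame["type"] == "fcoe" else lan).append(frame)
    return {"to_san": san, "to_lan": lan}

converged_link = [
    {"type": "fcoe", "payload": "SCSI write"},
    {"type": "ip",   "payload": "HTTP GET"},
    {"type": "fcoe", "payload": "SCSI read"},
]
out = split_at_access_switch(converged_link)
assert len(out["to_san"]) == 2 and len(out["to_lan"]) == 1
```

The point of the sketch is that the server side needs only the single converged connection, while everything downstream of the access switch, including native Fibre Channel arrays behind an MDS, can stay as it is during the second leg of the migration.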
What role does MDS play in that migration? How do you migrate the installed base to FCoE while protecting the customer's investments in MDS?
As I said earlier, servers will be the first to move toward FCoE. Since not all customers will upgrade everything at once, they can connect their existing MDS to our Nexus 5000 and keep their legacy storage arrays intact. So, MDS plays a critical role in maintaining the Fibre Channel storage-area networks in an enterprise, allowing customers to keep their existing Fibre Channel investment. The other thing unique about MDS is our platform strategy. Rather than upgrading to a new chassis every time we move from two- to four- to eight-gig Fibre Channel, we can upgrade the existing chassis and protect that installed base. That is a very different approach from what we see in the industry, and it is something that has served us well in the IP world.
How much of that storage connectivity in MDS will be subsumed eventually by the Nexus line?
I think you're poking at the migration time frame from Fibre Channel to FCoE. It is tough to predict how quickly this will migrate, but I do see the growth of that overall market coming in the FCoE component.
Will the Nexus eventually assume all of the capabilities of the MDS line?
There will be customers who will look for pure Fibre Channel connectivity that we'll provide in the MDS, and folks who will look for FCoE as we have provided in the Nexus 5000.
So, will MDS be around as long as native Fibre Channel is around?
We're not going to predict life spans, but the exact same approach we took with the 6500, in terms of migration to incremental speeds and incremental capabilities, is our fundamental strategy with MDS: being able to continue to upgrade that product without chassis replacement. I think that is going to be important as you start to see these storage technologies collapse onto IP.
Rumor has it that the secret Alpine Project is a joint venture between Cisco and EMC to develop integrated compute/storage products and services for data centers. Can you discuss that?
No, I can't.
Can you confirm that it exists?
No, I can't. I can't comment.
Do you have any plans to integrate a storage array directly in UCS?
There is nothing to announce today on storage.
Great. Anything in closing here that you would like to get on the table that we didn't happen to address?
As we have gotten into a large number of market adjacencies, people have questioned whether we've lost our focus. I want to set the record straight: the foundation of our business is routing and switching. As we get into things like connected real estate, digital signage and physical security, these are multi-million-dollar opportunities, but switching and routing is the core of everything we do, and fundamentally we believe switching and routing is an innovation proposition. It is continuing to change. We have driven a lot of those transitions over a long period of time, and we feel we are extremely well positioned to continue that in the future.
What about the Flip digital camcorder? How does buying the company that makes that fit into the vision?
One of the things you have to do as a market leader is drive the vision of how your technology is going to be used. I think people were scratching their heads when we introduced TelePresence. Now, as you see wide deployments of TelePresence, people understand the connection. If you look at the Flip, at how video can be used on a personal basis, at how you can connect it to enterprise Web 2.0 technologies and use that as a communication medium, I predict you will see more personal use of video as collaboration starts to build out. But somebody has to drive it.
Are you saying that with a straight face?
I am. I really am.
Alright. We can see that technology driving traffic demands, but why you need to be in that business is beyond me. We'll leave it at that.