
HP Networking finally rolls out a data center switch

After a long wait, HP re-ups its data center switch offerings.

The data center is where all the action has been in networking over the past few years. We've seen the introduction of the network fabric, the rise of software-defined networking (SDN), the emergence of a number of startups, and a fair bit of M&A activity as well. Because of this rapid evolution, almost every major network vendor – Cisco, Brocade, Juniper, Extreme, Avaya, Alcatel-Lucent and others – has revamped its data center portfolio.

The one vendor that I thought was noticeably absent from the data center networking wars was HP. The company outlined its FlexFabric vision last year, but the only products it had to support the related architecture were the 10500, which is a campus switch, and the 12500, which was great when H3C first released it but was getting a bit old even when HP acquired H3C. Now, it's clearly past its prime. The company has positioned the 12500 as a data center switch and has beefed up the feature set accordingly. The 12500 now supports Ethernet Virtual Interconnect (EVI), SPB and other data center features. As of now, it's limited to 10 Gig-E, but HP has stated that 40/100 Gig-E will be available later this year. Despite the added features, though, HP is the vendor I get the least amount of inquiry on regarding data center networking.


Today, HP Networking finally released a set of switches aligned with current data center trends. Specifically, the company announced:

  • FlexFabric 5900 top-of-rack switch. HP has added EVB and VEPA to its existing line of 5900 top-of-rack switches to extend advanced networking features to the hypervisor. The 5900 is a full Layer 2/3, low-latency switch and is a strong addition to the line of ToR switches.
  • FlexFabric 11908 data center aggregation switch. I thought this was an interesting product, as the company chose to implement both TRILL and SPB. It also supports FCoE and DCB, can scale up to 64 40 Gig-E ports, and is the first switch I know of to support OpenFlow 1.3.
  • FlexFabric 12900 data center core switch. There are two variants, the 12916 and the 12910. This switch is made for speed, with a capacity of 36 Tbps, and can scale up to 256 40 Gig-E ports.
  • HSR 6800 router. When I saw this in the press deck, it surprised me a little, as I've never really thought of HP as a company that understands routing. The 6800 is almost a carrier-class router, with a 2-Tbps backplane and support for 32 10 Gig-E router ports.

From what I understand, the H3C business unit that HP acquired a few years back built these products. About a month ago, I ran across this page from the H3C website.

If you cut and paste the text into Google Translate, it actually does a pretty good job, and the H3C 12516-X appears to be the 12916. The product is also referenced on this site, where it was run through some tests.

I bring these up because they somewhat answer the question of why HP Networking has been so absent in data center networking over the past few years. It wasn't really absent; the company just released the products into the China market first. If this seems odd to you, it shouldn't. Why does Cisco release all of its products into the U.S. first? It's Cisco's home market. Anything that comes out of H3C will be released into its home market, which is China. This should also give customers some confidence that the products have been bug-tested and used in production environments.

While I thought HP did a nice job with these products, the announcement does bring up a couple of questions. First, what does this mean for the older H3C products, particularly the 12500? If the 10500 is the campus switch and these are the new data center products, it brings into question how customers should position the 12500 versus the 12900. I may be wrong about this, but I believe the older-generation H3C switches were Marvell-based and these are Broadcom-based, meaning there may not be any line-card compatibility between the generations. HP has informed me that the 12500 will remain its enterprise data center core switch, and the 12900 is more for large-scale cloud deployments. This seems logical on paper, but maintaining multiple data center products is expensive and can be difficult to manage. HP has a huge services organization, which should help with positioning one product versus the other, although transitioning customers can be a challenge.

Also, what does this mean for the future of the old ProCurve line? I’m assuming that remains HP’s solution at the access edge, but the unified wired/wireless switch and IMC management tool came from H3C as well. From some of the resellers I’ve talked to, the only reason that ProCurve hasn’t been put out to pasture is because of the lifetime warranty on the products.

The other question that comes to mind is how the company catches up to its nemesis, Cisco. HP's briefing deck included comparison points against the Nexus line, but Cisco's data center go-to-market revolves around the integration of Nexus and UCS, not just the network. While HP also sells servers, it still doesn't have a unified network/server story. I've talked to a number of Cisco customers who rave about the UCS service profiles that make rapid provisioning of data center resources a simple, repeatable process. HP appears to have the building blocks to do this; I'd just like to better understand how the H3C-built IMC tool interoperates with HP's compute infrastructure.

This announcement certainly filled the data center gap HP Networking had in its product line, but as we've seen over and over again, success in the data center requires more than just beefy switches.
