
TRILL? SPB? FabricPath? QFabric? Flat Network Confusion!

Geeky discussions are lost on IT and networking professionals

Few people in our industry would debate that data center consolidation, server virtualization, and storage-over-Ethernet are changing data center network architectures. There is also general consensus around flatter data center networks providing flow-based, non-blocking, shortest-path network fabrics. Unfortunately, this is where technical harmony meets industry spin. Talk to engineers or academics, and you're likely to hear a debate about the merits of the IEEE's Shortest Path Bridging (SPB) and the IETF's Transparent Interconnection of Lots of Links (TRILL). Cisco, the big kahuna of networking, currently pays lip service to TRILL by offering TRILL-like functionality as part of FabricPath (note: many engineers believe that Cisco's TRILL-like functionality is much closer to SPB than TRILL), but Cisco has something called "Jawbreaker" on the horizon that may steer the company toward a more proprietary offering (note: Cisco has been quiet on Jawbreaker, so I am admittedly guessing here). Juniper has openly shunned both SPB and TRILL (for now) and is pushing its proprietary QFabric. Ditto for Brocade with its Virtual Cluster Switching (VCS). HP says it will support both standards (which seems to me like a stall tactic until one standard or the other wins, but again, this is just my opinion).

Attention, networking industry: this techno-geek debate is only confusing your customers and prospects. Most organizations don't care how you flatten the network; they simply want something that works and supports their business processes. Yes, I know this is an oversimplification, and they also want low latency, a converged fabric, better scale, ease of management, etc. That said, technical debates and acronym soup aren't helping your customers achieve, or even plan for, these goals.

A couple of other thoughts here:

1. This technical mumbo-jumbo plays right into the hands of Cisco. Why? It already owns the customer, and Cisco knows how to sell. When the industry offers confusion, long-time Cisco customers tend to stick to their comfort zone. Said another way, it is far easier for Cisco to upgrade an existing customer than it is for a competitor to convince that customer (with technical and confusing rhetoric) to go in another direction.

2. By eschewing SPB and TRILL, Juniper is taking a bit of a risk, but technical transition plays to Juniper's innovative strength. Big data centers are full of the geekiest of geeks, who are the most likely to be enamored by Juniper's innovative QFabric story (note: Arista fits here as well).

3. I really like Extreme Networks' position in this industry discussion. Rather than stick its neck out with TRILL or SPB, Extreme is talking to customers about Multi-Switch Link Aggregation (M-LAG), which addresses historical Spanning Tree limitations by creating active/active network paths for load balancing and redundancy. In this way, Extreme (and others like Arista and Force10) is bringing the discussion back to what networking functions can be used today rather than dwelling on what to expect 18-24 months from now. One note of clarification: Extreme is not saying that it is going with M-LAG rather than SPB or TRILL. Like HP, Extreme is letting the standards smoke (and mirrors, in this case) clear.

4. Regardless of whether SPB or TRILL wins, data center fabric functionality will ultimately require new hardware from most vendors. I know Enterasys is SPB-ready, meaning that it will offer a software upgrade for SPB on existing hardware. I'm sure others can do the same, albeit on their latest hardware only.

5. Aside from confusion about future standards and technologies, most network engineers I talk to have no idea how to get there from where they are today. Eliminate network tiers? Learn L2 routing protocols? Ultimately, professional services vendors like Unisys, CSC, and IBM Global Services may end up raking in the early profits from next-generation data center network advances.
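For readers wondering what the M-LAG approach mentioned above actually looks like, here is a minimal configuration sketch in Arista EOS syntax (other vendors have similar but not identical constructs, such as Cisco's vPC). The VLAN number, IP addresses, port-channel numbers, and domain name are illustrative assumptions, not a recommended design; the point is simply that two physical switches present themselves as one logical LACP partner, so a downstream device can bundle uplinks to both and forward on both paths actively instead of blocking one via Spanning Tree.

```
! Illustrative MLAG peer setup on one of two switches (assumed values)
vlan 4094
   trunk group mlag-peer
!
interface Port-Channel10
   description MLAG peer link between the two switches
   switchport mode trunk
   switchport trunk group mlag-peer
!
interface Vlan4094
   ip address 10.255.255.1/30
!
mlag configuration
   domain-id DC1
   local-interface Vlan4094
   peer-address 10.255.255.2
   peer-link Port-Channel10
!
! Downstream port-channel; the matching "mlag 20" on the peer switch
! makes both switches' links appear as one active/active bundle
interface Port-Channel20
   description downlink to server or access switch
   mlag 20
```

The second switch carries a mirror-image configuration (swapped peer addresses), and the downstream device needs nothing special: it sees an ordinary LACP port-channel.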
