Cisco and other big names are behind efforts to improve Ethernet for use in data centers. Meanwhile, pre-standard technologies have started to roll out -- along with potentially confusing marketing campaigns.
Even though strides are being made to define standards for extending Ethernet to handle data center applications, these advances will not be a panacea, vendors say.
Indeed, proprietary extensions to those standards, which are being defined by the IEEE and Technical Committee T11 of the InterNational Committee for Information Technology Standards, will still be required in order to address customer requirements for data center-optimized Ethernet. Additionally, vendor marketing may confuse the issue even more as some have adopted different acronymic brands that essentially refer to the same technology.
"The biggest [snag] is, what do we call it?" says Steve Garrison, vice president of marketing for Force10 Networks, one of a group of vendors driving standards for Converged Enhanced Ethernet (CEE), an extended version of Ethernet for data center applications. Cisco participates in the CEE standards efforts, though refers to the technology as Data Center Ethernet (DCE).
"What customers really want right now is education," Garrison says. "Is this acronym proprietary? Is it a unified push among many vendors?"
A new kind of Ethernet
CEE and DCE describe an enhanced Ethernet that will enable convergence of LAN, storage-area network (SAN) and high-performance computing applications in data centers onto a single Ethernet interconnect fabric. Currently, these applications have separate interconnect technologies, including Fibre Channel, Infiniband and Myrinet.
This forces users and server vendors to support multiple interconnects to attach servers to the various networks, a situation that is costly, energy- and operationally inefficient, and difficult to manage. So many in the industry -- Brocade, EMC, NetApp, Emulex, Fujitsu, IBM, Intel, Sun Microsystems and Woven Systems, in addition to Cisco and Force10 -- are proposing Ethernet as a single, unified interconnect fabric for the data center due to its ubiquity, familiarity, cost and speed advances: 10Gbps now, eventually increasing to 40Gbps and 100Gbps.
But in its current state, Ethernet is not optimized to provide the service required for storage and high-performance computing traffic -- speed alone won't cut it, vendors say. Ethernet, which drops packets when traffic congestion occurs, needs to evolve into a low-latency, "lossless" transport technology with congestion management and flow control features, CEE and DCE backers say.
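The "lossless" behavior backers describe rests on per-priority pause: a congested receiver tells the sender to stop transmitting one traffic class before buffers overflow, without pausing the whole link. A minimal sketch of how such a pause frame is laid out, following the IEEE 802.1Qbb priority-based flow control format (EtherType 0x8808, opcode 0x0101); this is illustrative, not a reference encoder:

```python
import struct

MAC_CONTROL_ETHERTYPE = 0x8808
PFC_OPCODE = 0x0101
PFC_DEST_MAC = bytes.fromhex("0180c2000001")  # reserved multicast address


def build_pfc_frame(src_mac: bytes, pause_quanta: dict) -> bytes:
    """Pause the priorities listed in pause_quanta (priority -> quanta).

    One quanta unit is 512 bit times; a quanta of 0 resumes a paused priority.
    """
    enable_vector = 0
    quanta = [0] * 8
    for prio, q in pause_quanta.items():
        enable_vector |= 1 << prio      # mark this priority's quanta field valid
        quanta[prio] = q
    payload = struct.pack("!HH8H", PFC_OPCODE, enable_vector, *quanta)
    frame = PFC_DEST_MAC + src_mac + struct.pack("!H", MAC_CONTROL_ETHERTYPE) + payload
    return frame.ljust(60, b"\x00")     # pad to minimum Ethernet size (sans FCS)


# Example: pause priority 3 (a common choice for FCoE traffic) for the
# maximum duration; the source MAC here is a made-up local address.
frame = build_pfc_frame(bytes.fromhex("020000000001"), {3: 0xFFFF})
```

Classic 802.3x pause, by contrast, halts all traffic on the port; the per-priority enable vector is what lets storage traffic stay lossless while ordinary LAN traffic continues to be dropped under congestion.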
"You need to make sure Ethernet will behave in the same way as Fibre Channel itself," says Claudio DeSanti, a technical leader in Cisco's Storage Technology group. DeSanti is vice chair of T11 and technical editor of the IEEE’s 802.1Qbb priority-based flow control project within the Data Center Bridging (DCB) task group.
T11's Fibre Channel over Ethernet (FCoE) standard defines the mapping of Fibre Channel frames over Ethernet so storage traffic can be converged onto a 10Gbps Ethernet network. The IEEE's DCB task group is defining three standards -- 802.1Qau for congestion notification, 802.1Qaz for enhanced transmission selection, and 802.1Qbb for priority-based flow control.
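The FCoE mapping itself is straightforward in outline: a complete Fibre Channel frame becomes the payload of an Ethernet frame carrying the FCoE EtherType, 0x8906. The sketch below condenses that encapsulation; the real FC-BB-5 header also defines specific SOF/EOF delimiter code points and reserved-bit semantics, so treat the field layout and the delimiter values in the example as simplified assumptions rather than a wire-accurate implementation:

```python
import struct

FCOE_ETHERTYPE = 0x8906
FCOE_VERSION = 0  # FC-BB-5 defines version 0


def encapsulate(dst_mac: bytes, src_mac: bytes, sof: int,
                fc_frame: bytes, eof: int) -> bytes:
    """Wrap a raw Fibre Channel frame in a (simplified) FCoE Ethernet frame."""
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    # 14-byte FCoE header: 4-bit version, reserved bits, then the SOF code
    fcoe_header = bytes([FCOE_VERSION << 4]) + bytes(12) + bytes([sof])
    trailer = bytes([eof]) + bytes(3)   # EOF code plus reserved padding
    return eth_header + fcoe_header + fc_frame + trailer


# Example: dummy MACs, a placeholder 28-byte FC frame, and hypothetical
# SOF/EOF code values chosen only for illustration.
frame = encapsulate(bytes(6), bytes(6), 0x2E, bytes(28), 0x41)
```

Because the Fibre Channel frame is carried whole and unmodified, existing FC services and management tooling see the same frames they always have; the Ethernet fabric underneath must simply never drop them, which is where the DCB standards come in.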
Where Ethernet standards fall short
Vendors say these standards should be solid enough to implement in products and deploy in data centers in late 2009/early 2010. The DCB standards will be final in March 2010, four months later than initially planned due to some outstanding but not insurmountable issues, according to Pat Thaler, chair of the DCB Task Group in the IEEE.
But some leading-edge customers need a pre-standard lossless Ethernet implementation now, vendors say; and even once complete, the standards will leave gaps, others say.
"A particular area where we feel these standards don't really address is the avoidance of congestion -- primarily with respect to load balancing traffic first before we rate limit traffic at the source," says Bert Tanaka, vice president of engineering for Woven Systems. “Qau and Qbb attempt to avoid congestion by slowing traffic from the source. But what we feel that they don’t do is they don't actually try to avoid congestion by balancing traffic in the fabric. That is where we plan to couple [the standards] with our own technology."
Tanaka also says the DCB and FCoE standards are limited in their ability to scale to large networks.
"They are really targeted for a fairly small fabric -- maybe hundreds of nodes," he says. "But if you’re trying to scale to multiple hops and larger fabrics, it's not clear it would scale to something like that. FCoE . . . is looking to a more constrained network size. It may not scale to the network the size of Google."
Thaler says no one ever proposed load balancing or congestion avoidance for inclusion in the DCB standards.
"I don't know any networks that have standardization for load balancing," she says. "Switch vendors like to keep that as their secret sauce."
And she disagrees with Tanaka's assertions that the standards will not scale: "I think that's [referring to] congestion notification, but I don't entirely agree with that."
Tanaka says Woven and other switch and host adapter vendors have implemented pre-standard versions of Qbb to address the limitations of the standards as well as current market demand. He says Woven plans to comply with the standard once it is complete but also to extend beyond it.
Separately, Cisco is shipping pre-standard DCE technology on its products, such as the new Nexus 7000 and 5000 data center switches, DeSanti says. He says these features can be made standard compliant with a firmware upgrade once the standards are complete.
"Consensus has been achieved on what the mechanisms are and how they should behave," he says. "So it is already possible to have products that will become standard compliant, even if the standard is still in the phase of construction."
The Qau standard, however, is "ambitious" and may not be necessary for initial implementations of DCE, DeSanti says. Nonstandard implementations of congestion notification may suffice.
Force10, however, has no intention of shipping pre-standard CEE technology, even though the University of New Hampshire has already conducted an FCoE interoperability "plugfest." The company plans to fully comply with the T11 and DCB standards once they are solid, Garrison says.
Mass market demand won't bubble up until then, Garrison says.
"There still could be some hiccups and bugs that get discovered that have to be addressed so that's why we're waiting," he says.
Apart from the standards efforts, CEE and DCE may raise some operational challenges, according to Chuck Hollis, EMC's global marketing CTO. Hollis notes convergence might disrupt the usual data center setup in which three different groups are responsible for operating three distinct networks.
It isn't clear who manages a converged fabric, Hollis says in a blog post on the EMC site. "In terms of organizational responsibility, we've got an entirely new construct, don't we?" Hollis asks. "I mean, today we've got separate disciplines and largely linear workflows between the groups. What happens when we can put it all on one console? And even if we can do it, will people want it?"
Nonetheless, CEE and DCE vendors are encouraged that they've agreed on the technologies to be included in the standards, and that major hurdles to finalizing them -- acronyms notwithstanding -- have been cleared.
"I don’t see any show-stoppers here -- it's just time," says Force10’s Garrison. "This is just another evolutionary step. [Ethernet] worked great for mundane or typical applications -- now we're getting to time-sensitive [applications] and we need to have a little bit more congestion control in there."
-- Senior Editor Jon Brodkin contributed to this story.