Network World - Even though strides are being made to define standards for extending Ethernet to handle data center applications, these advances will not be a panacea, vendors say.
Indeed, proprietary extensions to those standards, which are being defined by the IEEE and Technical Committee T11 of the InterNational Committee for Information Technology Standards, will still be required to address customer requirements for data center-optimized Ethernet. Vendor marketing may muddy the issue further, as vendors have adopted different acronyms for what is essentially the same technology.
"The biggest [snag] is, what do we call it?" says Steve Garrison, vice president of marketing for Force10 Networks, one of a group of vendors driving standards for Converged Enhanced Ethernet (CEE), an extended version of Ethernet for data center applications. Cisco participates in the CEE standards efforts, though it refers to the technology as Data Center Ethernet (DCE).
"What customers really want right now is education," Garrison says. "Is this acronym proprietary? Is it a unified push among many vendors?"
CEE and DCE describe an enhanced Ethernet that will enable convergence of LAN, storage-area network (SAN) and high-performance computing applications in data centers onto a single Ethernet interconnect fabric. Currently, these applications have separate interconnect technologies, including Fibre Channel, Infiniband and Myrinet.
This forces users and server vendors to support multiple interconnects to attach servers to the various networks -- a situation that is costly, inefficient in both energy and operations, and difficult to manage. So many in the industry -- Brocade, EMC, NetApp, Emulex, Fujitsu, IBM, Intel, Sun Microsystems and Woven Systems, in addition to Cisco and Force10 -- are proposing Ethernet as a single, unified interconnect fabric for the data center, citing its ubiquity, familiarity, cost and speed advances: 10Gbps now, eventually increasing to 40Gbps and 100Gbps.
But in its current state, Ethernet is not optimized to provide the service required for storage and high-performance computing traffic -- speed alone won't cut it, vendors say. Ethernet, which drops packets when traffic congestion occurs, needs to evolve into a low latency, "lossless" transport technology with congestion management and flow control features, CEE and DCE backers say.
"You need to make sure Ethernet will behave in the same way as Fibre Channel itself," says Claudio DeSanti, a technical leader in Cisco's Storage Technology group. DeSanti is vice chair of T11 and technical editor of the IEEE’s 802.1Qbb priority-based flow control project within the Data Center Bridging (DCB) task group.
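The priority-based flow control DeSanti's 802.1Qbb project is working on extends Ethernet's existing PAUSE mechanism so that a congested switch can pause one traffic class (say, storage) without stalling the other seven. A minimal Python sketch of what such a per-priority pause frame could look like, assuming the MAC Control EtherType (0x8808) and a PFC-style opcode and field layout -- the exact values and layout here are illustrative, not taken from the final standard:

```python
import struct

# Reserved multicast address used for MAC Control frames (assumption
# carried over from classic 802.3 PAUSE).
PFC_DA = bytes.fromhex("0180c2000001")
ETHERTYPE_MAC_CONTROL = 0x8808
PFC_OPCODE = 0x0101  # hypothetical per-priority pause opcode

def build_pfc_frame(src_mac: bytes, pause_quanta: list) -> bytes:
    """Build a frame pausing each of the 8 priority classes.

    pause_quanta[i] > 0 pauses priority i for that many quanta;
    0 leaves the class running.
    """
    assert len(src_mac) == 6 and len(pause_quanta) == 8
    # Class-enable vector: bit i set means timer field i is valid.
    enable_vector = sum(1 << i for i, q in enumerate(pause_quanta) if q > 0)
    frame = PFC_DA + src_mac
    frame += struct.pack("!HHH", ETHERTYPE_MAC_CONTROL, PFC_OPCODE, enable_vector)
    frame += struct.pack("!8H", *pause_quanta)  # one 16-bit timer per priority
    return frame

# Pause only priority 3 (e.g. the storage class) at the maximum quanta.
frame = build_pfc_frame(bytes(6), [0, 0, 0, 0xFFFF, 0, 0, 0, 0])
```

The point of the per-class vector is exactly the "lossless" behavior the vendors describe: best-effort LAN traffic can keep flowing while the storage class alone is throttled back.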
T11's Fibre Channel over Ethernet (FCoE) project defines the mapping of Fibre Channel frames over Ethernet so storage traffic can be converged onto a 10Gbps Ethernet network. The IEEE's DCB task group is defining three standards -- 802.1Qau for congestion notification, 802.1Qaz for enhanced transmission selection, and 802.1Qbb for priority-based flow control.
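The mapping T11 is defining can be pictured as a straightforward encapsulation: the entire Fibre Channel frame, delimiters and all, rides as the payload of an Ethernet frame with its own EtherType. A minimal Python sketch, assuming the FCoE EtherType 0x8906 and a simplified header layout -- the field widths and code values here are illustrative, since the specification was still in progress:

```python
import struct

FCOE_ETHERTYPE = 0x8906  # EtherType associated with FCoE

def encapsulate_fc_frame(dst_mac: bytes, src_mac: bytes,
                         sof: int, fc_frame: bytes, eof: int) -> bytes:
    """Wrap a complete Fibre Channel frame in an Ethernet frame.

    sof/eof are one-byte codes standing in for the FC start-of-frame
    and end-of-frame delimiters, which cannot appear literally on
    Ethernet and so are carried as encoded fields.
    """
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    # Simplified FCoE header: version/reserved padding, then the SOF code.
    fcoe_header = bytes(13) + bytes([sof])
    # Trailer: EOF code plus reserved padding (Ethernet FCS omitted here).
    trailer = bytes([eof]) + bytes(3)
    return eth_header + fcoe_header + fc_frame + trailer

# Dummy 28-byte FC frame with placeholder SOF/EOF code values.
eth = encapsulate_fc_frame(bytes(6), bytes(6), 0x37, bytes(28), 0x41)
```

Because the FC frame is carried intact, an FCoE-aware switch can hand it back to a native Fibre Channel SAN unchanged -- which is why the DCB work above matters: the Ethernet underneath must not drop it along the way.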