Ethernet fabric switching for next-generation data centers

Traditional approaches to data center networking cannot satisfy the scale, bandwidth, latency and cost points required for evolving data center software architectures.

Fibre Channel and InfiniBand remain costly because of the specialized knowledge needed to implement, tune and administer them, while traditional Ethernet switches fail to meet the stringent new demands. The best alternative is an Ethernet fabric solution designed specifically for the modern data center.

An Ethernet fabric provides a low-cost, high-speed, ultra-low-latency, lossless and scalable network infrastructure. It can offer a full cross-sectional interconnect for 1 Gigabit Ethernet (GE) or 10 GE attached servers, allowing compute and storage servers to maximize aggregate compute power in a cost-effective way. Key attributes are:

• Scalability to hundreds of 10 GE ports and thousands of 1 GE ports.

• Nonblocking (wirespeed) full cross-sectional bandwidth.

• Lossless packet delivery.

• Ultra-low latency (sub-six-microsecond latency across a multidevice core).

Conventional Ethernet networks consist of a multitier hierarchy, with bandwidth oversubscription in every tier, as shown in Figure 1. These networks are typically designed for Fast Ethernet and 1 GE servers and clients, with 10 GE often used to interconnect the Ethernet switches.

Figure 1: Overcoming Ethernet limitations in the data center
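To make the oversubscription point concrete, the following Python sketch computes the oversubscription ratio of a typical edge switch. The port counts are illustrative assumptions, not figures from any particular product.

# Oversubscription ratio of an edge switch in a conventional multitier design.
# Port counts are illustrative assumptions, not vendor specifications.

def oversubscription_ratio(downlink_ports, downlink_gbps, uplink_ports, uplink_gbps):
    """Ratio of server-facing capacity to uplink capacity (>1 means oversubscribed)."""
    return (downlink_ports * downlink_gbps) / (uplink_ports * uplink_gbps)

# Example: 48 x 1 GE server ports fed by 2 x 10 GE uplinks.
ratio = oversubscription_ratio(48, 1, 2, 10)
print(f"Oversubscription ratio: {ratio:.1f}:1")   # 2.4:1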

Spanning tree protocols are used for loop avoidance and active-standby resiliency. While spanning tree maintains a loop-free Layer 2 topology, it reduces the overall bandwidth efficiency of the network because alternate paths are not readily available: reconstructing the tree, even with the so-called Rapid Spanning Tree Protocol, takes many seconds rather than the few milliseconds required. As a result, conventional Ethernet switches are not optimized for high-density 10 GE interfaces.
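The bandwidth penalty is easy to quantify in a simple, hypothetical comparison: with spanning tree, only one of an edge switch's redundant uplinks forwards traffic, while a Layer 2 multipath design keeps every uplink active. The link counts below are assumptions for illustration.

# Usable uplink bandwidth with spanning tree (one forwarding path)
# versus Layer 2 multipath (all links forwarding). Link counts are assumed.

uplinks = 4                                  # redundant 10 GE uplinks from an edge switch
link_gbps = 10

stp_usable_gbps = 1 * link_gbps              # spanning tree blocks all but one path
multipath_usable_gbps = uplinks * link_gbps  # L2 multipath keeps every link active

print(f"Spanning tree usable uplink bandwidth: {stp_usable_gbps} Gbps")
print(f"L2 multipath usable uplink bandwidth:  {multipath_usable_gbps} Gbps")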

Ethernet fabrics, on the other hand, are based on a Clos architecture, also known as a fat-tree topology. Unlike traditional multitier Ethernet networks, which oversubscribe the 1 GE or 10 GE links that interconnect switches, Ethernet fabrics require every switching tier to be connected to the next tier closer to the root with higher aggregate bandwidth and without any oversubscription, guaranteeing a nonblocking switch fabric.

While this can be implemented easily for 1 GE links, it becomes challenging for 10 GE links. For 10 GE server links, the aggregate capacity facing the root must be carried by a group of 10 GE links, and to assure nonblocking bandwidth, none of the links in these groups can be blocked the way spanning tree blocks them in conventional enterprise Ethernet topologies.
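As a rough sketch of the nonblocking condition in a fat-tree, the check below compares a leaf switch's server-facing capacity against its uplink capacity toward the root. The port counts are assumed for illustration only.

# Nonblocking check for a leaf switch in a fat-tree (Clos) topology:
# aggregate uplink capacity toward the root must equal or exceed the
# aggregate server-facing capacity. Port counts are assumptions.

def is_nonblocking(server_ports, server_gbps, uplink_ports, uplink_gbps):
    return uplink_ports * uplink_gbps >= server_ports * server_gbps

# 16 x 10 GE servers on a leaf need at least 160 Gbps of uplink capacity.
print(is_nonblocking(server_ports=16, server_gbps=10,
                     uplink_ports=16, uplink_gbps=10))   # True: fully nonblocking
print(is_nonblocking(server_ports=16, server_gbps=10,
                     uplink_ports=8, uplink_gbps=10))    # False: 2:1 oversubscribed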

So Ethernet fabrics use high-capacity, nonblocking 10 GE switching nodes and employ Layer 2 multipath technology, in which all link capacity is available simultaneously, to construct the fat-tree topology. Significantly, these switches use cut-through technology to maintain ultra-low end-to-end latency. As a result, this architecture delivers full cross-sectional bandwidth through the Ethernet fabric and can scale nonblocking, low-latency bandwidth well beyond what is possible with today's traditional approach.
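The latency benefit of cut-through switching can be approximated with a back-of-the-envelope calculation: a store-and-forward switch must buffer an entire frame at every hop, while a cut-through switch begins forwarding once it has read the header. The per-hop fabric delay below is an assumed figure, not a measurement.

# Rough per-hop latency comparison between store-and-forward and cut-through.
# The fabric delay per hop is an assumed value for illustration.

FRAME_BYTES = 1500
LINK_GBPS = 10
HOPS = 3
FABRIC_DELAY_US = 0.5                        # assumed internal switching delay per hop

serialization_us = FRAME_BYTES * 8 / (LINK_GBPS * 1e3)      # 1.2 us to clock out a full frame

store_and_forward_us = HOPS * (serialization_us + FABRIC_DELAY_US)
cut_through_us = HOPS * FABRIC_DELAY_US + serialization_us  # frame is serialized only once

print(f"Store-and-forward over {HOPS} hops: {store_and_forward_us:.1f} us")
print(f"Cut-through over {HOPS} hops:       {cut_through_us:.1f} us")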

With an Ethernet fabric, shown in Figure 2, traffic flows are distributed across multiple Layer 2 paths based on traffic load for each path to maximize the fabric’s bandwidth efficiency. The Layer 2 multipath technology in an Ethernet fabric distributes traffic on available paths using a dynamic rebalancing technique that assures optimal use of the aggregate bandwidth while guaranteeing ultra-low latency for all the traffic flows.
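A minimal sketch of load-based path selection, assuming invented path names and loads rather than the fabric's actual algorithm, might look like this:

# Distribute new flows across multiple equal-cost Layer 2 paths by current load.
# Path names and load figures are invented for illustration.

path_load_gbps = {"path-A": 3.2, "path-B": 1.1, "path-C": 2.5}

def assign_flow(flow_rate_gbps):
    """Place a new flow on the least-loaded path and update that path's load."""
    best = min(path_load_gbps, key=path_load_gbps.get)
    path_load_gbps[best] += flow_rate_gbps
    return best

print(assign_flow(0.8))      # path-B, currently the least-loaded path
print(path_load_gbps)        # path-B's load rises to 1.9 Gbps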

The fabric not only dynamically rebalances traffic, it also avoids congestion. Congestion can be caused by external traffic load distribution or random traffic patterns where multiple sources send traffic simultaneously to the same destination. Congestion is detected and avoided inside the fabric using a dynamic avoidance technique (based on real-time, one-way latency plus jitter measurements) that detects congestion on any path and dynamically redirects the flow to an alternate congestion-free path.
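The congestion-avoidance idea can be sketched as follows; the latency and jitter figures, and the threshold, are illustrative assumptions only.

# Monitor one-way latency plus jitter per path and move a flow off any path
# that exceeds a budget. All numbers are made up for illustration.

LATENCY_BUDGET_US = 6.0      # assumed threshold, echoing the sub-six-microsecond target

paths = {
    "path-A": {"latency_us": 7.4, "jitter_us": 1.0},   # congested
    "path-B": {"latency_us": 2.1, "jitter_us": 0.3},
    "path-C": {"latency_us": 3.0, "jitter_us": 0.5},
}

def congested(stats):
    return stats["latency_us"] + stats["jitter_us"] > LATENCY_BUDGET_US

def reroute(current_path):
    """If the current path is congested, pick the best congestion-free alternate."""
    if not congested(paths[current_path]):
        return current_path
    alternates = {p: s for p, s in paths.items() if not congested(s)}
    return min(alternates, key=lambda p: paths[p]["latency_us"] + paths[p]["jitter_us"])

print(reroute("path-A"))     # path-B, the lowest-latency congestion-free alternate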

In the case of congestion at the destination, the Ethernet fabric can notify the source using standard Ethernet pause frames at the ingress port, thereby eliminating packet drops at the egress. As a result, the fabric can guarantee lossless packet delivery. In contrast, rather than pausing the source of the traffic, traditional switches pause neighboring switches, creating a "congestion tree" in the network infrastructure.
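A toy model of this ingress-pause behavior, with assumed queue depths and thresholds, shows how backpressure takes the place of packet drops:

# When the egress queue crosses a threshold, pause the ingress port feeding it
# instead of dropping frames. Queue depths and thresholds are assumptions.

PAUSE_THRESHOLD = 80         # frames queued at egress before pausing the source
RESUME_THRESHOLD = 40

egress_queue = 0
ingress_paused = False

def arrive(frames):
    """Frames arriving from the ingress port toward a congested egress."""
    global egress_queue, ingress_paused
    if not ingress_paused:
        egress_queue += frames
    if egress_queue >= PAUSE_THRESHOLD:
        ingress_paused = True            # send a standard Ethernet pause frame upstream

def drain(frames):
    """Egress port transmitting; resume the ingress once the queue drains."""
    global egress_queue, ingress_paused
    egress_queue = max(0, egress_queue - frames)
    if ingress_paused and egress_queue <= RESUME_THRESHOLD:
        ingress_paused = False

for _ in range(10):
    arrive(20)
    drain(5)

print(egress_queue, ingress_paused)      # queue held below overflow, nothing dropped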

In addition to better satisfying the high nonblocking throughput and low latency required in the data center, Ethernet fabrics are also more cost-effective than traditional Ethernet switches in large-scale deployments. A fabric's cost remains relatively flat, whereas traditional switch costs rise dramatically when either bandwidth is increased while the number of servers is held constant, or the number of servers is increased while bandwidth is held constant.

To deliver the low-latency, nonblocking throughput required in the data center, the Ethernet fabric must be able to establish multiple, alternate paths that can be used in real time to eliminate congestion. The best fabrics perform this feat with cut-through switching, dynamically selecting the path with the lowest latency and jitter without dropping or reordering any packets. Anything less is just an ordinary Ethernet switch.

Ammirato is vice president of marketing for Woven Systems (www.wovensystems.com).
