Clos networks were first devised in the mid-1950s as a method for switching telephone calls. They later reappeared as the crossbar switching fabrics inside chassis-based Ethernet switches, and today Clos networks are being used in modern data center architectures to achieve high performance and resiliency. The concept has been around for decades, yet it is now a key architectural model for data center networking. It is fascinating how concepts reemerge again and again in the history of networking.
Origin of the Clos Network
Charles Clos was a researcher at Bell Laboratories in the 1950s. In 1953 he published a paper in the Bell System Technical Journal titled "A Study of Non-Blocking Switching Networks," in which he described how telephone calls could be switched through equipment that used multiple stages of interconnection to complete the calls. The switching points in the topology are called crossbar switches. Clos networks were designed as a three-stage architecture: an ingress stage, a middle stage, and an egress stage. The idea is that there are multiple paths through which a call can be switched, so calls can always be connected and are never "blocked" by another call. The term fabric came about later because the pattern of links looks like the threads in a woven piece of cloth.
Clos Network within Network Switches
Clos networks made a reappearance many years later in the 1990s when early Ethernet switches were being developed. In order to create connectivity where any Ethernet interface on a switch could send Ethernet frames to any other interface on that switch, there needed to be a similar crossbar matrix of connectivity within the switch. The number of interfaces in the switch governed how large the crossbar fabric needed to be. When modular chassis-based network switches were developed, the crossbar switching fabric needed to grow to accommodate faster interface speeds. The crossbar fabric was provided by the supervisor module combined with the wiring between cards within the chassis.
Crossbar fabrics fell out of favor because they were subject to Head-of-Line (HOL) blocking due to input queue limitations. Over time, Ethernet switches were developed with input and output queues on all interfaces. Modern Ethernet switches have more advanced fabric technologies, output queuing, and priority-based flow control, so they can now achieve non-blocking performance. With these technical enhancements, switches can support guaranteed-bandwidth connectivity for protocols like Fibre Channel over Ethernet (FCoE) using 10 Gigabit Ethernet links.
Data Center Switching Using Clos Architecture
Over the years, networks started to use the "fat tree" model of connectivity with the core - distribution - access architecture. To prevent oversubscription, link speeds got progressively higher as you approached the core. For example, the access links to servers or desktops might historically have been 100Mbps Fast Ethernet links, the uplinks to the distribution switches might have been 1Gbps Ethernet links, and the uplinks from there to the core might have been 4x1Gbps port channels.
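To make the oversubscription arithmetic concrete, here is a minimal Python sketch; the port counts and link speeds are hypothetical, chosen only to mirror the historical example above.

    # Oversubscription ratio: host-facing capacity divided by uplink capacity.
    def oversubscription_ratio(downlink_ports, downlink_gbps, uplink_ports, uplink_gbps):
        return (downlink_ports * downlink_gbps) / (uplink_ports * uplink_gbps)

    # Hypothetical access switch: 48 x 100Mbps host ports and 2 x 1Gbps uplinks.
    ratio = oversubscription_ratio(48, 0.1, 2, 1.0)
    print(f"{ratio:.1f}:1 oversubscribed")  # 2.4:1

A ratio of 1:1 or better is what the non-blocking designs discussed below are aiming for.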
The problem with traditional networks built using the Spanning Tree Protocol, or with layer-3 routed core networks, is that a single "best path" is chosen from a set of alternative paths. All data traffic takes that "best path" until it becomes congested, at which point packets are dropped. The alternative paths go unused because the topology algorithm deemed them less desirable or removed them to prevent loops from forming. There is a desire to move away from spanning tree while still maintaining a loop-free topology and utilizing all of the redundant links. If we could use Equal-Cost Multi-Path (ECMP) routing, then performance would increase and the network would be more resilient in the event of a link failure or a single switch failure.
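As a simple sketch of the ECMP idea, the example below hashes a flow's 5-tuple to choose one of several equal-cost next hops. The addresses and next-hop names are hypothetical, and real switches use hardware hash functions rather than MD5.

    import hashlib

    def ecmp_next_hop(flow_5tuple, next_hops):
        # Hash the flow's 5-tuple so every packet of a flow takes the same path
        # (avoiding reordering) while different flows spread across all paths.
        key = "|".join(str(field) for field in flow_5tuple).encode()
        digest = int(hashlib.md5(key).hexdigest(), 16)
        return next_hops[digest % len(next_hops)]

    uplinks = ["spine1", "spine2", "spine3", "spine4"]
    flow = ("10.0.1.5", "10.0.9.7", 6, 49152, 443)  # src IP, dst IP, protocol, sport, dport
    print(ecmp_next_hop(flow, uplinks))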
Clos networks have now made their second reappearance in modern data center switching topologies. This time, however, rather than being a fabric within a single device, the Clos network manifests itself in the way the switches are interconnected. Data center networks are now composed of top-of-rack switches and core switches. The top-of-rack (ToR) switches are the leaf switches, and they attach to the core switches, which form the spine. The leaf switches are not connected to each other, and the spine switches connect only to the leaf switches (or to an upstream core device). In this spine-leaf architecture, the number of uplinks from each leaf switch equals the number of spine switches, and the number of downlinks from each spine switch equals the number of leaf switches. The total number of links is the number of leaf switches multiplied by the number of spine switches (for example, 8 leaf switches and 4 spine switches give 8 x 4 = 32 links).
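To make the link arithmetic explicit, here is a minimal sketch of the leaf-spine relationships just described, reusing the 8-leaf, 4-spine example figures.

    def clos_link_count(leaf_count, spine_count):
        # Every leaf connects to every spine, so each leaf has spine_count uplinks,
        # each spine has leaf_count downlinks, and the fabric has
        # leaf_count * spine_count inter-switch links in total.
        return leaf_count * spine_count

    print(clos_link_count(8, 4))  # 32 links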
In this Clos topology, every lower-tier switch is connected to each of the top-tier switches in a full-mesh topology. If there is no oversubscription between the lower-tier switches and their uplinks, then a non-blocking architecture can be achieved. The advantage of the Clos network is that you can use a set of identical, inexpensive devices to create the tree and gain performance and resilience that would otherwise cost much more to construct. To keep any single uplink path from becoming a hotspot, the uplink path is chosen at random so that the traffic load is evenly distributed between the top-tier switches. If one of the top-tier switches fails, performance through the data center degrades only slightly.
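Here is a minimal sketch of that non-blocking condition: a leaf can be non-blocking only if its uplink capacity toward the spines at least matches its host-facing capacity. The port counts and speeds below are hypothetical.

    def is_nonblocking(host_ports, host_gbps, uplink_ports, uplink_gbps):
        # Non-blocking requires uplink capacity >= host-facing capacity (1:1 or better).
        return uplink_ports * uplink_gbps >= host_ports * host_gbps

    print(is_nonblocking(48, 10, 4, 40))  # False: 480G of hosts over 160G of uplinks (3:1)
    print(is_nonblocking(16, 10, 4, 40))  # True: 160G of hosts over 160G of uplinks (1:1)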
Examples of Data Center Clos Networks
There are examples of Clos networks in many of the data center fabric architectures from switch manufacturers. Transparent Interconnection of Lots of Links (TRILL) is a layer-2 data center protocol that applies layer-3 routing techniques to build large, flat layer-2 networks for the purpose of simplified server networking. TRILL allows multiple paths to be used in a redundant Clos network architecture and removes the need for the Spanning Tree Protocol and its blocked alternative links. Many vendors have implemented their own versions of TRILL.
Cisco's FabricPath is an extension of the TRILL standard. Cisco data center switches such as the Nexus 7000 are connected in a Clos network to Nexus 5000 and/or Nexus 2000 switches, and FabricPath can run within that data center and be used to connect to other data centers.
Juniper's QFabric System is actually not TRILL-based; instead it uses an internal fabric protocol developed by Juniper that is based on IETF RFC 1142, otherwise known as the IS-IS routing protocol. QFabric Nodes are interconnected to form a fabric that can utilize multiple redundant uplinks for greater performance and reliability.
Brocade's Virtual Cluster Switching (VCS) Fabric is its implementation of the TRILL standard, allowing a Clos network topology to utilize multiple links.
Arista's Spline architecture combines the terms spine and leaf into a new word that represents a design using a single tier. At first, you may think that the term "Spline" is related to a method of connecting mechanical parts or some type of mathematics.
Summary
If you have been in the IT and networking industry for more than a decade, you have probably seen different concepts evolve, peak, die, and then be resurrected into some new technology. We saw Token Ring networks reform into FDDI and then reappear in other topologies. We have all witnessed how centralized mainframes evolved into distributed computing, and how server consolidation and virtualization have since brought computing back into centralized data centers and then into cloud computing. Clos networks are one of those enduring concepts that we will undoubtedly see again and again in the evolution of networking technologies.
Scott