The eternal challenge of the data center is balancing the quality of the user experience against the economics of building and operating the data center. One of the most significant trends today is the use of infrastructure virtualization to improve the economics.
Specifically, by virtualizing, pooling, and sharing compute, storage, and network resources across multiple applications and users (a.k.a. "building clouds"), the average utilization can be dramatically improved. Virtualizing servers may improve average utilization from 5% to as much as 60% or 70%. Resource pooling may also increase the agility of the business. New applications or services can be brought online in minutes or hours rather than the weeks or months that are normally required when new physical infrastructure needs to be ordered and installed. And as user demand shifts, resources can be dynamically reallocated to maintain a high quality of user experience.
One of the underlying realities of clouds, or shared resource pools, is that larger and more dynamic clouds enable greater efficiency and flexibility. Unfortunately, the desire for greater scale and dynamism places significant pressure on the data center network. It is the network that interconnects the resources and enables the creation of a cloud: no network, no cloud. Yet networks are also the greatest impediment to success, as most networks today are not architected to support the scale or dynamism demanded by the modern data center. Only by re-architecting the data center network can we realize the full benefits of virtualization and cloud computing.
The typical architecture for a data center Ethernet network employs a tree structure, arrayed in three layers of switching fanning out from the core. This design was derived from the architecture of the LAN and was adopted in data centers approximately 10 to 15 years ago, when Ethernet displaced SNA, Token Ring, and DECnet. As the data center evolved over the last 10 years, the network began to exhibit shortcomings, including suboptimal performance, inherent inefficiency, and excessive complexity. None of these shortcomings is trivial, but it is the complexity that truly stands in the way of the virtualized data center.
The complexity arises from the fundamental architecture of the tree structure. Networks are composed of multiple autonomous switching devices that cooperate through shared protocols. Managing a network entails managing not only the switches but also the interactions between the switches.
As a network scales up in capacity, the number of switches in the network grows in a relatively linear fashion. However, the number of potential interactions between the switches grows far faster, as roughly the square of the number of switches. This can be expressed using the formula i = n*(n-1)/2, where i is the number of potential interactions and n is the number of managed devices. This quadratic increase in the number of interactions to be managed drives the complexity of the network, inhibiting the benefits of virtualization.
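The formula above can be illustrated with a short Python sketch (the function name is our own); note how doubling the number of switches roughly quadruples the interactions to be managed:

```python
def potential_interactions(n: int) -> int:
    """Potential pairwise interactions among n managed devices: i = n*(n-1)/2."""
    return n * (n - 1) // 2

# Each doubling of the switch count roughly quadruples the interactions:
for n in (10, 20, 40, 80):
    print(n, potential_interactions(n))
# 10 switches -> 45 interactions; 80 switches -> 3160 interactions
```

A network eight times larger thus carries roughly seventy times the potential interactions, which is the crux of the scalability problem described above.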
Complexity limits scalability. As the data center network gains ports, connected devices, and traffic, the managerial complexity increases quadratically, to the point where large Layer 2 network domains become impractical to manage. To limit complexity, the network in many data centers is physically divided into multiple segments. Unfortunately, this runs counter to the desire to build fewer, larger resource pools. Complexity also inhibits change, or the ability to rapidly reconfigure the network, which limits dynamism.
Applying Occam's Razor
The key to overcoming network obstacles is to simplify the network. However, simplification cannot be achieved by cosmetic changes; it requires rethinking the network architecturally. Here are five ways to do that:
1. Reduce the number of physical networks to the smallest number possible, preferably one. For most companies, this begins with converging the multiple Ethernet networks to one physical network. The new DCB (Data Center Bridging) capability, combined with VLANs, allows for separation and prioritization of traffic. Also, if practical, converge the storage traffic onto the Ethernet network, which can be achieved using NAS, iSCSI, or FCoE.
2. Flatten the physical network. Most data center networks today are built with three layers of switching. The latest technologies allow for the elimination of the aggregation layer, reducing the network to two layers without impacting the ability to scale connectivity. Flattening the network reduces the number of switches and interactions to be managed, significantly reducing complexity while improving performance and reducing cost.
3. Move towards a "network fabric" architecture. One of the key aspects of a network fabric architecture, in contrast with the traditional tree architecture, is that all the elements in the fabric are controlled by a single control plane in the same manner that all the ports in a single switch are controlled by a single control plane. Essentially a network fabric is a set of separate physical elements that behave like a single, logical, distributed switch.
Also, fabrics interact naturally with dynamic resource pools. That is, when a virtual port (the interface between a VM and the network) is defined or configured in a fabric, every physical port in the network shares the knowledge of that configuration. If a VM migrates across a fabric, the virtual port configuration is automatically preserved, and the VM's connections to its other network-connected resources are retained. This includes storage, load balancers, security appliances, and edge services (routers). It also automatically preserves traffic separation, including ACLs, VLANs, and policy definitions. This natural behavior enhances the dynamism of the pool.
4. Design for a single point of management. Work towards an architecture that presents the management of the virtual and physical network as a single "task" from a single automation tool. This would eliminate many of the typical configuration errors that arise when separate people, using separate tools or interfaces, are responsible for configuring the virtual and physical switches. When these configurations are not in sync, problems could arise during the initial provisioning of a VM or during the migration of the VM.
5. Plan for VEPA. Virtual Ethernet Port Aggregator (VEPA, IEEE 802.1Qbg) is an evolving, open standard that will enhance the network's interactions with virtualized servers. It will take the switching out of the hypervisor and rely solely on the physical network. This would allow the virtual switch to act as a pass-through device, eliminating an entire layer of switching and dramatically reducing the number of switches and interactions that must be actively managed. It also promises to enhance network signaling in support of VM migration, for improved dynamism. Assuming you have VEPA support in the hypervisor and the switches, VEPA would allow you to seamlessly migrate VMs across the entire network.
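The payoff of flattening the network (point 2 above) can be quantified with the interaction formula. The switch counts below are hypothetical, chosen only to illustrate the effect of eliminating the aggregation layer while keeping the same access capacity:

```python
def potential_interactions(n: int) -> int:
    # i = n*(n-1)/2 potential pairwise interactions among n managed switches
    return n * (n - 1) // 2

# Hypothetical builds serving the same access ports:
three_tier = 2 + 8 + 48   # core + aggregation + access switches = 58
two_tier = 2 + 48         # core + access; aggregation layer eliminated = 50

print(potential_interactions(three_tier))  # 1653 potential interactions
print(potential_interactions(two_tier))    # 1225 potential interactions
```

Removing eight aggregation switches from this example eliminates over a quarter of the potential interactions, which is why flattening pays off disproportionately in reduced management complexity.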
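The fabric behavior described in point 3 can be sketched as a toy data model. This is not any vendor's implementation; the class and field names are our own, and the point is only that a fabric holds one shared table of virtual-port profiles, so migrating a VM changes its attachment point without touching its configuration:

```python
from dataclasses import dataclass, field

@dataclass
class VirtualPortProfile:
    """Configuration that follows a VM's virtual port across the fabric (hypothetical model)."""
    vlan: int
    acls: list = field(default_factory=list)

class Fabric:
    """Toy fabric: one shared profile table, visible from every physical port."""
    def __init__(self):
        self.profiles = {}   # vm name -> VirtualPortProfile (known fabric-wide)
        self.location = {}   # vm name -> physical attachment point

    def define_port(self, vm, profile, at):
        self.profiles[vm] = profile
        self.location[vm] = at

    def migrate(self, vm, to):
        # Only the attachment point changes; every port already knows the profile.
        self.location[vm] = to

fabric = Fabric()
fabric.define_port("web-01", VirtualPortProfile(vlan=100, acls=["deny-ssh"]), at="switch-1/port-7")
fabric.migrate("web-01", to="switch-4/port-2")
print(fabric.profiles["web-01"].vlan)  # still 100: VLAN and ACLs survive the move
```

Contrast this with a tree of autonomous switches, where the new attachment switch would have to be separately configured (or signaled) before the migrated VM's traffic separation is restored.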
To realize the full benefits of virtualization, and enable more scalable, more dynamic resource pools, we need to re-architect the data center network with an eye towards eliminating its inherent complexity. By converging on a single network, flattening the network, reducing the number of separate switches and switch interactions, leveraging the single control plane of a network fabric, leveraging state-of-the-art automation, and implementing emerging open standards like VEPA, we can simplify the network, uncapping the potential of the modern data center.
Andy Ingram has more than 28 years of experience in the high-tech industry bringing ground-breaking technology to market. In his current role, he is responsible for driving the marketing and go-to-market strategies for Juniper's Data Center Business Group. Andy joined Juniper in October 2008 from IGT, where he was the senior vice president of network systems. Prior to IGT, Andy held various senior management positions at Sun, Hewlett Packard, Cray Research, and Sequent Computers, involved in the marketing and sales of servers, storage, system software, security products, and application software. Andy holds an MBA from the Anderson School at UCLA and a bachelor's degree from the University of Colorado.