The arrival of Software Defined Networking (SDN), often hailed as a game-changing technology, is pitting two industry kingpins and former allies against each other: Cisco and VMware.
Although the companies are coming at SDN from different directions, their software-defined aspirations virtually guarantee confrontation. So now that both have laid their SDN cards on the table, it’s time to compare and contrast their approaches.
VMware jumped on SDN early with the $1.2 billion acquisition of startup Nicira in mid-2012. Nicira’s network virtualization strategy fit well into VMware’s overall product set, allowing for a tight coupling with products such as vSphere.
Just over a year after the Nicira acquisition, VMware announced its network virtualization platform called NSX in August 2013. VMware customers who want to take their data center virtualization strategy to the next level can now look to a vendor they’ve trusted for their core virtualization needs.
Network giant Cisco was slow to embrace the SDN movement, probably because it has the most to lose from the arrival of SDN, given the technology promises to pry the network smarts out of packet handling equipment and centralize it in controllers. In fact, Cisco’s SDN strategy had been muddy for almost two years. Although the company rolled out various products and initiatives under an SDN umbrella, there was nothing that felt like a cohesive strategy that customers could get hold of – until now.
With the announcement of Application Centric Infrastructure (ACI) in November, Cisco has finally revealed what it believes SDN should look like. Having spent $863 million to acquire Insieme Networks, which it funded as a “spin-in” startup, Cisco has unleashed a full-court press to evangelize ACI to the masses.
So how do VMware NSX and Cisco ACI line up and where do they fall in the emerging SDN ecosystem of products? To find out we’ll take a deeper dive into each product, exploring the key elements of what they do, how they do it, and what that means to customers.
VMware NSX
Brad Hedlund, VMware engineering architect, described the goal of NSX succinctly: “We want you to be able to deploy a virtual network for an application at the same speed and operational efficiency that you can deploy a virtual machine.”
NSX tackles this lofty goal by provisioning hypervisor virtual switches to meet an application’s connectivity and security needs. Virtual switches are connected to each other across the physical network using an overlay network, which is no mean feat.
So how does VMware accomplish this? There are several key elements, all of which revolve around a distributed virtual switch (vSwitch).
Sitting at the network edge in the hypervisor, the vSwitch handles links between local virtual machines. If a connection to a remote resource is required, the vSwitch provides access to the physical network. More than just a simple bridge, the NSX vSwitch is also a router, and if needed, a firewall.
If the vSwitch is the heart of the NSX solution, the NSX controller is the brain. Familiar in concept to those who are comfortable with SDN architectures, the NSX controller is the arbiter of applications and the network. The controller uses northbound APIs to talk to applications, which express their needs, and the controller programs all of the vSwitches under NSX control in a southbound direction to meet those needs. The controller can talk OpenFlow for those southbound links, but OpenFlow is not the only part of the solution, or even a key one. In fact, VMware de-emphasizes OpenFlow in general.
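To make the northbound idea concrete, here is a minimal sketch of how an orchestration tool might ask an NSX-style controller for a new virtual network segment over a REST API. The endpoint path, payload fields and credentials are hypothetical placeholders, not VMware’s published API.

```python
# Hypothetical sketch of a northbound REST call to an NSX-style controller.
# The endpoint path, payload fields and credentials are illustrative only --
# they do not reflect the actual NSX API.
import requests

CONTROLLER = "https://nsx-controller.example.com"
AUTH = ("admin", "secret")  # placeholder credentials

def create_logical_switch(name, transport_zone_id):
    """Ask the controller for a new virtual network segment; the controller
    then programs the vSwitches southbound to realize it."""
    payload = {"display_name": name, "transport_zone_id": transport_zone_id}
    resp = requests.post(f"{CONTROLLER}/api/logical-switches",
                         json=payload, auth=AUTH, verify=False)
    resp.raise_for_status()
    return resp.json()

# Example: an orchestration layer requests a segment for a new application tier.
# switch = create_logical_switch("app-tier-1", "tz-overlay")
```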
With NSX, the controller can run as a redundant cluster of virtual machines in a pure vSphere environment, or on physical appliances for customers with mixed hypervisors.
A distributed firewall is another key part of NSX. In the NSX model, security is done at the network edge in the vSwitch. Policy for this distributed firewall is managed centrally. Conceptually, the NSX distributed firewall is like having many small firewalls, but without the burden of maintaining many small firewall policies.
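Conceptually, that model looks something like the sketch below: one centrally defined rule set, pushed to every hypervisor edge. The data structures and the southbound call are assumptions for illustration, not the NSX object model.

```python
# Illustrative model of "many small firewalls, one central policy": a single
# rule table defined once, then pushed to every hypervisor vSwitch. The
# classes and the southbound call are assumptions, not the NSX object model.
from dataclasses import dataclass

@dataclass
class FirewallRule:
    src: str      # source group or CIDR
    dst: str      # destination group or CIDR
    port: int
    action: str   # "allow" or "deny"

central_policy = [
    FirewallRule("web-tier", "app-tier", 8443, "allow"),
    FirewallRule("any", "db-tier", 3306, "deny"),
]

def push_policy(hypervisors, policy):
    """Every edge vSwitch enforces the same centrally managed rules, so the
    operator maintains one policy instead of many per-device rule sets."""
    for hv in hypervisors:
        hv.program_firewall(policy)  # hypothetical southbound call
```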
Overlay protocols create the virtual network segments. Because VMware chose to support multi-hypervisor environments, it also supports multiple overlays: Virtual eXtensible LAN (VXLAN), Stateless Transport Tunneling (STT) and Generic Routing Encapsulation (GRE). NSX builds a virtual network by taking traditional Ethernet frames and encapsulating (tunneling) them inside an overlay packet. Each overlay packet is labeled with a unique identifier that defines the virtual network segment.
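For readers who want to see what that labeling looks like on the wire, here is a rough Python sketch of VXLAN-style encapsulation based on the header layout in RFC 7348: an 8-byte header carrying a 24-bit segment ID (the VNI) is prepended to the original Ethernet frame, and the result is carried in UDP (port 4789) across the IP underlay. The frame payload below is a dummy placeholder.

```python
# Rough sketch of VXLAN-style encapsulation per RFC 7348. The original
# Ethernet frame is wrapped with an 8-byte VXLAN header carrying a 24-bit
# segment ID (VNI); the result rides inside UDP (port 4789) over the IP
# underlay between hypervisors. The payload below is a dummy placeholder.
import struct

def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Prepend a VXLAN header to an Ethernet frame."""
    flags = 0x08000000                 # "I" bit set: the VNI field is valid
    vni_field = (vni & 0xFFFFFF) << 8  # 24-bit VNI, low 8 bits reserved
    header = struct.pack("!II", flags, vni_field)
    return header + inner_frame

# Two VMs on the same logical segment share a VNI, regardless of where their
# hypervisors sit in the physical topology.
overlay_payload = vxlan_encapsulate(b"\x00" * 64, vni=5001)
```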
Of course, not all networks would know what to do with NSX-defined virtual networks. To connect non-NSX networks to NSX environments and vice-versa, traffic passes through an NSX gateway, described by VMware as the “on ramp/off ramp” into or out of logical networks.
Multi-hypervisor support is an important part of the NSX strategy, adding, as it does, Citrix Xen and KVM users to the mix. In fact, NSX is agnostic to many elements of the environment, including the network hardware – a point that is critical for network engineers to understand.
Hedlund put it this way: “When you put NSX into the picture with network virtualization, you’re separating the virtual infrastructure from the physical topology. With the decoupling and the tunneling between hypervisors, you don’t necessarily need to have Layer 2 between all of your racks and all of your VMs. You just need to have IP connectivity. You could keep a Layer 2 network if that’s how you like to build. You could build a Layer 3 fabric with a Layer 3 top of rack switch connected to a Layer 3 core switch providing a scale-out, robust, ECMP IP forwarding fabric. Now the Layer 2 adjacencies, the logical switching and the routing is all provided by the programmable vSwitch in the hypervisor.”
In other words, the network hardware does not have to use MPLS, 802.1q VLANs, VRFs, or other network abstractions to create securely separated, multi-tenant networks. Instead, the NSX controlled vSwitch handles this by tunneling hypervisor-to-hypervisor traffic in an overlay. The underlying network’s responsibility is merely to forward the overlay traffic.
For engineers thinking this forwarding model through, broadcast, multicast, and unknown unicast (BUM) traffic that requires flooding might seem to pose a problem, as BUM frames would be hidden from the underlying network hardware by the overlay. Hedlund says that “at the edge hypervisor, we have visibility into all of the end hosts. When a VM turns on, we know its IP address and MAC address right away. We don’t have to glean that or learn that through networking protocols.” Since all the endpoints are known to NSX, there’s no requirement for unknown unicast flooding. Multicast and broadcast packets are copied from hypervisor to hypervisor.
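A simplified sketch of that logic, assuming a controller-maintained table mapping VM MAC addresses to hypervisor tunnel endpoints (VTEPs), might look like the following; the table contents and function are illustrative, not NSX internals.

```python
# Sketch of why NSX-style edges can avoid unknown-unicast flooding: the
# controller already knows every VM's MAC address and which hypervisor
# tunnel endpoint (VTEP) hosts it. The table and logic are illustrative.

# Controller-learned mapping: VM MAC -> hypervisor tunnel endpoint (VTEP)
mac_to_vtep = {
    "00:50:56:aa:00:01": "10.0.1.11",
    "00:50:56:aa:00:02": "10.0.2.12",
}

def tunnel_targets(dst_mac, segment_vteps, local_vtep):
    """Known unicast goes to exactly one tunnel endpoint; broadcast (and
    multicast, in this simplified view) is copied head-end to each remote
    hypervisor on the segment; unknown unicast is simply never flooded."""
    if dst_mac in mac_to_vtep:
        return [mac_to_vtep[dst_mac]]                         # one tunnel
    if dst_mac == "ff:ff:ff:ff:ff:ff":
        return [v for v in segment_vteps if v != local_vtep]  # replicate
    return []  # endpoints are all known, so there is nothing to flood
```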
Overlays are not all there is to the NSX network virtualization message, though. Scott Lowe, VMware engineering architect, says “one of the huge value-adds for NSX is we can now bring L4-L7 network services into the virtual networks and be able to provide these services and instantiate them and manage them as part of that virtual network.”
And by L4-L7 network services, he means distributed firewalls and load-balancers. VMware offers these additional components as part of NSX because they allow for greater network efficiency. In traditional network models, centralized firewalls and load-balancers must have traffic steered to them for processing. For host-to-host traffic contained within a data center, this means the direct path between hosts is abandoned in favor of a longer host-to-host path through the network appliance.
NSX addresses this issue by placing these services inline at the network edge, as part of the hypervisor vSwitch traffic flow. What’s more, these services are managed by the NSX controller, reducing the number of elements a network operator must manage separately.
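One way to picture the inline model is as a per-packet service chain run by the edge vSwitch before traffic is tunneled anywhere, as in the conceptual sketch below; the function and service names are illustrative, not VMware’s implementation.

```python
# Conceptual sketch of inline L4-L7 services at the network edge: each packet
# passes through the local vSwitch's service chain (firewall, load balancer)
# before it is tunneled anywhere, instead of being hairpinned through a
# central appliance. Names here are illustrative, not VMware's implementation.
def edge_pipeline(packet, services):
    """Run a packet through locally attached services in order; a service may
    drop the packet (return None) or rewrite it (e.g., a load balancer
    choosing a backend) before the packet leaves the hypervisor."""
    for service in services:
        packet = service(packet)
        if packet is None:
            return None  # dropped at the edge, never touches the wire
    return packet

# Example chain: distributed firewall first, then a simple load balancer.
# outbound = edge_pipeline(pkt, [distributed_firewall, load_balancer])
```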
Despite the availability of NSX’s L4-L7 services, VMware recognized that customers might want additional capabilities, so NSX will include support for third-party appliances. “We’re not going to try to be the best load-balancer in the world or the best firewall in the world and beat everybody at features,” Hedlund says. “We’re going to try and provide 80% – most of the features a customer would deploy. But if there’s that extra feature you need from a specific firewall or load-balancer, we want to provide a platform for those to be integrated in.”
Indeed, VMware announced NSX with a budding partner ecosystem, listing Arista, Brocade, Cumulus, Palo Alto Networks, Citrix, F5, Symantec, and several others as vendors with products that integrate into the NSX environment.
Despite a robust network virtualization platform and existing customers dating back to the Nicira days, NSX has its critics. A chief concern expressed by the engineering community is NSX’s lack of communication with network switching hardware; the platform relies heavily on vSwitch programmability to fulfill its network virtualization goals.
While VMware has done its best to counter this notion, the fact remains that NSX simply does not have specific insight into all of the network hardware forming the underlay fabric the NSX overlay rides on. That has implications for everything from traffic engineering to fault isolation and load distribution. That’s not to say NSX has no knowledge of the physical network, but rather that most of what NSX does know is inferred.
VMware’s official blog site goes into depth explaining that NSX can help isolate problem domains to point administrators in the right direction when troubleshooting an application problem, including a problem with the physical network. But to NSX, the physical underlay network is largely a cloud where tunnel packets enter on one side and exit on another.
In addition to the network hardware criticism, early reports from organizations exploring NSX cite pricing as an adoption barrier. VMware and 80% owner EMC are no strangers to complex SKU build sheets and costly licensing schemes that make IT organizations wince, and reportedly NSX is no exception. That said, folks within VMware say they are aware of customer concerns in this area and wish to avoid another “vTax” public relations debacle. Suffice it to say, potential NSX customers need to stay tuned in this area.
Cisco ACI
The name Cisco chose for its SDN effort, Application Centric Infrastructure (ACI), is significant because it sends a message. With ACI, Cisco is focused on shaping network infrastructure to the needs of specific network applications.
Does that include network virtualization? Certainly. But with ACI, network virtualization isn’t the whole story. Rather, ACI is an entire SDN solution wrapped around the idea that IT applications are the most important thing in an organization.
In that sense, it’s difficult to compare NSX and ACI directly. While there is some functional overlap between NSX and ACI, ACI doesn’t merely answer the question, “How can a network be virtualized?” Rather, ACI answers the question, “How can networking be transformed to revolve around an application’s needs?”
As complex and nuanced a solution as NSX is, ACI is both broader in scope and more novel in approach. An organization could conceivably run NSX over ACI – but not the other way around.
All of that said, ACI as an entire solution isn’t shipping yet. The ideas are all there. Significant amounts of code have been written. Product components have been named. But for customers, ACI doesn’t really exist. Customers who invest in available ACI components are investing in roadmaps that promise a complete ACI solution delivered over the 2014 calendar year.
Availability caveats aside, Cisco has spent a great deal of time describing ACI’s vision to the network community. The solution is complex, with many elements working together to rethink how networking is accomplished.
The most tangible element of the ACI platform is the Nexus 9000 switch line, which is shipping today. The 9000 switches are high-density 10Gb and 40Gb Ethernet switches built on the idea of “merchant plus” silicon – that is, merchant silicon plus custom Cisco ASICs. The merchant silicon is Broadcom’s Trident II, used by several other switch suppliers. The custom ASICs are used to aid in ACI service delivery, but Cisco has not yet released the details of how and why.
The Application Policy Infrastructure Controller (APIC) translates application policies for security, segmentation, prioritization and so on into network programming. Cisco delivers APIC in a physical form factor with redundancy options, since delivering APIC as a virtual machine would present a “chicken and egg” problem. Mike Dvorkin, chief scientist and co-founder of Insieme Networks, makes the point that, “For the [ACI] fabric to bootstrap, you need APIC. But for APIC to be installed and powered on as a VM, you’d need the fabric.”
As with many SDN models, APIC sits in between applications and the network, translating what applications need into a network configuration meeting those needs. Cisco says that APIC is open, in that the APIs to access APIC data are to be made available to anyone wishing to write to them. In fact, customers will be able to download “open device packages” that allow network hardware not currently part of an ACI infrastructure to be exposed to APIC.
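As a thought experiment, a northbound interaction with a policy controller such as APIC might look something like the sketch below; the URL, payload and policy fields are hypothetical placeholders for illustration, not Cisco’s published API.

```python
# Thought-experiment sketch of a northbound call to a policy controller such
# as APIC: the application declares what it needs, and the controller turns
# that into fabric configuration. The URL, payload and fields are hypothetical
# placeholders, not Cisco's published API.
import requests

APIC = "https://apic.example.com"

def apply_app_policy(app_name, tiers):
    """Express an application's connectivity needs once; the controller
    translates them into network programming across the fabric."""
    policy = {"application": app_name, "tiers": tiers}
    resp = requests.post(f"{APIC}/api/policies", json=policy,
                         auth=("admin", "secret"), verify=False)
    resp.raise_for_status()
    return resp.json()

# Example: a three-tier app declares which tiers may talk, and on which ports.
# apply_app_policy("crm", [
#     {"name": "web", "talks_to": "app", "port": 8080},
#     {"name": "app", "talks_to": "db", "port": 5432},
# ])
```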