Riverbed wins 7-vendor WAN optimization test

Challengers Ipanema and Exinda score high marks for innovation


We found two significant issues with Cisco WAAS and network transparency, depending on whether you use the standalone WAVE devices or the WAAS integrated into IOS. With standalone WAVE devices, Cisco’s WAAS software is incompatible with any good firewalls that check TCP sequence numbers. If you have a good firewall, you need to turn off TCP sequence number checking -- a security function that was put into the firewall for a good reason. Cisco’s own ASA firewall has a special WAVE-compatible mode to handle this problem, so if you have ASA firewalls and want to put in standalone WAVE devices, you don’t have to turn off this important security feature for all connections, just those through the WAVE device.
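On the ASA, that WAVE-compatible mode is applied per traffic class through the Modular Policy Framework, so sequence-number checking stays enforced on every other connection. A sketch of what such a configuration might look like -- the access-list match criteria and names here are illustrative, not taken from our test network:

```
! Match only the traffic that will pass through the WAVE devices (illustrative)
access-list WAAS-TRAFFIC extended permit tcp any any eq 80
class-map WAAS-CLASS
 match access-list WAAS-TRAFFIC
! Apply WAAS inspection to that class so the ASA tolerates the shifted sequence numbers
policy-map global_policy
 class WAAS-CLASS
  inspect waas
service-policy global_policy global
```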

On the other hand, if you choose to use IOS-integrated WAAS, then you have a different transparency problem because the IOS-integrated version cannot operate as a bump in the wire transparently to existing networks. That works well with Cisco’s positioning of IOS and IOS-XE as the do-it-all box for branch offices, but network managers who want to keep separate devices or who haven’t committed to an all-Cisco WAN won’t be able to deploy the IOS-integrated WAAS without significant re-engineering.

We also marked down Citrix CloudBridge 2000 in the ease-of-use area because device-to-device communications are only automatically managed when you’re not accelerating SSL connections. Once you get SSL into the picture, then Citrix CloudBridge requires the network manager to do much more explicit management and configuration of topology, complicating deployments unnecessarily. 

We ran into another significant usability issue with the CloudBridge 2000 when we rebooted our test system and SSL acceleration stopped working. Why? Because Citrix CloudBridge requires the network manager to log into each device after a reboot and manually unlock the SSL key store before it will start accelerating SSL connections again. Yes, that’s more secure, but you can’t turn it off, and if you put these devices in a branch office with less-than-perfect AC power, you’re in for a world of aggravation. SSL seems to be a particular weak spot for the CloudBridge 2000.

Another flexibility area we evaluated was virtualization. Not every network manager will want it, but for those who do, it’s an important feature. Every product we tested supports deployment as a virtual machine. In fact, some vendors prefer it: Silver Peak told us that it would rather run the VX-series as a virtual machine unless you’re hitting its top performance tiers. Others run as VMs whether you want them to or not. For example, the Citrix CloudBridge 2000 we tested is really a virtual machine sitting on a Citrix hypervisor, while the Cisco ISR-XE WAAS is a virtual machine running inside of IOS. 

Even more interesting is the option to use the optimization device as a virtualization host all by itself. This might let you run a small file and print server or Windows domain controller in a branch office, combining two pieces of hardware into one. Riverbed and Cisco have already shipped this capability (although Cisco does require additional hardware in the form of a UCS-E blade), and both Citrix and Exinda told us that they have plans in this direction on some hardware platforms.

WAN Optimization: The Virtual Option

All of the vendors we tested now offer virtual versions of their products to run in branch offices. Many offer virtual versions for data center deployments as well, and there are a number of true believers in the pack.  

Citrix, for example, ships its product on its own hypervisor.  If you buy an appliance, Citrix will send you a server -- but the CloudBridge software is running in a VM.  

Silver Peak is so confident in its virtualization performance that it didn’t even ship us dedicated appliances -- it had us run its software as a VM within VMware. Because our testing showed that Silver Peak kept up with or beat the rest of the market on performance improvements, we’re true believers now too. If you want to run this stuff in a VM, go ahead.  It works great.

How we tested network optimization products

We tested network optimization devices in our lab to evaluate their capabilities in eight areas: performance, traffic management, visibility, application controls, data link management, enterprise suitability, network flexibility, and ease of use.

We designed a small network based on Cisco routers and Juniper firewalls to simulate the operation of a large enterprise wide-area network.  To reproduce the conditions of an intercontinental network, we used InterWorking Labs Maxwell link emulators to selectively introduce bandwidth bottlenecks, latency, and errors into the network. Our goal was to simulate a network of about 100 sites connected via standard IPSec tunnels using approximately 45Mbps of bandwidth over the WAN.  
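The Maxwell units are purpose-built hardware, but the same idea can be roughed out in software with Linux tc and netem. This is a sketch only, with the interface name and impairment values as placeholders rather than our actual emulator settings:

```
# Impair the WAN-facing interface: add 50ms of delay and 0.1% packet loss
tc qdisc add dev eth1 root handle 1: netem delay 50ms loss 0.1%

# Then cap the link at 45Mbit/s with a token bucket filter as a child qdisc
tc qdisc add dev eth1 parent 1:1 handle 10: tbf rate 45mbit burst 32kbit latency 400ms
```

Dedicated emulators give more repeatable results than a shared Linux box, which is why we used them for the published numbers, but a netem setup like this is a reasonable way to pre-stage a lab.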

At the edges of the network, we used virtualization software from VMware to host various network performance testing tools, including open source products as well as our go-to solution, Spirent’s WebAvalanche. To get the performance we needed to take our network to 45Mbps, we spun up eight separate Spirent virtual machines and spread the load across all eight systems to ensure that the network was the bottleneck, not the testing systems.

We asked each vendor participating in this test to provide two devices that would be sufficient to handle 2,000 users and an aggregate bandwidth of 45Mbps for our performance testing. We also asked for a lower-speed device to be used for functional testing and evaluation of any central management capabilities.

Once we had our test lab put together, we worked with Spirent and David Newman at Network Test to develop testing plans for the most common enterprise protocols: web browsing (both encrypted and unencrypted) over HTTP, electronic mail, thin client using Citrix XenDesktop, Voice over IP using SIP, and basic bulk data transfer. We tried to mix compressible traffic with non-compressible traffic to see how these devices would perform in an enterprise setting.

Although we acknowledge that many enterprises are still using CIFS (SMB) file sharing across their WAN, we chose not to try to test the performance of CIFS optimization because of the immense variation in use patterns. Network managers who are looking at these products as a way of optimizing CIFS traffic should design their own tests to tease out the huge differences in CIFS support across products.  

We ran baseline tests across our testbed to determine transaction rates without any specific network optimization or traffic management technology. We ran each test three times, taking the average of the three runs and re-doing tests as needed until we achieved consistent and repeatable results. Our testing was done at five round-trip latencies: 0 ms, 50 ms, 100 ms, 200 ms, and 700 ms. These represented across-town, regional, national, international and satellite latencies.  
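The latency tiers matter because a single TCP connection's throughput is bounded by its window size divided by the round-trip time, which is exactly the bottleneck optimization devices attack. A quick back-of-the-envelope sketch -- the 64KB window is an assumption for illustration, not a measured value from our testbed:

```python
# Upper bound on one TCP flow's throughput: window size / round-trip time.
WINDOW_BYTES = 64 * 1024  # classic 64KB receive window, no window scaling (assumption)

def max_throughput_mbps(rtt_ms: float) -> float:
    """Bandwidth-delay-product ceiling for a single TCP flow, in Mbps."""
    if rtt_ms == 0:
        return float("inf")  # at zero latency, the window is never the limit
    return WINDOW_BYTES * 8 / (rtt_ms / 1000) / 1e6

for rtt in (50, 100, 200, 700):
    print(f"{rtt:>3} ms RTT -> {max_throughput_mbps(rtt):.2f} Mbps per flow")
```

At 700 ms of satellite latency, one unoptimized flow can't even reach 1Mbps on its own, which is why the high-latency tiers separate the products so sharply.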

Then, using configurations designed and approved by each vendor, we re-ran our tests to saturate the testbed’s WAN circuits at the five latency levels. Our goal was to measure the increase in performance in the presence of optimization technology. Our main metric was transaction count rather than raw throughput, as we feel that this better represents the experience that end users have in a WAN environment. Network managers looking at this technology for other reasons, such as data center-to-data center replication, should not consider our WAN testbed a good simulation of inter-data center traffic.

For performance testing, we looked at only one type of traffic at a time. When it came time to test traffic management, we mixed up different types of traffic, prioritizing and allocating bandwidth to prefer real-time traffic over bulk transfers.  

To test visibility capabilities, we gathered approximately 10Gb of traffic from a live corporate WAN, mixing both corporate and Internet applications together. We played this traffic back through the optimization devices. This gave us a nice mix of applications so that we could see how different applications were identified and categorized in the various management systems.

We chose to test standards-based mail protocols, rather than encrypted MAPI, because we believe that most security-conscious enterprises have switched from direct MAPI connections to either RPC-over-HTTP (available since Outlook 2003) or the standards-based email protocols IMAP and SMTP. However, for enterprises that are still using the old-style MAPI protocol, several of the vendors we tested have specific support for encrypted MAPI. Network managers looking to add optimization technology should make sure they have a very clear discussion with their email team to be sure everyone understands exactly what protocol is in use. A few packet dumps wouldn’t hurt either.  
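As a starting point for those packet dumps, a couple of capture-filter invocations along these lines can show which path Outlook is actually taking. The interface and server names are placeholders, not from our lab:

```
# Classic MAPI begins with the MSRPC endpoint mapper on TCP port 135
tshark -i eth0 -f "tcp port 135"

# RPC-over-HTTP(S) shows up as ordinary HTTPS to the Exchange front end
tshark -i eth0 -f "tcp port 443 and host mail.example.com"
```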


Network World would like to express its thanks to Spirent, for providing WebAvalanche software and support, and InterWorking Labs, for providing Maxwell link emulators.

Snyder, a Network World Test Alliance partner, is a senior partner at Opus One in Tucson, Ariz. He can be reached at Joel.Snyder@opus1.com

Copyright © 2013 IDG Communications, Inc.
