How we tested application-acceleration devices

We tested application-acceleration devices for performance, features, manageability and usability.

To assess performance, we constructed a test bed modeling an enterprise hub-and-spoke network with five sites: a headquarters in Boston and branch offices in Portsmouth, N.H.; Newton, Mass.; El Segundo, Calif.; and San Francisco (see graphic “The Logical Test Bed”).

We used a Spirent Converged Network Impairment Emulator to emulate WAN rates and delays. The Newton and San Francisco remote links ran at T1 (1.5-Mbit/s) rates, while the other two ran at T3 (45-Mbit/s) rates (see graphic “The Physical Test Bed”).

The Newton and Portsmouth links used 16-millisecond round-trip delays, while the other two used 100-millisecond round-trip delays, representing all permutations of low and high bandwidth and delay.
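
Readers who want to approximate our impairment profile without Spirent hardware could do so with the Linux netem facility. The sketch below is only an approximation, not part of our test bed: the interface names are assumptions, and we split each round-trip delay evenly across the two directions of the link.

```python
import subprocess

# Hypothetical mapping of emulator interfaces to link profiles; interface
# names are assumptions, and round-trip delay is split evenly per direction.
LINKS = {
    "eth1": {"rate": "1544kbit", "delay_ms": 8},   # Newton: T1, 16 ms RTT
    "eth2": {"rate": "45mbit",   "delay_ms": 8},   # Portsmouth: T3, 16 ms RTT
    "eth3": {"rate": "1544kbit", "delay_ms": 50},  # San Francisco: T1, 100 ms RTT
    "eth4": {"rate": "45mbit",   "delay_ms": 50},  # El Segundo: T3, 100 ms RTT
}

for dev, prof in LINKS.items():
    # netem imposes one-way delay, so apply half the round-trip figure on
    # each side of the emulated link.
    subprocess.run(
        ["tc", "qdisc", "replace", "dev", dev, "root", "netem",
         "delay", f"{prof['delay_ms']}ms", "rate", prof["rate"]],
        check=True,
    )
```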

We measured application acceleration performance with CIFS/SMB Windows file transfers, Outlook/Exchange, HTTP and SSL traffic. In a separate test, we assessed devices’ QOS capabilities by generating VoIP traffic while simultaneously loading the network with HTTP traffic. We developed custom software for this project to generate CIFS/SMB and Outlook/Exchange traffic.

To measure bandwidth reduction, we used a ClearSight hardware-based analyzer with taps in both the Boston LAN and WAN sides of the test bed. To measure application-response time, our custom software measured CIFS and MAPI transfers.
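
The reduction figure itself is simple arithmetic once byte counts have been read off the two taps; the counts in the example below are hypothetical placeholders.

```python
def bandwidth_reduction(lan_bytes: int, wan_bytes: int) -> float:
    """Percent by which WAN-side traffic shrank relative to the LAN side."""
    return 100.0 * (1.0 - wan_bytes / lan_bytes)

# Hypothetical byte counts read off the Boston LAN-side and WAN-side taps
print(f"{bandwidth_reduction(lan_bytes=512_000_000, wan_bytes=64_000_000):.1f}% reduction")
```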

For the CIFS file transfers, two clients at each remote site simultaneously sent and received Microsoft Word documents from the Boston site. Clients on T3 links transferred 750 files in each direction, while clients on T1 links transferred 25 files each way. We ran each CIFS test three times: a “cold” run with empty device data stores, a “warm” run once the data store had been populated and a “modified” run in which we altered the contents of 10% of the files.
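
Our file-transfer generator is custom code, so the listing below is only a simplified sketch of how such a run can be timed. The share path, file names and sequential (rather than simultaneous) copies are assumptions.

```python
import shutil
import time
from pathlib import Path

SHARE = Path(r"\\boston-fs\docs")   # hypothetical UNC path to the Boston file server
LOCAL = Path(r"C:\cifs-test")       # hypothetical local staging directory
FILE_COUNT = 25                     # 25 files each way on T1 links, 750 on T3

def timed_run(count: int) -> float:
    """Time a full send-and-receive pass of Word documents over CIFS."""
    start = time.perf_counter()
    for i in range(count):
        shutil.copy(LOCAL / f"send_{i:04d}.doc", SHARE / f"send_{i:04d}.doc")
        shutil.copy(SHARE / f"recv_{i:04d}.doc", LOCAL / f"recv_{i:04d}.doc")
    return time.perf_counter() - start

print(f"CIFS run completed in {timed_run(FILE_COUNT):.1f} s")
```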

To measure MAPI/Exchange performance, Outlook 2007 clients on T3 circuits each created 1,000 messages, while those on T1 circuits created 34. Some of the messages had Microsoft Word files as attachments, and some were simple text. Each client sent messages to all other clients, but always through an Exchange 2003 server at the Boston site.
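
Again, the traffic generator itself is custom software. One way to drive Outlook programmatically is through its COM interface, as in the rough sketch below; the recipient addresses and attachment path are hypothetical.

```python
import win32com.client  # pywin32; drives the locally installed Outlook via COM

outlook = win32com.client.Dispatch("Outlook.Application")

RECIPIENTS = ["client01@lab.example", "client02@lab.example"]  # hypothetical
ATTACHMENT = r"C:\mapi-test\report.doc"                        # hypothetical

for n, addr in enumerate(RECIPIENTS):
    mail = outlook.CreateItem(0)       # 0 = olMailItem
    mail.To = addr
    mail.Subject = f"MAPI test message {n}"
    mail.Body = "Plain-text body for the MAPI traffic test."
    if n % 2 == 0:                     # some messages carry a Word attachment
        mail.Attachments.Add(ATTACHMENT)
    mail.Send()                        # relayed through the Exchange server in Boston
```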

To measure HTTP performance, we configured the Spirent Avalanche and Reflector 2500 appliances to emulate Web clients and servers, respectively. Emulated clients at remote sites requested 11KB objects from servers at the Boston site, and we measured HTTP response time and transfer rates. We ran these tests twice: once with 256 clients spread across all remote sites, and again with 2,048 clients.
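
The measurement amounts to timing the interval from request to complete response. A minimal client-side sketch, with a hypothetical URL standing in for the 11KB test object, looks something like this:

```python
import time
import requests  # third-party HTTP client library

URL = "http://boston-web.lab.example/objects/11k.bin"  # hypothetical 11KB test object

def fetch_once(session: requests.Session) -> float:
    """Return the elapsed time for one request/response cycle."""
    start = time.perf_counter()
    session.get(URL, timeout=30)
    return time.perf_counter() - start

with requests.Session() as s:          # persistent HTTP 1.1 (keep-alive) connection
    times = [fetch_once(s) for _ in range(100)]

print(f"mean response time: {sum(times) / len(times) * 1000:.1f} ms")
```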

To measure SSL performance, we repeated the HTTP tests using HTTPS, loading server certificates on the acceleration devices where they supported SSL proxying.

To assess devices’ QOS capabilities, we simultaneously offered small amounts of VoIP and large amounts of HTTP traffic. To generate and measure VoIP traffic, we used GL Communications’ PacketGen and VQT products to set up and measure SIP/RTP calls.

We again used Spirent Avalanche/Reflector for HTTP traffic. In these tests, we compared VoIP audio-quality measurements with and without HTTP traffic present. As an added test of QOS functionality, we also checked whether devices could classify and remark the differentiated services code points (DSCP) for voice and Web traffic.
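
Checking DSCP remarking comes down to reading the six DSCP bits of packets captured on the WAN side of the device. A minimal sketch using the Scapy library, with a hypothetical capture-file name, follows:

```python
from collections import Counter
from scapy.all import IP, rdpcap  # Scapy packet-manipulation library

packets = rdpcap("wan_side.pcap")      # hypothetical WAN-side capture file

dscp_counts = Counter(
    pkt[IP].tos >> 2                   # DSCP is the top six bits of the ToS byte
    for pkt in packets
    if IP in pkt
)

# If the device classified and remarked traffic as configured, RTP should
# show up as EF (46) and bulk HTTP as best effort (0).
print(dscp_counts.most_common())
```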

We also measured the maximum TCP connection capacity of the Boston device. In this test, the Avalanche appliance emulated Web clients, each requesting a 1KB object every 60 seconds. Because the emulated clients used persistent HTTP 1.1 connections, the test accumulated a large number of open TCP connections over time. We attempted to measure the maximum connection count supported by the Boston appliance to the nearest 1,000 connections.
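
We can't reproduce the Avalanche ramp in a few lines, but the underlying idea of holding many persistent connections open and touching each one with a small request every 60 seconds can be sketched roughly as follows; the host, port, path and connection target are all hypothetical.

```python
import socket
import time

HOST, PORT, PATH = "boston-web.lab.example", 80, "/objects/1k.bin"  # hypothetical
TARGET_CONNECTIONS = 2000              # the real test ramps far higher, in steps
REQUEST = (f"GET {PATH} HTTP/1.1\r\nHost: {HOST}\r\n"
           "Connection: keep-alive\r\n\r\n").encode()

conns = []
try:
    # Open persistent connections until the target (or a failure) is reached.
    while len(conns) < TARGET_CONNECTIONS:
        conns.append(socket.create_connection((HOST, PORT), timeout=10))

    # Keep every connection alive by requesting the small object periodically.
    while True:
        for s in conns:
            s.sendall(REQUEST)
            s.recv(4096)               # partial drain is enough for this sketch
        time.sleep(60)
except OSError:
    print(f"connection setup or reuse failed after {len(conns)} connections")
finally:
    for s in conns:
        s.close()
```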

Much of our assessment for functionality, manageability and usability occurred in the course of running the performance tests. For example, most vendors in this test issued one or more software upgrades in the course of this project. We compared vendors’ management tools in terms of ease of use for pushing out a new software image to multiple remote sites.

In the area of device functionality, we attempted to describe a device taxonomy, because the devices we tested work in different ways. The design and functional differences included whether a device sits inline; whether it tunnels traffic or passes it transparently; whether it compresses traffic in flight; whether it supports multiple connections (and possibly load-sharing) between sites; and whether it offers high-availability features.

To assess device manageability, we performed common administration tasks and rated each system on its ability to perform these tasks. In addition to the aforementioned software upgrade, we attempted to make configuration changes from a central management platform and then push the change out to all appliances with a single operation.

We subjectively assessed each vendor’s ability to present a coherent view of all devices from a central location. We looked for the ability to define multiple administrative roles, where different classes of managers had different privilege levels. We compared devices’ ability to present aggregated and individual device logs on a central system. We looked for support for management via command-line interface as well as through a centralized Web or proprietary user interface.

Our usability assessment was mainly a set of subjective judgments on the ease of accomplishing all the foregoing tasks. We also looked for a few features that make products easier to use, such as autodiscovery of devices and application types.

