The squeeze play is all part of the game

With compression, you'll win some and lose some.

This Clear Choice Test tackles the contentious issues surrounding compression.

While scalability tests help to size a Web front-end device for a particular network (see scalability test results), services tests are arguably more important, because they more closely approximate application behavior.

In this series of tests, we measured transaction rates and response times for a given number of users, both with and without HTTP compression applied, and again with and without access control lists (ACLs) in place and distributed denial-of-service (DoS) attacks underway.

The most contentious area of this project was the effect of HTTP compression on application performance (read about our attempts to get the right user mix to test compression).

Compression seems like a simple enough idea: Put the squeeze on data headed to clients, and it will arrive faster. Because smaller compressed objects take less time to send than larger uncompressed ones, response times should fall and transaction rates may rise.

That’s the theory, anyway. In practice, our results suggest compression is only effective under some circumstances. It takes time to compress an object, raising concerns as to whether the delay outweighs the bandwidth savings.

Typically, compression makes the most sense for clients with low-speed access, such as those coming in via dial-up or cable or DSL circuits. But add a little bit of LAN traffic to the mix, and compression benefits can disappear. Worse, in some cases HTTP compression degrades response time and transaction rates — even when only dial-up and cable or DSL users are present. We compared the results of tests run with compression disabled and enabled. Spirent Communications’ Avalanche and Reflector test tools offered the same traffic in both cases, and we measured transaction rates and response times to determine the effects of compression.
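The trade-off can be sketched with a back-of-the-envelope model: compression wins only when the transmission time it saves exceeds the time spent compressing. The 5:1 compression ratio and 50MB/sec compressor throughput below are illustrative assumptions, not figures from our tests:

```python
# Rough model of when HTTP compression pays off: compare delivery time for
# an uncompressed object against compression time plus delivery time for the
# compressed object. Client-side decompression time is ignored for simplicity.

def delivery_seconds(size_bytes, link_bps):
    return size_bytes * 8 / link_bps

def compressed_delivery_seconds(size_bytes, link_bps,
                                ratio=5.0,           # assumed 5:1 ratio for text
                                compress_bps=50e6):  # assumed 50MB/sec compressor
    compress_time = size_bytes / compress_bps
    return compress_time + delivery_seconds(size_bytes / ratio, link_bps)

obj = 500 * 1024  # the 500KB text object used in the tests

# Dial-up (56Kbps): compression is a huge win.
print(delivery_seconds(obj, 56e3))             # ~73 sec uncompressed
print(compressed_delivery_seconds(obj, 56e3))  # ~15 sec compressed

# Gigabit LAN: compression time alone exceeds the uncompressed delivery time.
print(delivery_seconds(obj, 1e9))              # ~4 msec uncompressed
print(compressed_delivery_seconds(obj, 1e9))   # ~11 msec compressed
```

Under these assumptions the dial-up client gets its object roughly five times faster, while the LAN client actually waits longer with compression on, which matches the pattern we saw.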

We ran the tests with two traffic loads: first with a 500KB text object, and again with the home pages from five popular Web sites — the BBC, UCLA, the White House and Yahoo — along with all images and other objects from those pages. We chose the 500KB text object because (at least in theory) it was highly compressible, and the five-site test because it represented a more typical mix of text, images and other content types found in production settings.

Foundry’s ServerIron does not support HTTP compression, though the vendor says it plans to do so this quarter. We asked all other vendors to enable compression for dial-up and cable or DSL users.

We expected tests with 500KB text objects to show the biggest benefit from HTTP compression, but that was not always the case. As it turns out, getting any benefit from compression depends on transaction rates.

In tests with 1,000 concurrent users and relatively low transaction rates, all products showed significantly reduced response time when we enabled HTTP compression. Juniper’s single-box solution edged out Citrix’s device for the biggest reduction in response time, with both vendors delivering data more than 2.5 times faster with compression enabled. For users on low-speed lines, this is a very significant speed boost.

We added a 12,000-user test because there was relatively little differentiation among transaction rates in the test with 1,000 users. For all devices, rates for the 1,000-user tests hovered between 11 and 15 transactions per second regardless of whether HTTP compression was enabled. We don’t necessarily expect HTTP compression to increase transaction rates, but the lack of differentiation suggests the test doesn’t push any of the boxes all that much. A good benchmark should be stressful.

Ideally, the response-time benefits of compression seen in the 1,000-user tests should have carried over to the 12,000-user tests. Not only was this not the case, but in some instances response times were much higher with compression than without it.

The Array, Citrix and Juniper single-box solutions showed increased rather than decreased response times. Citrix’s NetScaler Application Delivery System and Array’s TMX5000 had the most trouble, raising response times by factors of 3.1 and 2.6, respectively. Where the lighter test showed compression speeding up delivery, these results mean a heavily loaded device would deliver compressed data roughly three times slower than uncompressed data.

Citrix supplied Version 6.0 of its software for testing but says Version 6.1, now generally available, produces a much lower response time. In internal tests, Citrix says, it sees response time fall from 34 seconds to 18 seconds when HTTP compression is enabled with 12,000 users; we did not verify this.

Not all devices struggled in this test. The Crescendo, F5 and Juniper four-box solutions all reduced response time by some degree, just as they did in the earlier tests. Crescendo’s device did especially well here because of its use of hardware-based HTTP compression and forwarding, which made its response times among the lowest and most consistent across all tests.

Citrix representatives raised a number of objections to this test. First, they asserted that the Citrix device would have done much better if we had enabled both compression and caching. That’s certainly plausible; caching would have freed up the Citrix box from having to fetch all objects from servers, and also might have given the device a chance to precompress objects. With caching, we might not have stressed the NetScaler Application Delivery System’s compression engine nearly as much as we did. The Array, F5, Foundry and Juniper devices also support caching and might also have improved their results.

Second, Citrix representatives said the test was “not real-world,” because no customer network has 12,000 users simultaneously requesting 500KB objects. That’s correct, but misses a key point: Just as we test switches or routers with loads consisting exclusively of small or large packets, the goal of an application load test is to describe the limits of device performance. To find those limits, tests should be stressful.

Third, Citrix noted that the Avalanche client emulator does not cache objects, while a real browser would. Here again, browser caching would have considerably lightened the load on all devices, including the NetScaler Application Delivery System — and the test would have been considerably less stressful.

While there is merit to all of Citrix’s objections, we don’t see this as a torture test. Even with 12,000 concurrent users, transaction rates were still very low because of the low access speeds involved.

Citrix’s device handled 130 transactions per second in the uncompressed test, and 65 transactions per second with HTTP compression enabled. In contrast, the Crescendo, F5 and Juniper four-box solutions showed increased transaction rates with HTTP compression enabled. But even the fastest box — Crescendo’s, with compression enabled — handled fewer than 200 transactions per second in this event. Further, all clients came in at rates of 1.5Mbps or less, and also waited 60 seconds between requests. Those are hardly the kinds of conditions we’d expect to lead to performance degradation.
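The sub-200 ceiling follows directly from arithmetic: each user’s request cycle is 60 seconds of think time plus the time to move a 500KB object over a 1.5Mbps line, so the offered load simply cannot exceed about 190 transactions per second. A quick check (ignoring compression and protocol overhead):

```python
# Upper bound on offered transaction rate: each user's request cycle is
# think time plus transfer time, so rate = users / cycle_seconds.

users = 12_000
think_time = 60.0             # seconds between requests
object_bits = 500 * 1024 * 8  # the 500KB test object
link_bps = 1.5e6              # 1.5Mbps client access speed

transfer_time = object_bits / link_bps  # ~2.73 seconds per object
cycle = think_time + transfer_time
max_tps = users / cycle

print(round(max_tps, 1))  # ~191 transactions/sec -- under 200, as observed
```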

We also used five Web sites’ home pages to measure response times and transaction rates, this time with 100,000 concurrent users.

Because the home pages combine text with less-compressible or uncompressible objects such as graphics files, we expected to see a smaller difference between the uncompressed and compressed test cases.
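The difference in compressibility is easy to demonstrate with Python’s standard gzip module: repetitive text shrinks dramatically, while high-entropy data (random bytes standing in for already-compressed JPEG or GIF images) barely shrinks at all. The payloads here are illustrative, not our actual test content:

```python
import gzip
import os

text = b"<html><body>benchmark " * 20_000  # ~440KB of repetitive markup
image_like = os.urandom(len(text))         # random bytes mimic compressed images

text_gz = gzip.compress(text)
image_gz = gzip.compress(image_like)

print(len(text_gz) / len(text))         # tiny fraction: text compresses well
print(len(image_gz) / len(image_like))  # ~1.0: nothing left to squeeze
```

With most of a page’s byte count in image-like objects, the overall savings from compression are correspondingly diluted.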

That’s generally what happened (see graphic). With the 500KB text object, compression cut response time in half. With the five-site test, differences in response time were more typically around 20% to 30% — still significant, but nowhere near the gains seen with highly compressible text objects alone.

Crescendo’s device showed the greatest benefit in response time, but that’s partially because its page response time without compression was relatively high. Even so, Crescendo’s CN-5080E delivered the second-lowest response time in this test, about 400 milliseconds behind Juniper’s four-box solution.

On the other hand, Juniper’s single-box solution turned in by far the highest page and URL response times of any device tested. Further, a single DX 3600 showed increased response time when we enabled HTTP compression. Judging from its high CPU utilization during this test, 100,000 users all requesting five home pages with more than 200 objects was simply too heavy a load for a single DX 3600.

Most of the other devices weren’t far behind the Juniper four-box and Crescendo devices, with page response times averaging between 20 and 25 milliseconds. In all cases except that of Juniper’s one-box solution, response times improved with HTTP compression enabled.

We also measured transaction rates with the five-site load (see “Compression and transaction rates with five popular Web sites”). The results were interesting in several ways. Unlike the 500KB text-object tests, there wasn’t much difference between test cases with and without compression. That’s not too surprising, considering that the amount of compressible data was a much smaller part of the total than in the 500KB text-object tests (where everything was compressible).

Strangely, transaction rates for Foundry’s ServerIron in this five-site test were much lower than those of most of the other devices. Foundry also was puzzled by these results. As in other tests, we obtained higher rates when running earlier versions of ServerIron code. Most other vendors handled about 10,500 transactions per second.

We used results from the five-site tests as baselines for two other measurements: performance with ACLs applied, and performance while under distributed DoS attack. In both cases, the goal was to determine whether there was any performance penalty from either of these conditions.

In the ACL tests, we asked vendors to configure their devices with 20 access-control rules. Ten of the rules blocked traffic from IP subnets, and 10 blocked traffic headed to specific URLs.

Not all vendors were able to do this. Array’s TMX5000 cannot block traffic to given URLs, so we instead applied rules denying access from 20 source IP networks. In contrast, Crescendo’s CN-5080E supports only filtering on URLs; it cannot block traffic from given source subnets. In Crescendo’s case, we used rules blocking access to 20 URLs.
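Conceptually, the two rule types boil down to matching a request’s source address against a set of denied subnets and its path against a set of denied URL prefixes. A minimal sketch using Python’s standard ipaddress module — the specific subnets and paths are invented examples, not the rules we applied:

```python
import ipaddress

# Hypothetical deny rules of the two kinds used in the test:
DENY_SUBNETS = [ipaddress.ip_network(n) for n in ("10.1.0.0/16", "192.0.2.0/24")]
DENY_URLS = ("/admin/", "/private/")

def allowed(src_ip: str, path: str) -> bool:
    """Return False if the request matches any deny rule."""
    addr = ipaddress.ip_address(src_ip)
    if any(addr in net for net in DENY_SUBNETS):
        return False  # source-subnet rule (the Array-style filter)
    if any(path.startswith(prefix) for prefix in DENY_URLS):
        return False  # URL rule (the Crescendo-style filter)
    return True

print(allowed("10.1.5.9", "/index.html"))      # False: source subnet blocked
print(allowed("198.51.100.7", "/admin/x"))     # False: URL blocked
print(allowed("198.51.100.7", "/index.html"))  # True
```

Real devices evaluate such rules in optimized hardware or kernel paths, which is consistent with the negligible performance cost we measured.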

We also asked vendors to configure safeguards against distributed DoS attacks, not only for this test, but for all tests we conducted. On the theory that “attackers don’t make appointments,” we did not tell vendors what attacks we would use, either before or after the test.

There was good news in both the ACL and distributed DoS results for all vendors. With ACLs applied, response times and transaction rates for all devices were virtually identical to their baseline numbers. At least with 20 rules applied, there doesn’t seem to be a performance cost to ACLs for any device.

The distributed DoS results also were nearly identical to the baseline numbers, but this test did require changes on the part of a couple of vendors. Crescendo’s system rebooted the first time we launched the distributed DoS attack, but a later (generally available) software release from the vendor corrected the problem. Juniper’s systems also became unresponsive during this test until the vendor tweaked its memory management settings.

All systems informed us they were under attack, although with varying levels of detail. We used two attacks in this test, one based on TCP and another based on Internet Control Message Protocol (ICMP). Foundry’s ServerIron reported on both forms of attack, but we needed to delve into a debug prompt to see whether the system saw the ICMP attack.



Copyright © 2006 IDG Communications, Inc.
