How we tested Web front ends

Reviews | Jan 16, 2006
Data Center, Servers

How we tested Web front-end acceleration devices.

Return to Clear Choice Test introduction

A more detailed version of this methodology is available on Network Test’s Web site.

Given the wide variety of features found on Web front-end acceleration devices, we used two selection criteria to narrow the field: we tested only devices that supported some form of TCP multiplexing and that could make forwarding decisions based purely on Layer-7 criteria, such as URL contents.

We assessed devices in terms of concurrent connection capacity; TCP multiplexing; response times and transaction rates, both with and without access control lists (ACLs) applied and distributed denial-of-service (DoS) attacks under way; and maximum forwarding rates.

For all events, we set up a test bed that emulated as many as 2.35 million Web clients running Internet Explorer and 16 servers running Internet Information Services. We used two pairs of Avalanche and Reflector 2500 generator/analyzers from Spirent Communications to emulate clients and servers, a Summit 7i switch from Extreme Networks to connect devices, and a virtual patch panel from Apcon to move between vendors’ systems.

To measure concurrent connection capacity, we configured emulated clients on the Avalanche appliances to request a 1KB object from the virtual IP address on each device under test. The device would distribute requests back to the servers emulated by the Reflector appliances. After each client received the requested object, it would wait 60 seconds before issuing its next request, thus building up the concurrent connection count. We determined that the Avalanche and Reflector pairs could establish up to 4 million concurrent connections without a device on the test bed.
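
As a rough sanity check on how the 60-second wait drives concurrency up, Little’s law says steady-state concurrent connections equal the request rate multiplied by the hold time. The minimal sketch below is ours, not part of the Spirent configuration, and the request rate shown is only illustrative.

    # Rough sanity check using Little's law:
    # concurrent connections ~ arrival rate x connection hold time.
    # All numbers here are illustrative, not Spirent configuration values.

    def expected_concurrent(requests_per_second: float, hold_time_s: float) -> float:
        """Steady-state concurrent connections under Little's law."""
        return requests_per_second * hold_time_s

    # Each emulated client holds for 60 s after receiving its 1KB object, so
    # roughly 39,200 new requests/s would keep about 2.35 million connections open.
    print(expected_concurrent(39_200, 60.0))  # ~2,350,000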

To ensure Layer-7 switching, we used URL lists in which every other request ended with an underscore character. We asked vendors to configure their devices to send requests without the underscore to the first eight servers, and to send requests with the underscore to the final eight servers. Thus, devices had to examine URL contents to decide where to send the request.
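
A forwarding rule of that sort reduces to a simple predicate on the URL. The sketch below is our illustration in Python, not any vendor’s configuration syntax; the server addresses are hypothetical.

    # Illustrative Layer-7 rule for the underscore test. Addresses are hypothetical.
    SERVERS_1_TO_8 = [f"10.0.0.{n}" for n in range(1, 9)]
    SERVERS_9_TO_16 = [f"10.0.0.{n}" for n in range(9, 17)]

    def choose_pool(url_path: str) -> list[str]:
        """Requests ending in '_' go to the last eight servers; all others to the first eight."""
        return SERVERS_9_TO_16 if url_path.endswith("_") else SERVERS_1_TO_8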

To measure TCP multiplexing, we configured the Avalanches (client emulators) to establish 100,000 connections with the device under test. Once all connections were established, we compared the number of client-side connections with the number of established server-side connections and noted the ratio between the two.
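
The multiplexing ratio itself is straightforward arithmetic; the connection counts in this sketch are hypothetical, chosen only to show the calculation.

    # Hypothetical counts illustrating the TCP multiplexing ratio.
    client_side = 100_000  # connections from emulated clients to the device
    server_side = 2_000    # connections the device keeps open to the servers

    print(f"multiplexing ratio: {client_side / server_side:.0f}:1")  # 50:1 here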

For the TCP multiplexing test, we used a traffic load consisting of the Amazon, BBC, UCLA, White House and Yahoo home pages. These pages consist of a mix of text and graphical content types. To ensure Layer-7 switching, we asked vendors to configure their devices to send requests for graphical content (.gif and .jpg files) to the second set of servers, and requests for all other objects to the first group of servers.
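
Expressed the same way as the underscore rule, the content-type rule is a short suffix test; the function name and group numbering below are ours.

    # Illustrative content-type rule for the five-site load.
    GRAPHICS_EXTENSIONS = (".gif", ".jpg")

    def server_group(url_path: str) -> int:
        """Graphics go to the second server group; everything else to the first."""
        return 2 if url_path.lower().endswith(GRAPHICS_EXTENSIONS) else 1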

To measure response times and transaction rates, we used the same “five-site” load as in the TCP multiplexing test, but with two major changes. First, we created three classes of users with different access speeds: dial-up (53Kbps), cable/DSL (1.5Mbps) and native LAN (1Gbps). Second, we held the number of users in each class to a constant 40-to-40-to-20 ratio of dial-up, cable/DSL and LAN users. We offered traffic at three loads: 1,000, 10,000 and 100,000 users.
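
The fixed ratio determines the per-class user counts at each load level, as this small sketch shows; the dictionary layout is ours, while the ratios and totals come from the test plan.

    # Per-class user counts implied by the 40-to-40-to-20 ratio.
    RATIO = {"dial-up (53Kbps)": 0.40, "cable/DSL (1.5Mbps)": 0.40, "LAN (1Gbps)": 0.20}

    for total in (1_000, 10_000, 100_000):
        counts = {cls: round(total * share) for cls, share in RATIO.items()}
        print(total, counts)
    # e.g., 100,000 users -> 40,000 dial-up, 40,000 cable/DSL, 20,000 LAN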

To measure the effects of HTTP compression, we also ran tests in which 1,000 concurrent clients requested a single 500KB text object (which in theory should be highly compressible). Vendors generally opted to enable compression for only dial-up and cable/DSL clients for this test. For both the 500KB text and five-site tests, we measured transaction rates and average page and URL response times.
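
To see why a 500KB text object is “in theory” highly compressible, a quick gzip experiment helps; the synthetic payload below is ours and compresses far more readily than typical page content would.

    # Demonstration of text compressibility; the payload is synthetic.
    import gzip

    payload = (b"lorem ipsum dolor sit amet " * 20_000)[:500_000]  # ~500KB of text
    compressed = gzip.compress(payload)
    print(len(payload), len(compressed), f"{len(payload) / len(compressed):.0f}:1")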

We also measured the effects of ACLs and distributed DoS attacks on each device. We took one test from the previous event – the five-site test with 100,000 concurrent users – and reran the same traffic with ACLs applied, and then again with a distributed DoS attack under way. In the ACLs test, we asked vendors to configure 20 filtering rules on their devices: 10 rules to deny traffic from a range of source IP addresses and 10 rules to deny traffic to a range of URLs. In the distributed DoS tests, we offered the same traffic as in previous tests while simultaneously launching two forms of distributed DoS attack on the device’s virtual IP address.
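
A minimal model of such an ACL, assuming hypothetical addresses and URL prefixes rather than any vendor’s rule syntax, might look like this:

    # 20-rule ACL sketch: 10 source-address denies plus 10 URL denies.
    from ipaddress import ip_address, ip_network

    DENIED_SOURCES = [ip_network(f"192.0.2.{i * 16}/28") for i in range(10)]  # 10 IP rules
    DENIED_URL_PREFIXES = [f"/blocked/{i}/" for i in range(10)]               # 10 URL rules

    def permitted(src_ip: str, url_path: str) -> bool:
        """Deny on source-address or URL-prefix match; otherwise permit."""
        addr = ip_address(src_ip)
        if any(addr in net for net in DENIED_SOURCES):
            return False
        return not any(url_path.startswith(p) for p in DENIED_URL_PREFIXES)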

Differences that affect tests | Next: Why results vary >