
How we did it

Dec 16, 2002 | 3 mins

How we tested the performance of the various ISP backbones.

To take part, an ISP had to agree to install cNode traffic generator/analyzers directly on its backbone network. ISPs also had to let us retrieve measurements from the cNodes.

We asked the providers to install cNodes in their backbone points of presence in New York, Chicago, Dallas and San Francisco.

Only two ISPs, Savvis Communications and WilTel, maintain POPs in all four locations. The other providers had POPs near these cities. For example, Verio’s West Coast POP was in Palo Alto rather than San Francisco. Most location differences were minor, with two exceptions: Qwest put cNodes in Washington, D.C., rather than New York, and in Houston rather than Dallas.

The cNodes work in pairs: One transmits a continuous stream of packets to the other, which records measurements about the traffic it receives. At periodic intervals, a central database polls the receiving cNode to collect its measurements. Because we deployed cNodes in four locations for each ISP, and each cNode transmitted both TCP and User Datagram Protocol (UDP) streams, each cNode sourced six streams (one of each traffic type to each of the three other locations). The TCP packets were 1,518 bytes long, while the UDP packets were 256 bytes.
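The stream count per cNode falls out of simple combinatorics, which the sketch below illustrates (the city names here are the four test locations from the article; the function name and structure are our own):

```python
from itertools import product

# The four cNode sites we asked providers to populate.
CITIES = ["New York", "Chicago", "Dallas", "San Francisco"]
PROTOCOLS = ["TCP", "UDP"]  # TCP packets: 1,518 bytes; UDP packets: 256 bytes

def streams_from(site):
    """Enumerate the streams a single cNode transmits:
    one per protocol to each of the three other sites."""
    peers = [c for c in CITIES if c != site]
    return [(proto, peer) for proto, peer in product(PROTOCOLS, peers)]

print(len(streams_from("Chicago")))  # 2 protocols x 3 peers = 6 streams
```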

Even though the Fast Ethernet (100M bit/sec) interfaces of the cNodes run at far lower rates than the ISP backbones we measured, we still took precautions not to blast packets onto the backbones. We configured the cNodes so that the aggregate transmit rate for all streams would not exceed 512K bit/sec.
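Some back-of-the-envelope arithmetic shows how conservative that cap was. Assuming an even split across the six streams (the article does not state the per-stream allocation), each stream carried well under 100K bit/sec, and the aggregate was about half a percent of the Fast Ethernet interface speed:

```python
# Rough check on the transmit cap; the even per-stream split is an
# assumption, not a documented cNode setting.
AGGREGATE_CAP_BPS = 512_000          # 512K bit/sec across all streams
STREAMS_PER_CNODE = 6
FAST_ETHERNET_BPS = 100_000_000      # 100M bit/sec interface speed

per_stream_bps = AGGREGATE_CAP_BPS / STREAMS_PER_CNODE
utilization = AGGREGATE_CAP_BPS / FAST_ETHERNET_BPS

print(round(per_stream_bps))   # ~85,333 bit/sec per stream
print(f"{utilization:.3%}")    # 0.512% of the Fast Ethernet interface
```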

The cNodes measure traffic more than 70 different ways, but for this project we focused on just three sets of measurements: outages, jitter and loss.
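The cNodes' exact formulas aren't published, but loss and jitter are conventionally computed from sequence numbers and one-way delays; the sketch below uses the common definitions (jitter as the mean absolute variation between consecutive delays, in the spirit of RFC 3550), not the cNode's actual math:

```python
# Illustrative metric math only -- not the cNode vendor's formulas.
def loss_pct(sent, received):
    """Percentage of transmitted packets that never arrived."""
    return 100.0 * (sent - received) / sent

def jitter(delays_ms):
    """Mean absolute variation between consecutive one-way delays."""
    diffs = [abs(b - a) for a, b in zip(delays_ms, delays_ms[1:])]
    return sum(diffs) / len(diffs)

print(loss_pct(1000, 990))               # 1.0 (% packets lost)
print(jitter([40.0, 42.0, 41.0, 45.0]))  # (2 + 1 + 4) / 3, about 2.33 ms
```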

We shipped at least four preconfigured cNodes to each provider, and it was up to the ISP to attach the devices to its network. We also required ISPs to give us a second, out-of-band connection to let the cNode’s measurements be collected by a central database.

Once all the cNodes were up and running, we gave all ISPs a six-week ramp-up period during which they could view our measurements of their networks. Once official testing began, we blocked ISP access to the measurements.

The official test window began at 1 p.m. Coordinated Universal Time (UTC) on Aug. 1, and continued until 1 p.m. UTC on Aug. 29. Configuration issues on C&W’s network prevented us from obtaining reliable results the first week of the test. C&W’s official start date was 1 p.m. Aug. 8, and its stop date was the same as the other ISPs’.

During the official test, each cNode at each POP sent streams of TCP and UDP packets to the three other locations for a given ISP. The receiving cNodes took measurements on an ongoing basis, but summarized these measurements every 5 minutes in a measurement window, or “mwindow.” Periodically, a central database would collect mwindow statistics from the cNodes.
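The mwindow summarization amounts to bucketing raw samples into fixed 5-minute intervals. A minimal sketch, with field names of our own invention rather than the cNode's actual schema:

```python
from collections import defaultdict

MWINDOW_SECONDS = 300  # 5-minute measurement window ("mwindow")

def summarize(samples):
    """Bucket (unix_time, delay_ms) samples into 5-minute mwindows and
    report per-window count and mean delay, roughly as a receiving
    cNode might before the central database polls it."""
    windows = defaultdict(list)
    for ts, delay in samples:
        windows[ts - ts % MWINDOW_SECONDS].append(delay)
    return {start: {"count": len(v), "mean_delay": sum(v) / len(v)}
            for start, v in sorted(windows.items())}

stats = summarize([(0, 40.0), (100, 44.0), (310, 50.0)])
print(stats)  # window 0: 2 samples, mean 42.0; window 300: 1 sample
```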

After the test period ended, we used a combination of database queries and Perl scripts to collate all the measurements. We distributed individual results to ISPs before publication and encouraged each provider to comment on its own numbers.