Aruba conquers challenge of Wi-Fi scalability

Two WLAN vendors brave our massive test plan.

Every Wi-Fi vendor's marketing material talks about scaling up for enterprise service, but how many vendors actually walk the walk? We found out after conducting our largest Wi-Fi test ever.

In the end, Aruba's 5000 and 6000 controllers and Aruba 70 access points put up excellent numerical results in almost all our tests, earning the vendor a Clear Choice Award. The performance in some tests even exceeded theoretical limits, thanks to the vendor's use of a rarely implemented part of the 802.11 standard.

Meru, meet Murphy

A firmware upgrade killed the beta versions of the AP150 access points Meru supplied for testing, but not before we ran into several other issues. Along the way we experienced Power-over-Ethernet troubles (attributed to third-party PoE injectors, not Meru gear) and a QoS misconfiguration by Meru's engineer that invalidated results of our call-capacity tests.

We were disappointed we couldn't complete more testing with Meru, especially because it claims to handle VoIP over Wi-Fi so well. We appreciated that Meru showed up for the tests (see "Running scared?") and plan to benchmark Meru's products once production versions are available.

MC 3025 CONTROLLERS, AP150 ACCESS POINTS

Meru Networks

Price: $33,200 as tested (two MC 3025 controllers and 25 access points).
Pros: Decent showing in throughput and latency tests with one access point.
Cons: Beta version of AP150 not enterprise-ready; unable to complete any test with 25 access points.

Let the tests begin

We assessed devices in four ways, with separate tests measuring throughput, latency, VoIP call capacity and data roaming (see "How we conducted Wi-Fi scalability tests"). Our key goal was to determine how well Wi-Fi systems scale for enterprise use. Ideally, a system will give each client the same level of performance, regardless of whether one user or hundreds are active.

We ran all tests twice, in small- and large-scale settings. For all but the roaming tests, we first conducted tests with a single access point and repeated the same test on as many as 25 access points. In our roaming tests, we first measured a single client roaming across 25 access points, and repeated the test with 500 clients, as 250 of them simultaneously roamed across 25 access points. We dubbed this last test the "merry-go-round of death" because of its high potential for disconnects.

Radio-frequency interference is a concern in any Wi-Fi test, and it's especially relevant in a large-scale benchmark like this. To reduce interference, we placed each access point in a shielded chamber and connected it to the VeriWave test instruments using a cable instead of sending signals over the air. APC supplied its NetShelter VX rack enclosures to house this mountain of equipment.

Throughput testing

A throughput test is supposed to determine the maximum rate at which a device forwards traffic with zero loss. In practice, our tests uncovered a design flaw in the 802.11 protocol that prevented us from testing with zero-loss conditions. We compensated for the design flaw by allowing loss of 0.1% in our tests.
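To make the loss allowance concrete, here's a minimal sketch of the kind of rate search a throughput test performs, with the 0.1% tolerance in place of a strict zero-loss criterion. This is illustrative only; the send_at_rate function is a hypothetical stand-in for the test instrument's traffic API, and the rate bounds are arbitrary.

```python
# Illustrative binary search for a loss-tolerant throughput rate.
# send_at_rate() is a hypothetical stand-in for the test instrument's API.

LOSS_TOLERANCE = 0.001  # allow 0.1% loss instead of strict zero loss

def send_at_rate(rate_mbps: float, frames_offered: int) -> int:
    """Offer traffic at rate_mbps; return frames received (stub)."""
    raise NotImplementedError("replace with the traffic generator's API")

def find_throughput(low_mbps: float, high_mbps: float,
                    frames: int = 100_000, resolution: float = 0.01) -> float:
    """Binary-search the highest rate whose loss stays within tolerance."""
    best = low_mbps
    while high_mbps - low_mbps > resolution:
        mid = (low_mbps + high_mbps) / 2
        received = send_at_rate(mid, frames)
        loss = (frames - received) / frames
        if loss <= LOSS_TOLERANCE:
            best, low_mbps = mid, mid   # trial passed: try a higher rate
        else:
            high_mbps = mid             # trial failed: back off
    return best

# Usage (with a real send_at_rate): find_throughput(1.0, 54.0)
```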

Aruba came pretty close to the goal of giving each user the same level of performance regardless of the number of users. On a per-access-point basis, Aruba's system moved traffic nearly as fast through 25 access points as it did with a single access point. Meru's single-access-point numbers were well below Aruba's and the theoretical limits, especially with 512- and 1,518-byte frames (see "Throughput" graphic, below).

Throughput

                    Theoretical max, 1 AP   Aruba, 1 AP   Meru, 1 AP   Theoretical max, 25 APs   Aruba, 25 APs
88-byte frames      3.88 Mbps               6.42 Mbps     3.39 Mbps    96.97 Mbps                145.45 Mbps
512-byte frames     16.68 Mbps              23.72 Mbps    14.86 Mbps   417.11 Mbps               560.46 Mbps
1,518-byte frames   30.86 Mbps              37.61 Mbps    27.97 Mbps   771.54 Mbps               916.16 Mbps

In all cases, Aruba's throughput was higher than the theoretical maximum. This isn't caused by a standards violation; on the contrary, Aruba implements parts of the 802.11 standards most other vendors don't. Aruba boosted data rates in our tests through use of dynamic RF management and a little-implemented mechanism in the 802.11 standard called the point coordination function (PCF).

In a normal setting, 802.11 stations operate in distributed coordination function (DCF) mode, with constant contention for bandwidth. With PCF, an access point can declare a contention-free period (CFP), during which clients should not attempt to transmit. The access point is then free to send data frames at a higher rate than it would in DCF mode.

Aruba's controller continually monitors RF status and classifies traffic. The controller can schedule CFPs on the access points dynamically when conditions suggest data rates would increase as a result. This benefited Aruba greatly in the throughput tests, with results higher than the theoretical maximums calculated for DCF-only operation.
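For a sense of where those DCF-only theoretical maximums come from, the sketch below models the fixed airtime costs of a single frame exchange under DCF: DIFS, average backoff, PHY preamble, the data frame, SIFS and the ACK. The timing constants are our assumptions for 802.11g OFDM at 54Mbps with no RTS/CTS protection (the article doesn't list them), but the resulting figures land in the neighborhood of the table's single-access-point theoretical values.

```python
# Rough DCF airtime model for one data-frame/ACK exchange, 802.11g OFDM
# at 54Mbps with no RTS/CTS protection. Timing constants are assumptions
# taken from the 802.11 standard, not figures from the article.

SLOT, SIFS = 9e-6, 10e-6           # seconds
DIFS = SIFS + 2 * SLOT             # 28 microsec
PHY_HDR = 20e-6                    # OFDM preamble + PLCP header
SYMBOL = 4e-6                      # OFDM symbol duration
CWMIN = 15                         # minimum contention window, in slots

def airtime(frame_bytes: int, rate_mbps: float = 54.0) -> float:
    """Seconds to send one data frame plus its ACK under DCF."""
    bits_per_symbol = rate_mbps * SYMBOL * 1e6          # 216 at 54Mbps
    mac_bits = (frame_bytes + 28) * 8 + 22              # MAC hdr/FCS + service/tail
    data = PHY_HDR + SYMBOL * -(-mac_bits // bits_per_symbol)   # ceil(symbols)
    ack = PHY_HDR + SYMBOL * -(-(14 * 8 + 22) // bits_per_symbol)
    backoff = (CWMIN / 2) * SLOT                        # average backoff delay
    return DIFS + backoff + data + SIFS + ack

for size in (88, 512, 1518):
    mbps = size * 8 / airtime(size) / 1e6
    print(f"{size}-byte frames: ~{mbps:.1f} Mbps DCF-only maximum")
```

PCF sidesteps most of the contention costs in this model (backoff and, during the contention-free period, per-frame DIFS waits), which is why Aruba's measured numbers can exceed a DCF-only calculation.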

The latency round

We measured the amount of time each system adds between sender and receiver (latency), as well as jitter (latency variation). For some companies, these figures may be even more important than throughput when it comes to application performance. Voice and video, for example, are highly sensitive to delay and jitter, and performance of any application will suffer if delay or jitter rise high enough.

In our experience, latency at the throughput rate is far higher than when a device isn't fully loaded. To capture this phenomenon, we measured average latency twice - once at the throughput rate, and again at 10% of theoretical line rate. The first number is the worst-case scenario, while the second is a better predictor of average performance on a lightly loaded enterprise network segment (see "Latency and jitter" graphic, below).
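As an aside, here's one simple way to derive average latency and jitter from per-frame timestamps. The article doesn't say which jitter definition the VeriWave instruments report, so this sketch uses the mean difference between successive latencies, a common convention.

```python
# Minimal sketch: average latency and jitter from per-frame timestamps.
# "Jitter" here is the mean difference between successive latencies,
# one common definition; the instruments may use another variant.

def latency_stats(tx_times, rx_times):
    latencies = [rx - tx for tx, rx in zip(tx_times, rx_times)]
    avg_latency = sum(latencies) / len(latencies)
    deltas = [abs(b - a) for a, b in zip(latencies, latencies[1:])]
    jitter = sum(deltas) / len(deltas) if deltas else 0.0
    return avg_latency, jitter

# Example: three frames, times in milliseconds
avg, jit = latency_stats([0.0, 1.0, 2.0], [0.30, 1.45, 2.38])
print(f"avg latency {avg:.2f} ms, jitter {jit:.2f} ms")
```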

Latency and jitter

(Times in milliseconds)
Aruba (1 AP)                88-byte frames   512-byte frames   1,518-byte frames
Throughput rate latency     4.45             32.27             1.99
Throughput rate jitter      2.37             11.19             0.91
10% of line-rate latency    0.27             0.31              0.43
10% of line-rate jitter     0.06             0.06              0.07

Meru (1 AP)                 88-byte frames   512-byte frames   1,518-byte frames
Throughput rate latency     6.05             3.72              4.75
Throughput rate jitter      3.08             2.11              2.35
10% of line-rate latency    0.44             0.50              0.74
10% of line-rate jitter     0.66             0.66              0.65

Aruba (25 APs)              88-byte frames   512-byte frames   1,518-byte frames
Throughput rate latency     6.52             16.22             39.71
Throughput rate jitter      9.61             20.69             0.66
10% of line-rate latency    1.16             1.42              2.17
10% of line-rate jitter     0.50             0.66              1.16

Notable among the latency and jitter tests:

* Although some results appear high (especially for large frames at the throughput rate), that is only in comparison to other results. None of these numbers is likely to have a significant impact on the performance of any application. While Aruba's system did have relatively high latency with large frames in the 25 access-point scenarios, it's unlikely to have an effect on VoIP, which uses small packets.

* Latency and jitter at the 10% load were far lower - usually by 90% or more - than at the throughput rate. This suggests latency and jitter are even less significant on lightly loaded network segments.

* Jitter was a significant fraction of (and in some cases, a multiple of) average latency. Again, the jitter and latency numbers are not likely to degrade application performance by themselves. But jitter could have an adverse effect on TCP performance in congestion situations where there's packet loss.

Scaling our voice

When it comes to VoIP over Wi-Fi, previous tests we've done suggest there are issues with scalability and QoS enforcement - and that was with only a handful of access points. In this test, we wanted to see how VoIP over Wi-Fi would scale to higher levels using 24 access points handling hundreds of calls, while also prioritizing voice traffic over data.

To assess voice-over-Wi-Fi scalability, we began with tests of single-access-point call capacity; these gave us a baseline to use with our 24-access-point tests. We had two goals: to determine the maximum number of concurrent calls the system could handle, and to measure latency and jitter for voice traffic.

For tests of a single access point, we configured the VeriWave gear to originate calls from clients on a wired Ethernet segment, destined for Wi-Fi clients. This was typical Real-time Transport Protocol (RTP) voice traffic, involving 200-byte IP packets. We configured the test instrument to offer background data consisting of large, 1,492-byte IP packets at about 15Mbps. The voice traffic used 802.11b rates (because most Wi-Fi handsets support only this older protocol), while the data traffic used 802.11g rates. To do well in our test, a Wi-Fi system would have to prioritize the shorter, slower VoIP packets ahead of the larger, faster data packets.
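A quick back-of-the-envelope calculation shows how modest each call's load is next to the 15Mbps of background data. The 200-byte packet size comes from our test setup; the 50-packets-per-second rate is an assumption of a typical 20-millisec packetization interval, which isn't specified above.

```python
# Back-of-the-envelope load per voice call. The 200-byte IP packet size is
# from the test description; 50 packets/sec assumes a typical 20-ms
# packetization interval, which the article does not specify.

IP_PACKET_BYTES = 200
PACKETS_PER_SEC = 50          # assumed 20-ms packetization
CALLS_PER_AP = 11             # Aruba's verified per-AP call capacity

per_call_kbps = IP_PACKET_BYTES * 8 * PACKETS_PER_SEC / 1000
per_ap_mbps = per_call_kbps * CALLS_PER_AP / 1000
print(f"{per_call_kbps:.0f} kbit/s per call")         # 80 kbit/s
print(f"{per_ap_mbps:.2f} Mbps for 11 calls per AP")  # ~0.88 Mbps
```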

Before the test, Aruba said its capacity was 11 concurrent calls per access point, and our tests verified that claim. Tests with 12 calls resulted in at least one dropped call, but 11 concurrent calls always worked.

Average latency was around 36 millisec, and jitter was about 19 millisec in the single-access-point test (see "Voice call capacity" graphic, below). Because we used synthetic voice traffic, we couldn't measure audio quality directly. Previous experience with voice metrics based on latency and jitter (such as the ITU's R-value) suggests that neither latency nor jitter would degrade perceived audio quality.
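To illustrate how such latency figures map to an R-value, the sketch below uses the simplified E-model published by Cole and Rosenbluth as an approximation of the ITU-T G.107 calculation. The formula's constants and the jitter-buffer allowance are assumptions for illustration, not measurements from our test.

```python
import math

# Simplified E-model (Cole and Rosenbluth's approximation of ITU-T G.107).
# The constants are from that published approximation for G.711 and are an
# assumption here; our test reported only latency and jitter.

def r_value(one_way_delay_ms: float, loss_fraction: float = 0.0) -> float:
    d = one_way_delay_ms
    delay_impairment = 0.024 * d + (0.11 * (d - 177.3) if d > 177.3 else 0.0)
    loss_impairment = 11 + 40 * math.log(1 + 10 * loss_fraction)
    return 94.2 - delay_impairment - loss_impairment

# 36 millisec of measured WLAN latency plus an assumed 19-millisec jitter
# buffer still yields an R-value above 80, generally rated as good quality.
print(f"R = {r_value(36 + 19):.1f}")   # ~81.9
```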

Voice call capacity

(Times in milliseconds)
                  Aruba (1 AP, 11 calls)   Aruba (24 APs, 264 calls)
Average latency   35.93                    28.57
Jitter            18.82                    19.55

Aruba's system performed better in the 24-access-point tests than in the test with a single access point, suggesting that scalability for VoIP is no problem. Again, Aruba's system handled 11 calls per access point, totaling 264 calls across all 24 access points. Average latency decreased to 28.5 millisec, compared with 36 millisec in our baseline test, while jitter rose by less than 1 millisec. Again, it's unlikely that either number would degrade perceived audio quality.

Given the huge mismatch in speeds and packet lengths between voice and data traffic, it is clear that Aruba's QoS enforcement worked in our tests. While Aruba doesn't yet support 802.11e (the emerging IEEE standard for Wi-Fi QoS), its controller uses a stateful firewall to classify traffic. A combination of classification on the controller and scheduling on the access point let the Aruba system prioritize voice traffic.
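Aruba's stateful-firewall classifier is more sophisticated than this, but the sketch below shows the general classify-then-schedule pattern the paragraph describes: voice-like packets land in a queue that is always drained before bulk data. The size-and-protocol rule is a deliberate simplification, not Aruba's actual classification logic.

```python
# Illustrative two-queue strict-priority scheduler. This is not Aruba's
# implementation; the rule "UDP packets <= 200 bytes look like voice" is a
# simplification of what a stateful firewall would actually track.

from collections import deque

voice_q, data_q = deque(), deque()

def classify(packet):
    """Queue voice-like packets separately from bulk data (simplified)."""
    if packet["proto"] == "udp" and packet["size"] <= 200:
        voice_q.append(packet)
    else:
        data_q.append(packet)

def next_packet():
    """Strict priority: always drain the voice queue first."""
    if voice_q:
        return voice_q.popleft()
    if data_q:
        return data_q.popleft()
    return None

classify({"proto": "tcp", "size": 1492})  # background data
classify({"proto": "udp", "size": 200})   # voice
assert next_packet()["size"] == 200       # voice transmits first
```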

ENTERPRISE WI-FI: ARUBA 6000 CONTROLLER, ARUBA 5000 CONTROLLER, ARUBA 70 ACCESS POINTS

Aruba Networks

Score: 4.6
Price: $74,000; includes $28,000 for Aruba 6000 controller with two switch modules, $21,000 for Aruba 5000 controller with one switch module and $595 per Aruba 70 access point.
Pros: Excellent performance in most tests; excellent radio-frequency management; clever use of the point-coordination-function mechanism boosts throughput.
Cons: Slightly lower performance in some tests with 25 access points than with one access point.

The breakdown

Throughput (20%)                5
Latency and jitter (20%)        4
VoIP call capacity (20%)        5
Single-client roaming (20%)     4.5
Multiple-client roaming (20%)   4.5
Total score                     4.6

Scoring key - 5: Exceptional; 4: Very good; 3: Average; 2: Below average; 1: Subpar or not available.

Roam if you want to

As with our other tests, roaming with Wi-Fi may have as-yet-undiscovered scalability issues. Most roaming tests have measured the time needed for a few clients (at most) to move from one access point to another. That is a valid measurement, but it doesn't necessarily predict what will happen in the enterprise, where hundreds of clients may roam among dozens of access points at any given instant. The latter situation places far more stress on wireless switches and concentrators.

To measure roaming on a large scale, we devised our "merry-go-round of death." This very large carousel involved 25 access points, 500 clients and short packets offered at high rates. Half the clients roamed across all 25 access points during the test, while the other half stayed put. We measured roaming times, failed roams and packet loss during roaming events.

Like our voice-capacity test, we began with a baseline: one client roaming across all 25 access points. Ideally, average roaming times in the baseline and merry-go-round tests should be nearly identical. There shouldn't be any time or packet loss penalty for any one roaming client just because lots of other clients are roaming at the same time.

In our tests, Aruba's average roaming times increased in the merry-go-round scenario, rising from 16.5 millisec with one client roaming to 26 millisec with 250 clients roaming. While that is a sizable relative increase, it is unlikely to degrade application performance significantly.

Average packet loss per roam improved significantly in our merry-go-round tests. In the single-client test, the client lost an average of 95 packets each time it moved from one access point to another. In our 250-client test, each client lost an average of 12.9 packets per roam. This suggests there is no packet-loss penalty in scaling up roaming with the Aruba gear.

One place where performance did degrade was in maximum roaming times, which jumped nearly tenfold, from 25.9 millisec in our single-client roaming test to 223.2 millisec in the merry-go-round tests. Certainly, any roam that takes nearly a quarter-second could affect application performance. An analysis of the distribution of roaming times, however, showed there were only around 30 roams lasting 100 milliseconds or more out of the 12,500 roams in the test. While high roaming times aren't strictly an anomaly, they were the exception rather than the rule.
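The sketch below shows the kind of distribution analysis described above: given per-roam times, it reports the average, the maximum and the count of roams over a threshold. The example data is hypothetical, shaped like our results rather than drawn from them.

```python
# Sketch of a roam-time distribution summary. Threshold and data below are
# illustrative only, not measurements from the test.

def roam_summary(roam_times_ms, slow_threshold_ms=100.0):
    slow = [t for t in roam_times_ms if t >= slow_threshold_ms]
    return {
        "roams": len(roam_times_ms),
        "avg_ms": sum(roam_times_ms) / len(roam_times_ms),
        "max_ms": max(roam_times_ms),
        "slow_roams": len(slow),
    }

# Hypothetical example: mostly fast roams with a handful of outliers
times = [26.0] * 12470 + [150.0] * 30
print(roam_summary(times))
```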
