802.11n gear 10 times faster than current Wi-Fi offerings

Throughput tops 250Mbps in groundbreaking test; Bluesocket wins

Testing 802.11n wireless LAN gear for enterprises means thinking big.

See how the products rated in our scorecard.

With the latest version of Wi-Fi promising vastly higher data rates compared with previous incarnations, a couple of laptops running a few FTP sessions through a single access point won't do.

Instead, Network World set up the largest public 802.11n test ever conducted. We invited all enterprise Wi-Fi vendors to supply not one but eight 802.11n access points, along with controllers if needed. Working with test instrument vendor VeriWave, we crafted test traffic from hundreds (and in some cases thousands) of virtual clients to see just how high the new 802.11n systems would scale, both in pure 802.11n settings and also with a mix of 802.11n and legacy clients. In all these tests, the goal was to determine 802.11n performance in an enterprise context.

Four vendors took us up on the challenge: Aerohive, Bluesocket, Motorola and Siemens. Some big names declined to take part, leaving us to wonder how ready their 802.11n offerings actually are (see "Big players missing in action"). We stand at the ready to test these products against our existing methodology, should they become comfortable enough to place their gear in a public test.

The vendors that did participate proved the adage that 90% of life is about showing up. Multiple vendors cracked the 2-Gbps mark in pure 802.11n throughput tests, pushing data rates of 250Mbps or more per access point. That's around a 10-fold improvement in throughput over existing 802.11g and 802.11a access points, which makes a compelling case for considering 802.11n as a real alternative to wired connectivity in the enterprise.
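The per-access-point figure is simple arithmetic; as a back-of-the-envelope sketch (in Python, using round numbers from these tests):

```python
def per_ap_throughput_mbps(aggregate_gbps: float, num_aps: int) -> float:
    """Divide aggregate system throughput evenly across access points."""
    return aggregate_gbps * 1000 / num_aps

# Roughly 2 Gbps spread across the eight access points in this test:
print(per_ap_throughput_mbps(2.0, 8))        # 250.0 Mbps per access point

# Versus the roughly 25 Mbps of a current 802.11g or 802.11a access point:
print(per_ap_throughput_mbps(2.0, 8) / 25)   # 10.0, the 10-fold improvement
```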

Power is a big concern with the new systems, especially because some may need more juice than standards-based power-over-Ethernet (PoE) switches can supply. Some systems stayed within the limits of current PoE specs, while others may require upgrades to larger power supplies.

The new systems also showed rough spots in a few places. We couldn't complete throughput tests in some cases because access points became unresponsive or even rebooted. That's especially interesting given that all systems tested are built around the same Atheros radio module. The very different results speak to the different optimizations each vendor has done in working with the Atheros radios.

In the end, Bluesocket's BlueSecure access points offer the best combination of performance, power efficiency and features. Bluesocket's system wasn't the fastest we tested, but it exhibited consistently low latency and jitter, and it didn't suffer from some of the software bugs that hampered testing of other systems.

Each of the other systems had its own merits: Siemens' HiPath access points are extremely efficient with power, while Aerohive Networks' HiveAP offers an innovative alternative to controller-based designs and very high throughput. Motorola's new AP-7131 is still a work in progress and needs further software tweaks, but it too offers a unique design that soon will support up to three radios per access point, enabling enterprises to run Wi-Fi and WiMAX on the same device.

802.11n throughput and latency

We assessed all systems in terms of pure 802.11n performance; mixed-mode performance handling both 802.11n and legacy 11a and 11g clients; performance with a mix of common enterprise application types (our "WiMix" test, in which wireless clients handle a mixture of different frame sizes); power consumption; and system features.

"How fast will it go?" is understandably the first question when it comes to assessing 802.11n technology. We sought to answer that question by measuring throughput across eight access points, each moving traffic between 20 wired and 20 802.11n wireless clients (see "How we did it").

In these tests, access points used only 5-GHz radios; in later tests described below, we turned on both 2.4- and 5-GHz radios and used a mix of 802.11n and non-802.11n clients. For now, though, the focus was on pure 802.11n throughput and latency.

Using the VeriWave WaveTest WT-90 traffic generator/analyzer, we pounded each set of devices with short, midsize and large frames (in separate tests) to find the highest rate where the access points forwarded all traffic without loss – the throughput rate.

One significant finding is that traffic direction matters. In separate tests with frames moving downstream (from gigabit Ethernet to wireless clients), upstream and bidirectionally, throughput rates varied widely.

In the downstream tests, Siemens' access points moved large frames the fastest among all systems. Overall system throughput was greater than 2Gbps, or nearly 259Mbps on each of eight access points. Overall system throughput for the other three vendors' access points when handling large frames was between 1.89Gbps and 1.94Gbps.

Chart on throughput.

Upstream traffic generally achieved the highest rates. The Aerohive access points came out tops in the 802.11n upstream tests, moving traffic fastest for all three frame sizes. In fact, the HiveAP 340s' throughput for large frames headed upstream – 2.109Gbps, or nearly 264Mbps per access point – was the fastest data rate we recorded in the entire test.

These results are good news for all vendors: Even the slowest result is dramatically higher than the roughly 25Mbps per access point available from current 802.11g or 802.11a products. In the best case, throughput is better than 10 times higher with enterprise-grade 802.11n gear.

While access points generally moved large frames close to the theoretical maximum rates in the downstream and upstream tests, it was a different story with bidirectional traffic. Aerohive's access points were fastest by far, moving large frames bidirectionally around 2.7 times faster than the slowest access points (from Siemens).

But the top rate bidirectionally, even for the Aerohive access points, was only around 70% as fast as its upstream-only rate. Limitations in internal bus capacity, direct memory access transfer capacity and memory optimization may explain the difference in rates.

So far we've concentrated on large-frame testing, which generally produces the highest rates. Throughput differences for short and mid-length frames were more pronounced than with large frames; in some cases we weren't even able to complete throughput testing.

Here, packet-processing horsepower is the key determinant of throughput, and that in turn depends on the access point's CPU and the firmware that shuttles frames to and from the CPU. Given a heavy enough load, an access point may degrade VoIP or video responsiveness, slow TCP connections or even become unresponsive in testing.

We're presenting throughput in both bits and frames per second, allowing you to see the effect of packet-processing limits. With short frames – which are the most common type on enterprise networks, mostly because of TCP acknowledgements – frame rates varied widely between vendors.

The Aerohive access points were fastest at moving short and midsize frames downstream, in both cases by a wider margin over other vendors than in the large-frame tests. However, no system came anywhere close to the theoretical limit of around 1.5 million frames per second in the short-frame tests. Because many applications use short frames – including VoIP and especially anything running over TCP (for acknowledgements) – lower throughput with shorter frames can and likely will have an adverse effect on application performance.

We were unable to complete upstream testing with the Motorola and Siemens access points. Two issues with the software Motorola supplied for its AP-7131 made it impossible for us to obtain throughput results in testing with 88-byte frames. After we completed testing, Motorola said it fixed these issues and obtained significantly improved results with a new software version now available to customers. We did not verify this assertion.

We also were unable to obtain throughput results with the software version Siemens supplied with its HiPath Wireless AP 3610, not only with 88-byte frames but also with 512-byte frames in both upstream and bidirectional tests.

It's an industry-standard practice to find the throughput rate using a binary search, offering varying loads in successive iterations. The Siemens access point would become unresponsive after receiving heavy loads from the VeriWave test instrument, making results from all subsequent test iterations invalid. In tests with 88-byte frames, the Siemens access points rebooted in some cases.
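That binary search can be sketched as follows; `forwards_without_loss`, standing in for a single test-instrument trial at a given load, and the 1-Mbps search resolution are illustrative assumptions, not VeriWave's actual implementation:

```python
def find_throughput(forwards_without_loss, max_rate_mbps: float,
                    resolution_mbps: float = 1.0) -> float:
    """RFC 2544-style binary search: the highest offered load (Mbps)
    the device under test forwards with zero frame loss."""
    lo, hi = 0.0, max_rate_mbps
    best = 0.0
    while hi - lo > resolution_mbps:
        mid = (lo + hi) / 2
        if forwards_without_loss(mid):   # one test iteration at load `mid`
            best, lo = mid, mid          # no loss: search higher loads
        else:
            hi = mid                     # frame loss: search lower loads
    return best

# Toy device that starts dropping frames above 259 Mbps:
print(find_throughput(lambda rate: rate <= 259.0, 300.0))
```

Note that each iteration trusts the device to behave consistently; an access point that becomes unresponsive after a heavy load, as described above, invalidates every subsequent iteration of the search.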

Siemens says this problem does not occur on customer networks, and that its access points wouldn't have reset if we'd disabled a watchdog timer in the access point software. Throughput tests are by definition stress tests, and aren't intended to represent some definition of "real world"; the WiMix tests, discussed later in this article, are a better representation of the traffic enterprises actually handle. Also, the fact that access points became unresponsive or rebooted troubles us; that shouldn't happen no matter how heavy a load users throw at them.

We also measured latency and jitter (latency variation) for 802.11n access points. Minimizing delay obviously matters for time-sensitive applications such as VoIP and video, but it also affects anything running over TCP – and that's almost all traffic in enterprise networks. Delay a packet too long, and TCP stacks can respond with rate throttling, retransmissions and possibly even connection timeouts.
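As a rough illustration of how such measurements reduce to the numbers reported below, here is a sketch that summarizes per-frame latency samples as average, maximum and jitter; the mean-of-consecutive-differences definition of jitter is one common convention, not necessarily the one the test instrument uses:

```python
from statistics import mean

def latency_stats(samples_ms):
    """Summarize per-frame latency samples (in milliseconds).
    Jitter here is the mean absolute difference between consecutive
    latencies -- one common convention among several."""
    deltas = [abs(b - a) for a, b in zip(samples_ms, samples_ms[1:])]
    return {
        "avg": mean(samples_ms),
        "max": max(samples_ms),
        "jitter": mean(deltas) if deltas else 0.0,
    }

# A mostly steady stream with one stray delayed frame: the single
# outlier drives the maximum (and the average) far above the typical
# per-frame delay of about 5 ms.
stats = latency_stats([5.0, 5.2, 4.9, 5.1, 1200.0, 5.0, 5.2])
print(stats["max"])   # 1200.0
```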

Chart showing latency of the products.

Across the board, latency and jitter were generally highest when moving downstream, from Ethernet to wireless clients. This is to be expected given that frames move from a faster medium to a slower one in this direction.

Bluesocket's access points delayed packets the shortest amount of time in most of the downlink and uplink tests, often by wide margins over other access points for downstream traffic. Also, the difference between average and maximum delay was generally lower for Bluesocket access points than for those from other vendors.

That said, average latencies for all access points were on the high side. Real-time application performance begins to suffer with delays of 10 to 20 milliseconds or more, and we measured many instances of much larger delays. For its part, Aerohive noted that we measured latency only at the throughput rate (as RFC 2544 requires us to do) but not with lower loads, at which latency and jitter can be far lower. Testing at lower loads also might have reduced the sizable differences between average and maximum latencies.

Speaking of those differences, the Aerohive and Siemens access points exhibited very large maximum delays in some tests involving 512- and 1,518-byte frames. In one case, the Aerohive access point delayed a few packets for 18 seconds, easily long enough to disrupt virtually any application. In this case, an issue with firmware caused the access point to buffer some packets from a previous test run until we offered new traffic. During our tests, Aerohive supplied a new firmware version that corrected most, but not all, instances of this behavior. Again, latency and jitter may be lower with lower loads.

Maximum latency for the Siemens access point also exceeded 1 second in some cases. Siemens again noted that this was a stress test.

Mixed-mode throughput and latency

While these tests offer a thorough picture of 802.11n performance, few if any enterprises will deploy pure 802.11n-only networks on day one; instead, they're likely to deploy a mix of 802.11n and legacy wireless clients.

To get a sense of how access points would handle multiple client types, we asked vendors to enable both 2.4- and 5-GHz radios in their access points. Then we associated 16 802.11n clients to each radio, plus four 802.11g clients to the 2.4-GHz radio and four 802.11a clients to the 5-GHz radio. We did not use legacy 802.11b clients in this test because they're becoming increasingly scarce and they would have dragged down rates for all clients.

CPU processing power and bus bottlenecks were even bigger factors in these mixed-mode tests than in the pure 802.11n setups. That's because access points must service frames headed to and from two radios rather than one. Because of this, and because legacy clients run at slower rates (thus keeping 802.11n clients off the air at least part of the time), throughput was generally lower in these tests. In fact, averaging all bit rates for all vendors, throughput in the mixed-mode tests was only 24% of the average of all results from the pure 802.11n throughput tests.

Chart showing performance in mixed mode.

One thing that did carry over from the 802.11n-only tests was the top performance of Aerohive's access points, at least with midsize and large frames. The Aerohive access points were generally fastest in the downstream, upstream and bidirectional tests. But a glitch with the software image we tested prevented us from testing the Aerohive access points with 88-byte frames in the mixed-mode configuration. Aerohive says it has since fixed the software issue, but we did not verify this. The same is true for Motorola: The vendor says it corrected software issues after our test window, but we did not verify this.

Rates were generally highest for upstream traffic, as in the 802.11n tests. However, unlike the 802.11n tests, rates tailed off dramatically both for bidirectional and downstream traffic. Further, even the very highest upstream result was nearly 30% lower than with 802.11n clients only, despite two radios being active and thus twice as much capacity theoretically being available. (Read a recent test on wireless LAN management gear.)

Because few applications involve traffic in one direction only, and since few enterprises will run 802.11n alone on day one, our results suggest that network managers shouldn't expect the same high rates from mixed-mode deployments as with pure 802.11n setups. To be sure, rates in these tests are again far higher than would be possible with 802.11g or 11a access points; but they're nowhere near as fast as in the pure 802.11n tests.

One possible counterargument is that network designers should deploy all 802.11n clients on the 5-GHz radio and dedicate the 2.4-GHz radio to legacy 802.11g and 802.11b clients. This may be a sound network design practice, but a quick look at frame rates belies the argument that overall performance would improve. None of the systems came anywhere close to meeting theoretical limits of around 1.5 million frames per second with short frames or 178,000 frames per second with long frames (measured across eight access points). The big gap between theoretical and observed frame rates suggests that access points' CPUs and buses will be limiting factors well before the systems hit any bandwidth bottlenecks.
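The frame-rate/bit-rate relationship behind those theoretical limits is a straightforward conversion; this sketch ignores wireless MAC overhead and works from the article's round numbers:

```python
def frames_per_second(bit_rate_gbps: float, frame_bytes: int) -> float:
    """Convert an aggregate bit rate to a frame rate for a fixed frame size."""
    return bit_rate_gbps * 1e9 / (frame_bytes * 8)

# Roughly 2.16 Gbps of 1,518-byte frames corresponds to about
# 178,000 frames per second -- the long-frame limit cited above:
print(round(frames_per_second(2.16, 1518)))

# Short frames stress packet processing far more: even a modest
# aggregate bit rate implies a seven-figure frame rate.
print(frames_per_second(1.056, 88))   # on the order of 1.5 million
```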

We also measured latency and jitter for mixed-mode traffic. Average latency was generally slightly lower than in the 802.11n tests, not surprising considering the lower loads involved. Once again there were big differences between average and maximum latencies, with the latter jumping well above 1 second in three cases involving the Aerohive and Siemens access points.

Chart showing mixed mode jitter.

Certainly these high maximum delays can adversely affect applications. However, because jitter remained relatively low across the board it's likely that the high maximum latencies were caused by only a few stray frames (something we verified in the case of Aerohive's access points by examining capture files).

Power grabs

Power consumption is a key concern with 802.11n. The central question is whether the new 802.11n access points will draw more than the 12.95-watt maximum permitted by the existing 802.3af PoE specification.

Some of the more power-hungry access points may need even more juice than the 15.4-watt maximum that today's PoE power sources provide. (The 2.45-watt difference between device and power source limits exists to account for power dissipation in cables and voltage fluctuations; in practice, actual dissipation is much smaller, even over maximum-length cables, typically a few hundred milliwatts.)
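A sketch of the budget math, with the cable-loss figure an illustrative assumption in line with the few-hundred-milliwatt estimate above:

```python
AF_DEVICE_LIMIT_W = 12.95   # 802.3af maximum draw at the powered device
AF_SOURCE_LIMIT_W = 15.4    # 802.3af maximum supplied at the power source

def fits_802_3af(measured_draw_w: float, cable_loss_w: float = 0.3) -> bool:
    """True if a device's measured draw, plus an assumed cable
    dissipation (a few hundred milliwatts in practice), stays within
    what an 802.3af power source can deliver."""
    return (measured_draw_w <= AF_DEVICE_LIMIT_W and
            measured_draw_w + cable_loss_w <= AF_SOURCE_LIMIT_W)

print(fits_802_3af(10.9))   # True: within budget even under heavy load
print(fits_802_3af(18.0))   # False: needs a higher-wattage power source
```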

Power usage is a major issue for some enterprises, especially those that only recently put PoE switches or injectors in place. For network designers, the question is whether it's necessary to trade off some performance to stay within the power budget. The IEEE is working on a higher-wattage version of the PoE spec, but work isn't yet complete.

To determine maximum power draw, we enabled both radios on one access point from each vendor and associated 20 802.11n clients to each radio. We also configured the access points to use channel bonding, ensuring maximum bandwidth and thus the highest possible power draw.

Working with a Fluke multimeter and probe, we took three measurements: once with no traffic to determine power usage when idle, and again with downstream flows of 88- and 1,518-byte frames, each offered at the throughput rate. The Fluke multimeter recorded the maximum power used in each test.

Clearly, the greenest of all the access points came from Siemens. When idle, Siemens' HiPath Wireless AP 3620 used only 6.3 watts, less than half the limit of the existing PoE spec. Even under the heaviest load, the Siemens access point drew less than 11 watts, again well under the 12.95-watt limit. These results validate Siemens' claim that its 802.11n gear does not require a forklift upgrade of existing PoE infrastructure.

Chart showing power consumption.

At the other end of the spectrum, the Aerohive HiveAP 340 was over the 12.95-watt line in all three tests, drawing as much as 18 watts when forwarding 1,518-byte frames. Aerohive access points have a "SmartPoE" feature that can dynamically adjust power consumption to match that available from an 802.3af-compatible power source, but we did not test this. After reviewing its PoE test results, Aerohive said SmartPoE would have resulted in significantly less power draw, roughly equivalent forwarding rates and a smaller coverage area, but again we did not verify this.

Motorola's AP-7131 also exceeded the current PoE limit, but only when handling 1,518-byte frames. While it's probably possible to run the Motorola access points with existing PoE gear (because of the 2.45 extra watts of headroom between devices and power supplies), it's safest to use new "PoE-plus" power sources, which supply power at levels above 15.4 watts, with either the Aerohive or Motorola access points.

As noted, there are power/performance tradeoffs involved in assessing PoE. Traffic rates for Aerohive's access points were much higher than the others in this test, but then again so was power usage. For enterprises looking for the absolute fastest system, adding new power supplies may be worthwhile. On the other hand, enterprises looking to leverage existing PoE infrastructure are safe with either the Bluesocket or Siemens access points, as both stayed under the 12.95-watt limit in our tests.

The Siemens access points offered the best combination of power and performance: They delivered more traffic faster per watt used than any other system tested, while at the same time staying well under the power budgets of existing PoE gear.

Even though all systems implement the same 802.11n protocol, and use the same Atheros radio chipset, we saw very different results in testing. The new 802.11n systems already offer vastly higher performance than their predecessors, and with further refinement of their software they could represent a real step toward making wireless the default when it comes to enterprise connectivity. (Compare other wireless products in Network World's buyers' guide.)

Newman is president of Network Test, an independent test lab in Westlake Village, Calif. He can be reached at dnewman@networktest.com.

Thanks: Network World gratefully acknowledges the support of test equipment vendors that made this project possible. VeriWave supplied not only its WaveTest WT-90 test system but also considerable engineering support. Those at VeriWave supporting this project included Tom Alexander, Tim Bennington-Davis, Carl Brown, Eran Karoly, Jerry Perser and Hardev Soor. Thanks too to Fluke, which provided a Fluke 87V multimeter and i30 DC clamp meter for measuring power consumption.

NW Lab Alliance

Newman is also a member of the Network World Lab Alliance, a cooperative of the premier reviewers in the network industry each bringing to bear years of practical experience on every review. For more Lab Alliance information, including what it takes to become a member, go to www.networkworld.com/alliance.
