802.11n gear 10 times faster than current Wi-Fi offerings
Throughput tops 250Mbps in groundbreaking test; Bluesocket wins
Testing 802.11n wireless LAN gear for enterprises means thinking big.
With the latest version of Wi-Fi promising vastly higher data rates compared with previous incarnations, a couple of laptops running a few FTP sessions through a single access point won't do.
Instead, Network World set up the largest public 802.11n test ever conducted. We invited all enterprise Wi-Fi vendors to supply not one but eight 802.11n access points, along with controllers if needed. Working with test instrument vendor VeriWave, we crafted test traffic from hundreds (and in some cases thousands) of virtual clients to see just how high the new 802.11n systems would scale, both in pure 802.11n settings and also with a mix of 802.11n and legacy clients. In all these tests, the goal was to determine 802.11n performance in an enterprise context.
Four vendors took us up on the challenge: Aerohive, Bluesocket, Motorola and Siemens. Some big names declined to take part, leaving us to wonder how ready their 802.11n offerings actually are (see "Big players missing in action"). We stand at the ready to test these products against our existing methodology, should they become comfortable enough to place their gear in a public test.
The vendors that did participate proved the adage that 90% of life is about showing up. Multiple vendors cracked the 2-Gbps mark in pure 802.11n throughput tests, pushing data rates of 250Mbps or more per access point. That's around a 10-fold improvement in throughput over existing 802.11g and 802.11a access points, which makes a compelling case for considering 802.11n as a real alternative to wired connectivity in the enterprise.
Power is a big concern with the new systems, especially because some may need more juice than standards-based power-over-Ethernet (PoE) switches can supply. Some systems stayed within the limits of current PoE specs, while others may require upgrades to larger power supplies.
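For context, a standards-based 802.3af PoE port delivers roughly 12.95 watts to the powered device. The quick check below illustrates that budget math; the wattage figures are hypothetical placeholders, not measurements from this test.

```python
# Rough PoE budget check against the 802.3af limit of about 12.95W at the
# powered device. The draw figures are hypothetical placeholders, not
# measurements from this test.

POE_8023AF_BUDGET_W = 12.95

def fits_standard_poe(draw_watts: float) -> bool:
    """True if an access point's draw stays within an 802.3af port's budget."""
    return draw_watts <= POE_8023AF_BUDGET_W

for ap, draw in {"single-radio draw": 10.5, "dual-radio 11n draw": 14.0}.items():
    verdict = "fits 802.3af" if fits_standard_poe(draw) else "needs more power"
    print(f"{ap}: {draw}W -> {verdict}")
```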
The new systems also showed rough spots in a few places. We couldn't complete throughput tests in some cases because access points became unresponsive or even rebooted. That's especially interesting given that all systems tested are built around the same Atheros radio module. The very different results speak to the different optimizations each vendor has done in working with the Atheros radios.
In the end, Bluesocket's BlueSecure access points offer the best combination of performance, power efficiency and features. Bluesocket's system wasn't the fastest we tested, but it exhibited consistently low latency and jitter, and it didn't suffer from some of the software bugs that hampered testing of other systems.
Each of the other systems had its own merits: Siemens' HiPath access points are extremely efficient with power, while Aerohive Networks' HiveAP offers an innovative alternative to controller-based designs and very high throughput. Motorola's new AP-7131 is still a work in progress and needs further software tweaks, but it too offers a unique design that soon will support up to three radios, enabling enterprises to run Wi-Fi and WiMAX on the same access point.
802.11n throughput and latency
We assessed all systems in terms of pure 802.11n performance; mixed-mode performance handling both 802.11n and legacy 11a and 11g clients; performance with a mix of common enterprise application types (our "WiMix" test, in which wireless clients handle a mixture of different frame sizes); power consumption; and system features.
"How fast will it go?" is understandably the first question when it comes to assessing 802.11n technology. We sought to answer that question by measuring throughput across eight access points, each moving traffic between 20 wired and 20 802.11n wireless clients (see "How we did it".
In these tests, access points used only 5-GHz radios; in later tests described below, we turned on both 2.4- and 5-GHz radios and used a mix of 802.11n and non-802.11n clients. For now, though, the focus was on pure 802.11n throughput and latency.
Using the VeriWave WaveTest WT-90 traffic generator/analyzer, we pounded each set of devices with short, midsize and large frames (in separate tests) to find the highest rate where the access points forwarded all traffic without loss – the throughput rate.
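To give a sense of how that zero-loss rate is found, the sketch below shows the kind of binary search over offered load that such throughput testing typically uses; measure_loss is a hypothetical stand-in for a trial run on the traffic generator, not an actual VeriWave API.

```python
# Sketch of a binary search for the zero-loss throughput rate, in the spirit of
# RFC 2544 benchmarking. measure_loss(rate) is a hypothetical hook that runs one
# trial at the given offered load and returns the number of frames lost.

def find_throughput(measure_loss, max_rate_mbps, resolution_mbps=1.0):
    """Return the highest offered load (in Mbps) forwarded without loss."""
    low, high = 0.0, max_rate_mbps
    best = 0.0
    while high - low > resolution_mbps:
        offered = (low + high) / 2
        if measure_loss(offered) == 0:   # every frame forwarded: push harder
            best = offered
            low = offered
        else:                            # loss observed: back off
            high = offered
    return best
```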
One significant finding is that traffic direction matters. In separate tests with frames moving downstream (from gigabit Ethernet to wireless clients), upstream and bidirectionally, throughput rates varied widely.
In the downstream tests, Siemens' access points moved large frames the fastest among all systems. Overall system throughput was greater than 2Gbps, or nearly 259Mbps on each of eight access points. Overall system throughput for the other three vendors' access points when handling large frames was between 1.89Gbps and 1.94Gbps.
Upstream traffic generally achieved the highest rates. The Aerohive access points came out tops in the 802.11n upstream tests, moving traffic fastest for all three frame sizes. In fact, the HiveAP 340s' throughput for large frames headed upstream – 2.109Gbps, or nearly 264Mbps per access point – was the fastest data rate we recorded in the entire test.
These results are good news for all vendors: Even the slowest result is dramatically higher than the roughly 25Mbps per access point available from current 802.11g or 802.11a products. In the best case, throughput is better than 10 times higher with enterprise-grade 802.11n gear.
While access points generally moved large frames close to the theoretical maximum rates in the downstream and upstream tests, it was a different story with bidirectional traffic. Aerohive's access points were fastest by far, moving large frames bidirectionally around 2.7 times faster than the slowest access points (from Siemens).
But the top rate bidirectionally, even for the Aerohive access points, was only around 70% as fast as its upstream-only rate. Limitations in internal bus capacity, direct memory access transfer capacity and memory optimization may explain the difference in rates.
So far we've concentrated on large-frame testing, which generally produces the highest rates. Throughput differences for short and mid-length frames were more pronounced than with large frames; in some cases we weren't even able to complete throughput testing.
Here, packet-processing horsepower is the key determinant of throughput, and that in turn depends on the access point's CPU and the firmware that shuttles frames to and from the CPU. Given a heavy enough load, an access point may degrade VoIP or video responsiveness, slow TCP connections or even become unresponsive in testing.
We're presenting throughput in both bits and frames per second, allowing you to see the effect of packet-processing limits. With short frames – which are the most common type on enterprise networks, mostly because of TCP acknowledgements – frame rates varied widely between vendors.
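To see why short frames are so punishing, a rough conversion between bit rate and frame rate follows; it ignores wireless MAC and PHY overhead, so the figures are illustrative rather than exact.

```python
# Rough conversion from bit rate to frame rate for the frame sizes used in the
# test. Overhead on the wireless medium is ignored, so these are ballpark figures.

def frames_per_second(bit_rate_bps: float, frame_bytes: int) -> float:
    return bit_rate_bps / (frame_bytes * 8)

for size in (88, 512, 1518):
    fps = frames_per_second(2e9, size)   # a 2Gbps aggregate load
    print(f"{size:>5}-byte frames: roughly {fps:,.0f} frames/sec")

# Carrying 2Gbps in 88-byte frames takes about 17 times as many frames per
# second as carrying it in 1,518-byte frames -- which is where
# packet-processing limits bite.
```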
The Aerohive access points were fastest at moving short and midsize frames downstream, in both cases by a wider margin over other vendors than in the large-frame tests. However, no system came anywhere close to the theoretical limit of around 1.5 million frames per second in the short-frame tests. Because many applications use short frames – including VoIP and especially anything running over TCP (for acknowledgements) – lower throughput with shorter frames can and likely will have an adverse effect on application performance.
We were unable to complete upstream testing with the Motorola and Siemens access points. Two issues with the software Motorola supplied for its AP-7131 made it impossible for us to obtain throughput results in testing with 88-byte frames. After we completed testing, Motorola said it fixed these issues and obtained significantly improved results with a new software version now available to customers. We did not verify this assertion.
We also were unable to obtain throughput results with the software version Siemens supplied with its HiPath Wireless AP 3610, not only with 88-byte frames but also with 512-byte frames in both upstream and bidirectional tests.
It's an industry-standard practice to find the throughput rate using a binary search, offering varying loads in successive iterations. The Siemens access point would become unresponsive after receiving heavy loads from the VeriWave test instrument, making results from all subsequent test iterations invalid. In tests with 88-byte frames, the Siemens access points rebooted in some cases.
Siemens says this problem does not occur on customer networks, and that its access points wouldn't have reset if we'd disabled a watchdog timer in the access point software. Throughput tests are by definition stress tests, and aren't intended to represent some definition of "real world"; the WiMix tests, discussed later in this article, are a better representation of the traffic enterprises actually handle. Also, the fact that access points became unresponsive or rebooted troubles us; that shouldn't happen no matter how heavy a load users throw at them.
We also measured latency and jitter (latency variation) for 802.11n access points. Minimizing delay obviously matters for time-sensitive applications such as VoIP and video, but it also affects anything running over TCP – and that's almost all traffic in enterprise networks. Delay a packet too long, and TCP stacks can respond with rate throttling, retransmissions and possibly even connection timeouts.
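As a simple illustration of what we mean by jitter, the snippet below summarizes a set of per-frame delay samples; the test instrument's own jitter calculation may differ, so treat this as a sketch of "latency variation" rather than the exact method.

```python
# One simple way to summarize average latency, maximum latency and jitter from
# per-frame delay samples (milliseconds). This illustrates "latency variation,"
# not necessarily the test instrument's exact calculation.

from statistics import mean

def latency_stats(delays_ms):
    avg = mean(delays_ms)
    worst = max(delays_ms)
    # Jitter here is the mean absolute change between consecutive delays.
    jitter = mean(abs(b - a) for a, b in zip(delays_ms, delays_ms[1:]))
    return avg, worst, jitter

avg, worst, jitter = latency_stats([4.2, 5.1, 4.8, 22.0, 5.0])
print(f"avg {avg:.1f}ms, max {worst:.1f}ms, jitter {jitter:.1f}ms")
```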
Across the board, latency and jitter were generally highest when moving downstream, from Ethernet to wireless clients. This is to be expected given that frames move from a faster medium to a slower one in this direction.
Bluesocket's access points delayed packets the shortest amount of time in most of the downlink and uplink tests, often by wide margins over other access points for downstream traffic. Also, the difference between average and maximum delay was generally lower for Bluesocket access points than for those from other vendors.
That said, average latencies for all access points were on the high side. Real-time application performance begins to suffer with delays of 10 to 20 milliseconds or more, and we measured many instances of much larger delays. For its part, Aerohive noted that we measured latency only at the throughput rate (as RFC 2544 requires us to do) but not with lower loads, at which latency and jitter can be far lower. Measuring at lower loads also might have reduced the sizable differences between average and maximum latencies.
Speaking of those differences, the Aerohive and Siemens access points exhibited very large maximum delays in some tests involving 512- and 1,518-byte frames. In one case, the Aerohive access point delayed a few packets for 18 seconds, easily long enough to disrupt virtually any application. In this case, an issue with firmware caused the access point to buffer some packets from a previous test run until we offered new traffic. During our tests, Aerohive supplied a new firmware version that corrected most, but not all, instances of this behavior. Again, latency and jitter may be lower with lower loads.
Maximum latency for the Siemens access point was also up above 1 second in some cases. Siemens again noted that this was a stress test.
Mixed-mode throughput and latency
While these tests offer a thorough picture of 802.11n performance, few if any enterprises will deploy pure 802.11n-only networks on day one; instead, they're likely to deploy a mix of 802.11n and legacy wireless clients.
To get a sense of how access points would handle multiple client types, we asked vendors to enable both 2.4- and 5-GHz radios in their access points. Then we associated 16 802.11n clients to each radio, plus four 802.11g clients to the 2.4-GHz radio and four 802.11a clients to the 5-GHz radio. We did not use legacy 802.11b clients in this test because they're becoming increasingly scarce and they would have dragged down rates for all clients.
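For clarity, the small summary below restates that client mix in code; it is simply our tabulation of the setup just described, with the totals following directly from it.

```python
# Per-access-point client mix in the mixed-mode tests, as described above,
# and the resulting client count across the eight access points under test.

client_mix = {
    "2.4GHz radio": {"802.11n": 16, "802.11g": 4},
    "5GHz radio":   {"802.11n": 16, "802.11a": 4},
}

per_ap = sum(n for radio in client_mix.values() for n in radio.values())
print(f"{per_ap} clients per access point, {per_ap * 8} across eight access points")
# -> 40 clients per access point, 320 across eight access points
```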
CPU processing power and bus bottlenecks were even bigger factors in these mixed-mode tests than in the pure 802.11n setups. That's because access points must service frames headed to and from two radios rather than one. Because of this, and because legacy clients run at slower rates (thus keeping 802.11n clients off the air at least part of the time), throughput was generally lower in these tests. In fact, when averaging all bit rates for all vendors, throughput in the mixed-mode tests was only 24% that of the average of all results from the pure 802.11n throughput tests.
One thing that did carry over from the 802.11n-only tests was the top performance of Aerohive's access points, at least with midsize and large frames. The Aerohive access points were generally fastest in downstream, upstream and bidirectional tests alike. But a glitch with the software image we tested prevented us from testing the Aerohive access points with 88-byte frames in the mixed-mode configuration. Aerohive says it has since fixed the software issue, but we did not verify this. The same is true for Motorola: the vendor says it corrected software issues after our test window, but we did not verify this.