Screaming MIMO

Blazing performance of 802.11n SOHO gear bodes well for enterprise MIMO

New 802.11n Draft 2 products targeted at the small-office/home-office market show a dramatic improvement in interoperability and a slight improvement in throughput over the Draft 1 products we tested last summer.

With major vendors such as Cisco and Meru Networks beginning to ship enterprise-grade MIMO gear (which we plan to test later this year), Farpoint Group, the wireless networking consultancy that conducted this round of testing, predicts a massive shift to 802.11n over the next year, with the technology dominating the market by early 2009.

When we tested 802.11n gear last year, we found a disappointingly broad range of variability in throughput and generally poor interoperability among so-called "Draft 1-compliant" products -- not an entirely unexpected result given the lack of any form of third-party interoperability certification.

This year, we tested wireless throughput and interoperability for residential/SOHO/small-business products based on Draft 2 of the IEEE 802.11n wireless LAN (WLAN) standard and certified by the Wi-Fi Alliance under its new Draft-n program.

This year's products -- the latest from Belkin, D-Link, Linksys, Netgear and SMC -- did much better in the interoperability department, with a few exceptions, and we occasionally saw performance better than the best of last year's crop. It's clear, though, that we still have a long way to go before interoperability reaches levels on a par with those common in 802.11g and 802.11a deployments.

In a nutshell, over the course of testing, we found:

* Excellent throughput performance in both the Linksys and SMC products, though each excelled on different tests -- Linksys in TCP, with 97.5Mbps, and SMC in User Datagram Protocol (UDP), with an amazing 122.7Mbps. Interestingly, Linksys only got to 67.33Mbps on UDP, and SMC produced a subpar 41.82Mbps on TCP.

Netgear came in second in UDP and third in TCP, and Belkin has some work to do in the driver department -- the Belkin client had a hard time maintaining connectivity with any of the routers, and couldn't complete the UDP tests.

* The Linksys router showed very good TCP interoperability with all the clients we tested and led the pack with an average across all clients of 79Mbps -- a good 25Mbps ahead of its nearest competitor. On the other end of the scale, the D-Link router averaged 26.9Mbps, and managed only 36.3Mbps even when paired with its own client. The best average client performance across all routers came from Netgear, at 62.2Mbps; at the other end was SMC, at only 35.8Mbps. The interoperability problems we discovered last year are clearly not yet solved -- but they also didn't affect our overall scoring, because heterogeneous configurations are not the preferred setup in a SOHO environment.

* Netgear took top honors in video reception range-limit tests, demonstrating the ability to transmit watchable video at an astounding 245 feet. But Belkin and SMC weren't far behind at 225 feet.

Why the variability in results, both between product pairings and in the individual benchmark runs? There are major differences in the designs of the systems tested, and especially in the WLAN chipsets employed. Drivers also make a huge difference in performance and reliability. Combine all of this with the vagaries of radio and wireless communications in general, and a high degree of variability in performance is to be expected. We should also point out that all of the results fall well below the theoretical maximum of 300Mbps for these products -- and, while that figure is technically correct, we question the wisdom of marketing to unsophisticated consumers with what is nothing more than an engineering spec.

What we tested

The intention of this year's test was to examine any Draft-n products that met two simple criteria. First, the products tested must have received Wi-Fi Draft-n certification; and second, any tested access point/router was required to have at least one Gigabit Ethernet port.

The reason for the former is obvious, as interoperability was one of our goals. The Gigabit Ethernet requirement, on the other hand, was essential because, based on preliminary testing, we felt Draft 2 products could easily exceed 100Mbps of Layer-7 throughput, placing any product with only a 10/100Mbps port at a major disadvantage.

Testing this year was performed in a residence not far from Network World's headquarters in Southborough, Mass. Our testing focused on parameters indicative of the performance that residential/SOHO/SMB users would care about in their own installations:

1. Throughput -- While wireless performance is sensitive to the distance between the endpoints (hereafter referred to as range), it's difficult to test a variety of rate-vs.-range conditions in the typical dwelling. So we performed a set of tests seeking the maximum TCP and UDP throughput of a given configuration in a fairly challenging test geometry -- straight up through two floors.

2. Interoperability -- We performed our TCP throughput test with every combination of client and access point, looking for any incompatibilities. Note that while Wi-Fi certification does specify interoperability, it does not test for throughput -- so we did.

3. Video delivery -- In addition to using our benchmark software to simulate a UDP videostream (a paced, constant-bit-rate flow of the sort sketched below), thus performing a quantitative benchmark, we also performed a qualitative/subjective analysis of video performance using a live high-definition videostream. The latter test is also how we measured effective range.
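For readers curious what the simulated videostream amounts to, here is a minimal Python sketch of a paced, constant-bit-rate UDP sender of the sort our benchmark generates. It is not our benchmark code, and the address, port, bit rate and packet size are illustrative assumptions only.

    import socket
    import time

    # Illustrative parameters -- NOT the settings from our benchmark runs.
    TARGET = ("192.168.1.100", 5001)  # hypothetical receiver address and port
    BITRATE = 20_000_000              # 20Mbps, roughly an HD videostream
    PACKET_SIZE = 1470                # payload bytes that fit a typical Ethernet MTU
    DURATION = 10                     # seconds to transmit

    def send_cbr_stream():
        """Pace fixed-size UDP datagrams to approximate a constant-bit-rate flow."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        payload = b"\x00" * PACKET_SIZE
        interval = (PACKET_SIZE * 8) / BITRATE  # seconds between datagrams
        deadline = time.monotonic() + DURATION
        next_send = time.monotonic()
        sent = 0
        while time.monotonic() < deadline:
            sock.sendto(payload, TARGET)
            sent += 1
            next_send += interval
            delay = next_send - time.monotonic()
            if delay > 0:
                time.sleep(delay)  # sleep only when ahead of schedule
        offered = sent * PACKET_SIZE * 8 / DURATION / 1e6
        print(f"sent {sent} datagrams, ~{offered:.1f}Mbps offered load")

    if __name__ == "__main__":
        send_cbr_stream()

Because UDP has no retransmission, whatever the receiver never logs is simply lost -- which is exactly why we report UDP numbers from the receiving end.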

We followed our standard procedures for benchmarking WLANs, which can be found in Farpoint Group Technical Note 2006-314.1, Benchmarking Wireless LANs: Recommended Practice. This included the use of a turntable for the wireless computer on the server end of the connection in the throughput testing so as to minimize any impact from suboptimal antenna orientation. We used a standard and free benchmarking application, Iperf, for throughput tests. Iperf is not an absolute measure of performance -- it's a benchmarking tool, after all, and our objective was simply to put the same load on each test configuration in a reproducible fashion (for full test methodology, see "How we did it").
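For those who want to put a comparable load on their own gear, a minimal sketch of driving Iperf from a Python script follows. It assumes the classic iperf 2 command-line tool is on the path and that a server instance (iperf -s) is already running on the wired side of the link; the server address and run length here are placeholders, not our actual settings.

    import re
    import subprocess

    SERVER = "192.168.1.100"  # hypothetical address of the machine running 'iperf -s'
    RUNS = 3                  # average several runs per configuration
    DURATION = 60             # seconds per run

    def tcp_throughput_mbps():
        """Run iperf as a TCP client several times and average the reported rates."""
        rates = []
        for _ in range(RUNS):
            out = subprocess.run(
                ["iperf", "-c", SERVER, "-t", str(DURATION), "-f", "m"],
                capture_output=True, text=True, check=True,
            ).stdout
            # iperf 2 summary lines end in something like '94.3 Mbits/sec'
            match = re.search(r"([\d.]+)\s+Mbits/sec", out)
            if match:
                rates.append(float(match.group(1)))
        return sum(rates) / len(rates) if rates else 0.0

    if __name__ == "__main__":
        print(f"average TCP throughput: {tcp_throughput_mbps():.1f}Mbps")

For UDP runs, adding iperf's -u flag along with a -b target bandwidth produces a paced stream much like the one sketched earlier.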

Does anyone really care about throughput?

The question has come up -- does anyone really care about wireless LAN performance benchmarks? Given the high degree of variability in both configurability and the contribution of the environment to results, do benchmarks really tell us anything?

I'd answer a great big "yes" to that one. While a buying decision will be based on many factors (hence, the ratings system we use in this story), the performance we see in benchmarks can be a significant predictor of how well a given product will work in a production application. 

Products with good performance tend to have more robust radios and overall system implementations, and will likely work better under a variety of operating conditions. Given that production radio environments will become more complex, primarily because of interference, a robust radio should be a key selling point. That's why stress testing -- using longer ranges and turntables, for example -- tells us a lot about how well a radio will perform in the often mission-critical real world.

Test operations and results

We began testing by running the Iperf throughput benchmarks with the internal wireless card on an IBM X40 notebook (an Atheros a/b/g Mini PCI Adapter II that replaced the original Intel b/g Centrino part), connecting to a Proxim AP-4000 dual-radio a/b/g access point that we regularly use. Note this configuration supported 802.11g only -- not Draft n -- thus providing a baseline. Security for these runs used WPA, as opposed to WPA2. Needless to say, all of the 802.11n products in homogeneous configurations yielded much higher performance than this baseline.

Just for fun, we then ran the same two tests using last year's winners, the Asus WL-106gM PC card and WL-566gM router. Our baseline Atheros/Proxim combination averaged 23.7Mbps using TCP, pretty typical for high-quality 802.11g components, and the Asus provided 86.6Mbps -- the second-best overall result of these tests, and not bad for last year's model. Note this number is not the same as the result we reported last year; this year's test conditions and geometry were different, after all.

We then proceeded to test every combination of the five client adapters and routers that were submitted by the vendors (see chart below). 

Tracking wireless TCP performance

With respect to homogeneous pairs, we noted some really excellent performance numbers, especially from Linksys in the server-to-client direction, where average performance was an astonishing 129Mbps -- by far the best of the bunch. Linksys also led when results for both directions were averaged, posting 97.5Mbps throughput and just edging the Netgear products, which turned in an also-excellent 96Mbps.

In the UDP race, designed to simulate video traffic under controlled conditions, SMC was far and away the winner, turning in a blistering 122.7Mbps upstream/downstream average over three runs, as reported at the server end so that packets that never arrived don't count (see chart below).

Tracking wireless UDP performance

Most of the other products turned in performance in the 70Mbps range, though Belkin could not complete this test; its client had trouble staying connected to any of the routers.

In the heterogeneous configurations, where we tested all adapters against all routers, again measuring only TCP throughput, overall performance was disappointing. It ranged from a low of 1.1Mbps for the SMC client/Netgear router pair to a high of 89.5Mbps for the Belkin client/Linksys router pair, with most pairings in the 50Mbps-plus range.

It's interesting to note, though, that in many cases, heterogeneous TCP throughput was better than the homogeneous performance of the laggards in this test. We should never, of course, expect that all wireless equipment will perform the same; such is not, after all, the case with wired devices. But we were hoping for a better heterogeneous interoperability showing this time given Wi-Fi certification. Some heterogeneous configurations, such as the SMC client with the Netgear router (again, 1.1Mbps) and the Linksys client with the D-Link router (9.4Mbps), were positively abysmal.
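The per-router and per-client averages we quote fall straight out of the client-by-router results matrix. As a minimal sketch of that arithmetic -- with obviously made-up placeholder numbers, not our measured data -- the calculation looks like this:

    # Hypothetical throughput matrix (Mbps); results[client][router] is the
    # averaged TCP throughput for that pairing. All values are placeholders.
    results = {
        "ClientA": {"RouterX": 80.0, "RouterY": 40.0},
        "ClientB": {"RouterX": 60.0, "RouterY": 20.0},
    }

    routers = sorted({router for row in results.values() for router in row})

    # Average for each client across all routers (how we ranked client adapters).
    for client, row in results.items():
        print(f"{client}: {sum(row.values()) / len(row):.1f}Mbps across all routers")

    # Average for each router across all clients (how we ranked routers).
    for router in routers:
        rates = [row[router] for row in results.values()]
        print(f"{router}: {sum(rates) / len(rates):.1f}Mbps across all clients")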

Qualitative video testing using the Slingbox actually turned into a quantitative test as we stretched the range of each homogeneous configuration as far as we could, ultimately yielding a maximum physical distance at which we could still receive high-quality video. A particularly good performance in this test was turned in by Netgear, which registered a whopping 245 feet between endpoints before the signal degraded to unwatchability. Belkin and SMC weren't far behind, though, each having a 225-foot range.

In determining our overall ratings, though, we considered a number of factors beyond throughput, including features, manageability, setup and documentation. All of the products tested, both routers and clients, were easy to set up and use, requiring only a few minutes each, and we'd recommend any of their utilities over Windows' built-in Zero Configuration service. While newbies and technophobes might find any of them daunting, no one with even a modest background in networking would have difficulty.

However, we do have a few points of note on these other features within each product combination that could prove distinctive in a buying decision.
