Craig Mathias
Principal

Testing Wireless LANs: What’s Next?

Opinion
Apr 13, 2012 | 4 mins
Wi-Fi

My recent article on testing three-stream WLANs provoked a number of interesting discussions with colleagues and others – and several ideas for what might be next in this vital field.

I have spent literally hours discussing my recent series of three-stream .11n benchmarks with colleagues and others, clarifying various fine points and exploring future possibilities. It’s gratifying that WLAN performance measurement and evaluation, one of the most difficult areas of benchmarking due to the presence of way more variables than equations, continues to generate such a high level of interest. As I noted, even I was surprised by the results in that particular case, and while these are once again not suitable by themselves for making a purchasing decision or establishing any Great Truth, they do tell us a lot about general progress in WLAN technology. That’s why testing remains important, and will for some time to come.

Even though I used very sophisticated (and expensive – as in beyond the reach of most people who want to do this kind of work) test equipment, a lot of otherwise valuable information was missing. One proposal I want to make is for the development of a benchmarking tool that is a hybrid of an 802.11 packet-capture app and a Layer-3 (or above) benchmark. It would be interesting to watch what clients (and infrastructure) are doing as they operate under load. This could be very useful in understanding the behavior of, and perhaps tuning, the firmware of specific implementations, and also in refining the protocol itself. I’ve seen truly bizarre client behavior at times (a stationary client probing with an RSSI of -30 dBm, for example), and it would be good to know when and why this occurs.
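To make the idea concrete, here is a minimal sketch of what such a hybrid tool might look like (in Python, using scapy): capture 802.11 probe requests, with per-frame RSSI from the RadioTap header, on a monitor-mode interface while iperf3 generates Layer-3 load at the same time. The interface name and server address are placeholders for your own setup, and a real tool would of course correlate far more than probe requests.

```python
# A minimal sketch of the hybrid capture/benchmark idea: sniff 802.11
# probe requests (with per-frame RSSI) on a monitor-mode interface while
# an iperf3 run generates Layer-3 load. Assumes scapy is installed, the
# adapter supports monitor mode, and that "mon0" and the server address
# below are placeholders for your own setup.
import subprocess
import time

from scapy.all import sniff
from scapy.layers.dot11 import Dot11, Dot11ProbeReq, RadioTap

MONITOR_IFACE = "mon0"       # assumption: your monitor-mode interface
IPERF_SERVER = "192.0.2.10"  # assumption: your iperf3 server's address
RUN_SECONDS = 30

def log_probe(pkt):
    """Record who is probing, and how loudly, during the load test."""
    if pkt.haslayer(Dot11ProbeReq) and pkt.haslayer(RadioTap):
        rssi = getattr(pkt[RadioTap], "dBm_AntSignal", None)
        print(f"{time.time():.3f}  probe from {pkt[Dot11].addr2}  RSSI={rssi}")

# Kick off the Layer-3 load in the background...
load = subprocess.Popen(
    ["iperf3", "-c", IPERF_SERVER, "-t", str(RUN_SECONDS)],
    stdout=subprocess.DEVNULL,
)

# ...and watch 802.11 management traffic while it runs. A stationary
# client probing at -30 dBm mid-test is exactly the kind of oddity this
# correlation would surface.
sniff(iface=MONITOR_IFACE, prn=log_probe, timeout=RUN_SECONDS)
load.wait()
```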

Another possibility, one I wrote about many years ago, is what I call virtual benchmarking, wherein we build models of the behavior of various implementations and then subject them to various virtual workloads under simulated operating conditions. This, I know, sounds a bit far-fetched at present, but I think over time we’ll get valuable data from this approach.
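As a toy illustration of the concept, the sketch below models a handful of clients with an invented SNR-to-rate table and a textbook log-distance path-loss model, then runs a simulated workload against them. Every number in it is an assumption chosen for illustration; the point is only the shape of the approach – model, workload, repeat.

```python
# A toy "virtual benchmark": a deliberately crude model of client
# behavior run against a simulated workload. The rate table and the
# path-loss parameters are illustrative assumptions, not measurements
# of any real implementation.
import math
import random

# Simplified .11n-style rate steps (Mbps) keyed by minimum SNR (dB).
RATE_TABLE = [(25, 130.0), (20, 104.0), (15, 52.0), (10, 26.0), (5, 13.0)]

def snr_at(distance_m, tx_power_dbm=15.0, noise_floor_dbm=-95.0):
    """Log-distance path loss (exponent 3.5) plus lognormal shadowing."""
    path_loss = 40.0 + 35.0 * math.log10(max(distance_m, 1.0))
    shadowing = random.gauss(0.0, 4.0)  # dB
    return tx_power_dbm - path_loss - shadowing - noise_floor_dbm

def phy_rate(snr_db):
    for min_snr, rate in RATE_TABLE:
        if snr_db >= min_snr:
            return rate
    return 0.0  # out of range

def simulate(n_clients=10, trials=1000):
    """Average aggregate throughput with airtime shared equally."""
    total = 0.0
    for _ in range(trials):
        rates = [phy_rate(snr_at(random.uniform(3, 80))) for _ in range(n_clients)]
        # Equal airtime share: each client contributes rate / n_clients.
        total += sum(rates) / n_clients
    return total / trials

print(f"Modeled aggregate throughput: {simulate():.1f} Mbps")
```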

Before we get to that, though, I have a renewed interest in using test chambers and channel emulators as benchmarking tools. The key problem here has historically been the lack of good channel emulators that simulate what happens to a radio wave as it moves through space, but this challenge is rapidly being addressed. As an example, we have the octoBox from octoScope (a company founded by legendary engineer Fanny Mlinarsky; she is also the founder of test-equipment leader Azimuth Systems), and its associated octoFade channel modeling software. With this equipment it should be possible to do comparative testing in a completely repeatable environment, and, ultimately, to gather the data required to do cost-effective virtual benchmarking. I’m hoping to have the opportunity to try out the octoBox in the near future.
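To see why emulation buys repeatability, consider the small sketch below: a seeded Rayleigh fading model produces the identical “channel” on every run, so two devices can be compared under truly identical conditions – something open-air testing can never guarantee. This is a textbook fading model for illustration only, not octoScope’s actual octoFade implementation.

```python
# Why channel emulation enables repeatable testing: a fixed seed yields
# the exact same fading trace on every run. Textbook Rayleigh fading,
# for illustration only.
import math
import random

def rayleigh_fade_db(rng):
    """One Rayleigh-fading sample, returned as gain in dB."""
    # The magnitude of a unit-power complex Gaussian is Rayleigh.
    i, q = rng.gauss(0, math.sqrt(0.5)), rng.gauss(0, math.sqrt(0.5))
    return 20.0 * math.log10(math.hypot(i, q))

def faded_rssi_trace(mean_rssi_dbm=-55.0, samples=10, seed=42):
    rng = random.Random(seed)  # fixed seed -> fully repeatable channel
    return [mean_rssi_dbm + rayleigh_fade_db(rng) for _ in range(samples)]

# Same seed, same trace, run after run -- the repeatability a cabled
# chamber plus an emulator buys you over open-air testing.
print([f"{r:.1f}" for r in faded_rssi_trace()])
```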

And the ultimate might be a proposal I’ve made to the Wi-Fi Alliance and various leading industry players: let’s build a permanent, instrumented benchmarking/testing facility and make it available to players at all levels of the food chain – researchers, chip developers, finished-goods/systems vendors, and even end users – for all manner of testing work. I envision what I’ve been calling a BEB – a big, (otherwise) empty building, with appropriate office furniture – filled with test equipment and suitable for the evaluation of any wireless-LAN equipment or application.

In the interim, though, what I really want now is decent (and low-cost) client emulator software that can simulate multiple real clients on a single notebook PC or similar device, and a pocket-sized server for the other end of the benchmark connection. Appropriate test equipment is still way too expensive, and those of us who are often called upon to do real-world post-installation performance verification need something that is simple to use and travels well.
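Pending such a product, here is a rough sketch of the traffic-generation half of that wish: several concurrent TCP flows from a single machine, with a tiny in-process server standing in for the pocket-sized far end. A real client emulator would also need distinct 802.11 and IP identities per virtual client, which this deliberately omits.

```python
# Emulating N clients from one notebook: N concurrent TCP flows, with
# an in-process sink standing in for the "pocket-sized server". Traffic
# generation only -- no per-client 802.11/IP identities.
import asyncio
import time

PAYLOAD = b"x" * 65536
N_CLIENTS = 8
RUN_SECONDS = 5

async def sink(reader, writer):
    """Server side: drain whatever each emulated client sends."""
    while await reader.read(65536):
        pass
    writer.close()

async def emulated_client(port, stats):
    _, writer = await asyncio.open_connection("127.0.0.1", port)
    deadline = time.monotonic() + RUN_SECONDS
    sent = 0
    while time.monotonic() < deadline:
        writer.write(PAYLOAD)
        await writer.drain()
        sent += len(PAYLOAD)
    writer.close()
    stats.append(sent)

async def main():
    server = await asyncio.start_server(sink, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]
    stats = []
    await asyncio.gather(*(emulated_client(port, stats) for _ in range(N_CLIENTS)))
    print(f"{N_CLIENTS} clients moved {sum(stats) * 8 / RUN_SECONDS / 1e6:.0f} Mbps aggregate")
    server.close()

asyncio.run(main())
```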

And, by the way, we continue to maintain our policy here of not doing comparative testing of enterprise-class WLAN systems without the express knowledge and consent of all parties involved. I’ve made the mistake implied here in the past, and won’t go there again; there are simply too many variables in contemporary enterprise-class systems for an independent benchmarking organization to consider. One little tweak could dramatically change the outcome in any given case. The need for reputable, third-party sponsorship, then, has never been greater – and Network World is today one of the few organizations that continues to pick up this challenge.

Craig J. Mathias is a principal with Farpoint Group, an advisory firm specializing in wireless networking and mobile computing. Founded in 1991, Farpoint Group works with technology developers, manufacturers, carriers and operators, enterprises, and the financial community. Craig is an internationally recognized industry and technology analyst, consultant, conference speaker, author, columnist, and blogger. He regularly writes for Network World, CIO.com, and TechTarget. Craig holds an Sc.B. degree in Computer Science from Brown University, and is a member of the Society of Sigma Xi and the IEEE.
