My recent article on testing three-stream WLANs provoked a number of interesting discussions with colleagues and others – and a number of ideas for what might be next in this vital field. I have spent literally hours discussing that series of three-stream .11n benchmarks, clarifying various fine points and exploring future possibilities. It’s gratifying that WLAN performance measurement and evaluation, one of the most difficult areas of benchmarking because it involves far more variables than equations, continues to generate such a high level of interest. As I noted, even I was surprised by the results in this particular case, and while they are once again not suitable by themselves for making a purchasing decision or establishing any Great Truth, they do tell us a lot about general progress in WLAN technology. That’s why testing remains important, and will for some time to come.

Even though I used very sophisticated (and expensive – as in beyond the reach of most people who want to do this kind of work) test equipment, a lot of otherwise valuable information was still missing. One proposal I want to make, then, is for a benchmarking tool that is a hybrid of an 802.11 packet-capture application and a Layer-3 (or above) benchmark. It would be interesting to watch what clients (and infrastructure) are doing as they operate under load. This could be very useful in understanding the behavior of, and perhaps tuning, the firmware of specific implementations, and also in refining the protocol itself. I’ve seen truly bizarre client behavior at times (a stationary client probing with an RSSI of -30, for example), and it would be good to know when and why this occurs.
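To make the idea a bit more concrete, here’s a minimal sketch (in Python, and only a sketch) of what the capture side of such a tool might look like, assuming a Linux machine with the scapy library installed and a Wi-Fi interface already in monitor mode – the interface name wlan1mon below is hypothetical. It logs probe requests and their RSSI while a separate Layer-3 load runs in parallel; the hybrid tool I have in mind would correlate the two streams automatically:

    # Minimal sketch: log 802.11 probe requests and their RSSI while a
    # Layer-3 benchmark runs in parallel. Assumes Linux, root privileges,
    # scapy installed, and an interface already in monitor mode (the name
    # "wlan1mon" is hypothetical). Run the Layer-3 load separately and
    # line up the timestamps afterward.
    from scapy.all import Dot11, Dot11ProbeReq, RadioTap, sniff

    def log_probe(pkt):
        # Only probe-request management frames are of interest here.
        if pkt.haslayer(Dot11ProbeReq):
            # Signal strength comes from the radiotap header, when the
            # driver supplies it.
            rssi = pkt[RadioTap].dBm_AntSignal if pkt.haslayer(RadioTap) else None
            print(f"{pkt.time:.3f}  probe from {pkt[Dot11].addr2}  RSSI {rssi} dBm")

    # Capture until interrupted, without buffering packets in memory.
    sniff(iface="wlan1mon", prn=log_probe, store=False)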
Another possibility, one I wrote about many years ago, is what I call virtual benchmarking, wherein we build models of the behavior of various implementations and then subject them to various virtual workloads under simulated operating conditions. This, I know, sounds a bit far-fetched at present, but I think over time we’ll get valuable data from this approach.

Before we get to that, though, I have a renewed interest in using test chambers and channel emulators as benchmarking tools. The key problem here has historically been the lack of good channel emulators that simulate what happens to a radio wave as it moves through space, but this challenge is rapidly being addressed. As an example, we have the octoBox from octoScope (a company founded by legendary engineer Fanny Mlinarsky, who also founded test-equipment leader Azimuth Systems) and its associated octoFade channel-modeling software. With this equipment it should be possible to do comparative testing in a completely repeatable environment and, ultimately, to gather the data required to do cost-effective virtual benchmarking. I’m hoping to have the opportunity to try out the octoBox in the near future.

And the ultimate might be a proposal I’ve made to the Wi-Fi Alliance and various leading industry players: let’s build a permanent, instrumented benchmarking/testing facility and make it available to players at all levels of the food chain – researchers, chip developers, finished-goods/systems vendors, and even end users – for all manner of testing work. I envision what I’ve been calling a BEB – a big, (otherwise) empty building, with appropriate office furniture – filled with test equipment and suitable for the evaluation of any wireless-LAN equipment or application.

In the interim, though, what I really want now is decent (and low-cost) client-emulator software that can simulate multiple real clients on a single notebook PC or similar device, along with a pocket-sized server for the other end of the benchmark connection. Appropriate test equipment is still far too expensive, and those of us who are often called upon to do real-world post-installation performance verification need something that is simple to use and travels well. (A rough sketch of what the core of such an emulator might look like appears at the end of this post.)

And, by the way, we continue to maintain our policy here of not doing comparative testing of enterprise-class systems without the express knowledge and consent of all parties involved. I’ve made the mistake implied here in the past, and won’t go there again; there are simply too many variables in contemporary enterprise-class systems for an independent benchmarking organization to consider. One little tweak could dramatically change the outcome in any given case. The need for reputable, third-party sponsorship, then, has never been greater – and Network World is today one of the few organizations that continues to take up this challenge.
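As promised, here’s that sketch – again only a sketch, in Python, assuming plain TCP over the WLAN under test. Several threads on one notebook each push data to a small sink process (standing in for the pocket-sized server), and the aggregate offered load is reported. Real client emulation would of course need distinct MAC- and 802.11-level identities per client; this approximates only the Layer-3 traffic side:

    # Minimal sketch of a multi-client Layer-3 load generator. The port,
    # client count, and duration below are arbitrary choices for illustration.
    import socket
    import sys
    import threading
    import time

    PORT = 5001          # arbitrary port for this sketch
    CLIENTS = 8          # emulated clients sharing one notebook
    DURATION = 10        # seconds of offered load per client
    CHUNK = b"x" * 65536

    sent = [0] * CLIENTS  # bytes pushed by each emulated client

    def drain(conn):
        # Far end ("pocket server"): discard everything received.
        with conn:
            while conn.recv(65536):
                pass

    def sink():
        # Accept connections forever, one drain thread per connection.
        srv = socket.socket()
        srv.bind(("", PORT))
        srv.listen()
        while True:
            conn, _ = srv.accept()
            threading.Thread(target=drain, args=(conn,), daemon=True).start()

    def client(i, server):
        # One emulated client: push data as fast as TCP allows for DURATION.
        with socket.create_connection((server, PORT)) as s:
            deadline = time.time() + DURATION
            while time.time() < deadline:
                s.sendall(CHUNK)
                sent[i] += len(CHUNK)

    if __name__ == "__main__":
        if sys.argv[1] == "sink":        # run this mode on the far-end box
            sink()
        else:                            # run this mode on the notebook
            server = sys.argv[1]         # the sink's IP address
            threads = [threading.Thread(target=client, args=(i, server))
                       for i in range(CLIENTS)]
            for t in threads:
                t.start()
            for t in threads:
                t.join()
            mbps = sum(sent) * 8 / DURATION / 1e6
            print(f"aggregate offered load: {mbps:.1f} Mbit/s")

Run it first with the argument "sink" on the far-end device, then with the sink’s address on the notebook (the script name is, of course, whatever you save it as). It’s a long way from a real client emulator, but it illustrates how little machinery the basic traffic-generation side actually requires.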