by David Newman, Network World Global Test Alliance

Wireless LAN switches: Four new products brave our cutting-edge test

Sep 22, 2003 | 17 min read
Network Security | Network Switches | Wi-Fi

Aruba, Airespace top our exclusive performance-based test.

When it comes to wireless LANs, which matters most: security, provisioning, management or performance? If you answered “all,” there are a bunch of WLAN switch vendors eager for your business. Each vendor in this emerging product category says it delivers all of the above — and more.

We put these claims to the test in one of the most extensive hands-on assessments of WLAN switches. Four companies submitted products: Airespace, Aruba Wireless Networks, Symbol Technologies and Trapeze Networks. Other vendors we contacted did not have equipment ready at the time of our testing.


We subjected each system to security attacks, a thorough review of provisioning and management, and the first-ever published measurements of WLAN delay and jitter.

Here are some key findings:

•  While all products offer far better security than earlier WLAN products, there’s much variation in how these systems handle intrusion attempts and access control.

•  Each vendor uses different tunneling methods for moving traffic, which virtually rules out interoperability and makes troubleshooting more difficult.

•  Traffic forwarding rates fall and delay rises as access points are added to the system. However, some products can adjust dynamically to changes in the wireless environment.

•  Automated site-survey tools can ease planning, but their projections are not perfect.

Picking a winner was difficult, given that each vendor offers something unique. In the end, we declared a tie between Airespace and Aruba, and awarded each of them a World Class Award. Airespace offers a well-designed Web management interface and good security features, and it has the fastest forwarding rates. Aruba is even stronger on security, and it offers the best combination of features.

Time for taxonomy

All four vendors offer switches and access points, and all four support power-over-Ethernet connections to their access points, but the similarities end there.

Because the approaches are so different, much of the testing was spent understanding how these systems work in enterprise settings. We created a mini-RFP that asked each vendor to provision a system for three workgroups in a corporate office. The hands-on review of the responses told us a good deal about WLAN features, provisioning and management.

Aruba’s 5000 switch is a modular design that supports Layer 3 forwarding, letting it route traffic between IP subnets, while the three other vendors’ switches are fixed-port Layer 2 devices. Trapeze’s Mobility Exchange switch requires direct attachment of access points, while other vendors’ products let access points be attached to any switch, with traffic tunneled back to the WLAN switch. Trapeze says its next software version, slated for the fourth quarter, does not require direct attachment of access points to its switches.

Vendors differ on access point capabilities, too. The Airespace and Aruba access points support the new 802.11g specification. The others do not, but say they're working on it.

Sites with many handheld devices might want to check whether access points support multiple basic service set IDs (BSSID). Without this feature, PDAs and voice-over-IP (VoIP) phones that go into power-saving mode will be woken up by every broadcast message from the access point – and there are at least 10 of them every second. Airespace and Symbol offered multi-BSSID support in the products we tested, and Aruba says it’s under development.

Site survey software vs. site review

All vendors agreed that provisioning is critical. Because the radio frequency spectrum is finite, allocating bandwidth is trickier than in the wired world.

When a conventional Ethernet segment is saturated, the easy fix is to allocate more bandwidth, by increasing the port count or port speed. In the wireless world, you can’t do that, as access points can interfere with one another or be placed in “dead spots” with no coverage.

Some means of managing a WLAN rollout is needed. Aruba and Trapeze have automated site survey tools, with Trapeze’s RingMaster software the more polished of the two. Airespace and Symbol say there’s no substitute for a physical site survey – that is, walking around the premises and gauging signal strength.

With RingMaster, users supply the number of users or workgroups and the bandwidth requirements for each, and then input CADs of floor plans. RingMaster combines these inputs with building factors and signal loss formulas. The result is a work order showing contractors exactly where each access point should be placed.

Aruba’s provisioning software works in a similar way, although it accepts GIF and JPEG images, not CADs. It also takes multiple floors into account in planning access point placement. Once the plans are drawn, the Airespace and Aruba programs monitor production networks and alert network managers of discrepancies between the plans and the actual radio frequency environment.

Based on our experience, there's some justification for the Airespace and Symbol view that nothing replaces a manual site survey. Trapeze's RingMaster produced some nifty graphics, but an error we made in the floor plan – specifying double-pane glass instead of the single-pane glass we actually have – led the system to set incorrect signal strengths, which in turn degraded performance. Trapeze's engineer manually tuned the access points, and performance improved.
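The double-pane error is easy to reproduce with a toy link-budget model of the sort a survey tool applies – free-space path loss plus per-obstacle attenuation. The wall-loss figures here are rough illustrative values, not RingMaster's actual coefficients:

```python
import math

def rx_signal_dbm(tx_dbm, dist_m, freq_mhz, wall_losses_db=()):
    """Received signal estimate: free-space path loss plus wall attenuation.

    All inputs are illustrative; real survey tools use far richer models.
    """
    fspl = 20 * math.log10(dist_m) + 20 * math.log10(freq_mhz) - 27.55
    return tx_dbm - fspl - sum(wall_losses_db)

# Mislabeling a single-pane window (~2 dB loss) as double-pane (~8 dB)
# makes the tool predict a weaker signal at the client and place or power
# access points accordingly:
print(rx_signal_dbm(15, 20, 2437, wall_losses_db=[2]))  # single-pane
print(rx_signal_dbm(15, 20, 2437, wall_losses_db=[8]))  # double-pane
```

A 6-dB swing from one wrong material choice is enough to shift the tool's coverage map, which matches what we saw in the lab.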

Similarly, Aruba's tool didn't factor in a weak signal from an access point in a neighboring office. However, unlike the Trapeze system, the Aruba system dynamically adjusted access point signal strength for optimal performance.

Dynamic tuning raises another planning issue: radio frequency environments change over time. Thus, even if we hadn’t made an error in the drawing we fed into Trapeze’s RingMaster, the resulting work plan might not have been appropriate after changes in radio frequency conditions. Trapeze recommended we rerun the site-planning tool, while the Aruba management software did that for us.

All four switches can be managed via Web interfaces and SNMP. In terms of usability, the Airespace Web interface was the most intuitive, with well-organized divisions of various WLAN and switch management functions. Airespace also is the only vendor to support SNMP Version 3, which offers much better security than its predecessors.

We were less enamored of the Symbol and Aruba Web interfaces. Symbol’s interface requires an outdated version of the Java runtime engine and didn’t properly update system status on several occasions. Aruba’s interface is too busy, perhaps because of the relatively large number of features its switch supports. It also failed to update system status in one instance.

Setting the performance benchmark

In this test, we’ve conducted the first public measurements of WLAN delay. Past performance tests focused only on forwarding rates, even though delay can have a far more significant effect on application performance.

Also, we ran tests involving multiple access points, not just the single-access-point tests that are commonly used (even in Wi-Fi certification). After all, a key goal of WLAN switching is to extend the number of WLAN attachment points.

We used NetWarrior, a traffic generator/analyzer from QoSmetrix, and custom-developed test routines for this project (see How we did it). QoSmetrix supplied four NetWarrior units, with one acting as a generator on the wired Ethernet side and three acting as clients on the WLAN side. All NetWarriors use Global Positioning System receivers for time synchronization within 40 nanoseconds.

With the locations of the NetWarriors fixed, we asked WLAN switch vendors to place their access points anywhere within our 1,200-square-foot lab. Our choice of three access points was deliberate: In theory, three different 802.11b access points should be able to coexist in the same space without interference by using three different channels.

Because interference is inevitable, we also deliberately created congestion with two additional scenarios – one involving four vendor access points, and one involving three vendor access points plus a fourth rogue access point. (We did not conduct the rogue tests with Symbol or Trapeze, because neither can adjust access point power levels in response to rogues.)

Our results suggest the Airespace and Aruba systems – both of which feature dynamic radio frequency adaptation – do best when handling potential interference situations (our four-access-point test). In the forwarding rate tests using large (1,464-byte) frames, Airespace's access points were the “hottest” of the group, posting the highest transfer rates across the board. Aruba's rates were next highest, followed by Trapeze and Symbol.

[Table: Forwarding rate (Mbit/sec) by number of access points, for 1,464-byte and 132-byte frames across the 1, 3, 4 and 4-with-one-rogue access point scenarios; per-vendor values not reproduced. Rogue scenarios were not tested for Symbol and Trapeze.]

Airespace’s maximum rates are well above 7M bit/sec, which is commonly understood to be 802.11b’s theoretical top end. Airespace attributes its high rates to delivery-only point coordination function (PCF), a little-used mechanism in the 802.11 standard that allows for shorter gaps between frames than those in the more widely used distributed coordination function. With smaller gaps, Airespace’s forwarding rates are higher.

Ethernet gap size has a somewhat notorious history: some early switch makers used small gaps to get good scores in performance tests. That is not the case here – PCF is perfectly legal. The downside with PCF is that other WLAN stations get less access to the wireless medium. Airespace says that traffic for most WLAN users is mainly downstream (from the access point to clients), and that clients associated to its access points always have some time in which they can send traffic upstream.
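To see why shorter inter-frame gaps matter, here is a back-of-the-envelope 802.11b throughput model – not the vendors' measured behavior. It assumes long PLCP preambles, no retries or fragmentation, ACKs sent at the full 11 Mbit/s rate, and an average DCF backoff of half of CWmin, all simplifying assumptions:

```python
# Simplified 802.11b MAC throughput model (all timing values from the
# standard's long-preamble parameters; retries and rate fallback ignored).
SIFS, SLOT, PLCP = 10e-6, 20e-6, 192e-6   # seconds
RATE = 11e6                                # 802.11b PHY rate, bit/sec
DIFS = SIFS + 2 * SLOT                     # DCF inter-frame space
AVG_BACKOFF = 15.5 * SLOT                  # mean of CWmin=31 slots

def throughput(frame_bytes, gap):
    """MAC throughput given a per-frame channel-access gap (seconds)."""
    data = frame_bytes * 8 / RATE
    ack = 14 * 8 / RATE
    per_frame = gap + PLCP + data + SIFS + PLCP + ack
    return frame_bytes * 8 / per_frame

dcf = throughput(1464, DIFS + AVG_BACKOFF)   # contention-based access
pcf = throughput(1464, SIFS + SLOT)          # PIFS-paced, no backoff
print(f"DCF ~{dcf/1e6:.1f} Mbit/s, PCF ~{pcf/1e6:.1f} Mbit/s")
```

Under these assumptions, contention-based DCF tops out well under 7 Mbit/s, while PIFS-paced delivery clears it – consistent with Airespace's rates landing above the commonly cited DCF ceiling.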

No system tripled forwarding rates in tests with three or four access points, even though three access points theoretically should not interfere with one another. Aruba’s access points came closest, with aggregate rates that averaged about 95% of triple the single-access-point numbers. Trapeze was next (91%), followed by Airespace (90%) and Symbol (81%). This raises a key issue with WLANs: Capacity will decrease as contention for spectrum grows.
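The scaling comparison above boils down to a simple efficiency ratio. The rates below are illustrative placeholders, not the article's measured numbers:

```python
def scaling_efficiency(single_ap_rate, aggregate_rate, n_aps):
    """Fraction of the ideal n-fold capacity actually delivered."""
    return aggregate_rate / (n_aps * single_ap_rate)

# e.g. a hypothetical system doing 6.0 Mbit/s alone and 17.1 Mbit/s
# aggregate with three access points:
eff = scaling_efficiency(6.0, 17.1, 3)
print(f"{eff:.0%}")   # 95%, comparable to the best-in-test figure
```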

No system came anywhere close to delivering the 11M bit/sec nominal rate of the 802.11b standard. The large amount of 802.11 management and control traffic, plus contention for the radio frequency spectrum, accounts for the overhead.

The delay numbers give a clearer picture of why WLAN performance suffers during periods of congestion. When we offered traffic at a low rate of about 500 frame/sec (or about 5.5M bit/sec), delay was low and consistent. When we doubled the rate to 1,000 frame/sec (or about 11M bit/sec), delay shot up into the dozens or even hundreds of millisec – which can degrade application performance.
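The offered-load arithmetic behind those two rates is straightforward; note it counts MAC payload only, so the exact Mbit/sec figure depends on which headers are included:

```python
def offered_load_mbps(frames_per_sec, frame_bytes):
    """Offered load in Mbit/sec, counting frame payload bits only."""
    return frames_per_sec * frame_bytes * 8 / 1e6

low = offered_load_mbps(500, 1464)     # well under 802.11b capacity
high = offered_load_mbps(1000, 1464)   # exceeds the 11 Mbit/s nominal
                                       # rate, so queues build and
                                       # delay spikes
print(low, high)
```

Doubling the frame rate pushes the offered load past what the medium can carry, which is exactly where the dozens-to-hundreds-of-millisec delays appeared.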

[Table: Delay with large frames (millisec) by number of access points, for 1,464-byte frames at 500 and 1,000 frame/sec across the 1, 3, 4 and 4-with-one-rogue scenarios; per-vendor values not reproduced. Rogue scenarios were not tested for Symbol and Trapeze.]

In the worst case, Airespace’s access point held up frames for an average of nearly 400 millisec. We also measured jitter (delay variation) of just 17 millisec, meaning most frames, not just a few outliers, experienced high delay.

In comparing average delays across all tests with long frames, Aruba’s system held up packets the least – by an average of 32.7 millisec. That’s not enough to degrade performance for most applications, but it’s still well above the 2.5-millisec best-case delay Aruba delivered in the baseline test.

One likely explanation for the big jumps in delay is the queuing and retransmission that takes place when WLANs are overloaded.

Airespace and Aruba did well in tests when three of their own access points had to contend with one rogue access point. Both vendors demonstrated the ability to keep rogues from using any significant spectrum.

We also conducted tests with short (132-byte) frames – the sort that might be used in VoIP applications. As in the tests with long frames, all products’ delays increased dramatically when multiple access points were active and when we doubled frame rates from 500 to 1,000 frame/sec.

[Table: Delay with small frames (millisec) by number of access points, for 132-byte frames at 500 and 1,000 frame/sec across the 1, 3, 4 and 4-with-one-rogue scenarios; per-vendor values not reproduced. Rogue scenarios were not tested for Symbol and Trapeze.]

Delay through Symbol's access points was much higher in the three- and four-access-point cases than in the single-access-point tests. Delay also increased through Trapeze's access points, and the Airespace access points added elevated delays in our four-access-point tests.

The short-frame delays should be viewed with three caveats in mind, though. First, no system added enough delay to degrade audio quality for VoIP traffic – by itself. However, delay is cumulative; if a WLAN system adds 45 millisec of delay and other elements such as VoIP gateways add yet more, it's easy to cross the 50 to 70 millisec threshold where audio quality begins to degrade.
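That cumulative-budget reasoning can be sketched as a simple check; the 15-millisec gateway figure below is a hypothetical example, not a measured value:

```python
def voip_budget_ok(delays_ms, threshold_ms=50):
    """True if the summed per-hop one-way delays stay under the
    audio-degradation threshold (lower bound of the 50-70 ms range)."""
    return sum(delays_ms) < threshold_ms

# A 45 ms WLAN hop is fine on its own, but adding a hypothetical
# 15 ms VoIP-gateway hop crosses the threshold:
print(voip_budget_ok([45]))        # True
print(voip_budget_ok([45, 15]))    # False
```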

Second, we never congested the physical link in the short-frame tests. The bandwidth needed even at 1,000 frame/sec is only about 1M bit/sec, so in theory there should not have been any congestion because of an overload. Elevated delay in these tests is most likely the result of hitting some frame-per-second processing limit of the switches and/or access points.
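A quick sanity check on that claim:

```python
# Short-frame offered load at the doubled rate: 1,000 frame/sec of
# 132-byte frames is nowhere near 802.11b's 11 Mbit/s, pointing at a
# frames-per-second processing limit rather than bandwidth congestion.
load_mbps = 1000 * 132 * 8 / 1e6
print(f"{load_mbps:.2f} Mbit/s of 11 Mbit/s")   # 1.06 Mbit/s of 11 Mbit/s
```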

Finally, because we used only one type of traffic in our tests, no vendor used quality-of-service (QoS) mechanisms to give some traffic better treatment than others. All four vendors offer such mechanisms, and all four say they’ll support the emerging 802.11e spec for WLAN QoS.

Tunneling tactics rule out interoperability

Another major differentiator between products is the way they move traffic between the wired and wireless worlds. Every product we tested used a different tunneling method, virtually ruling out interoperability and making troubleshooting more difficult.

All devices use standard 802.11 framing when moving traffic between wireless clients and an access point. From the access point to the switch, it’s a different story. Aruba’s system sets up generic routing encapsulation (GRE) tunnels. Airespace doesn’t encapsulate traffic between access point and switch, but IP Security (IPSec) tunneling between switch and client is an option.

Symbol encapsulates the entire 802.11 frame in a standard 802.3 Ethernet frame, and Trapeze uses the proprietary Trapeze Access Point Access Protocol to encode traffic between access point and switch.

The list of tunneling types doesn’t end there; some vendors also use special methods to move traffic between WLAN switches. Airespace automatically sets up IPSec tunnels between its switches, while Trapeze uses IP-in-IP encapsulation. Aruba and Symbol use standard Ethernet framing to move traffic between switches, although Aruba uses a proprietary tunneling method for controlling traffic.

All these different methods can pose challenges for network troubleshooting and management. When a problem occurs, firing up a protocol analyzer like WildPackets’ AiroPeek or EtherPeek is often the best way to figure out what’s wrong. But no analyzer will recognize, for example, Symbol’s 802.11-inside-of-802.3 tunneling, because it won’t expect a wireless header after the Ethernet header. Trapeze’s proprietary encapsulation is similarly undecipherable. To its credit, Aruba has published an extension to the open source Ethereal analyzer to decode its GRE-tunneled packets.
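As a sketch of the dissector problem, consider Symbol-style 802.11-inside-802.3 frames: a stock analyzer parses the 14-byte Ethernet header and then expects an IP packet, when what actually follows is an 802.11 MAC header. A custom dissector has to skip the Ethernet header and reparse. The offsets below follow the standard 802.11 frame-control layout, but the encapsulation specifics are a simplified assumption, not Symbol's documented format:

```python
import struct

def decap_80211_in_8023(frame: bytes) -> dict:
    """Peel an Ethernet header and parse the tunneled 802.11 header.

    802.11 fields are little-endian, unlike Ethernet's big-endian headers.
    """
    eth_dst, eth_src, ethertype = struct.unpack("!6s6sH", frame[:14])
    inner = frame[14:]                          # the tunneled 802.11 frame
    frame_ctrl, = struct.unpack("<H", inner[:2])
    return {
        "ethertype": ethertype,
        "dot11_type": (frame_ctrl >> 2) & 0x3,   # 0=mgmt, 1=ctrl, 2=data
        "dot11_subtype": (frame_ctrl >> 4) & 0xF,
    }
```

An analyzer that instead hands `inner` to its IP dissector will decode garbage, which is why such tunnels defeat off-the-shelf tools.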

Even if decodes are available, getting the frames in the first place might be a challenge. Only Aruba’s switch supports port mirroring – the ability to copy frames to or from one port to another for capture and decode. Mirroring is helpful in troubleshooting, but the other vendors say users will need to rely on other switches in the network if they need to capture traffic.

Securing the air

Security is an obvious concern with WLANs, and again there are big differences among products. We focused our security assessment in two areas: rogue access point handling and authentication and encryption of user traffic.

Rogue access points – those not managed by the WLAN switch or related software – pose a major potential threat. Deploying a WLAN switch might greatly reduce the rogue access point count, but it won't eliminate the problem. Even if rogues disappear within the corporation, rogue access points might appear from outside it – in neighboring offices, say, or in the hands of attackers in parking lots.

Our mini-RFP asked vendors to demonstrate the ability to detect and adapt in the presence of rogues. We also ran FakeAP, an open source WLAN security assessment tool that emulates hundreds of thousands of rogue access points.

Responses to both events were decidedly mixed. On the plus side, all systems survived overnight runs of FakeAP. However, none warned network managers of an attempted attack.

Further, some systems had little or no visibility of rogues. Symbol’s system won’t recognize rogue access points unless someone tries to attach them directly to its switches. Trapeze’s switch recognized 16 rogues of the roughly 500,000 we generated using FakeAP.

Airespace’s and Aruba’s systems are more intelligent about rogue detection. Neither recognized any rogues from FakeAP because it generates only one beacon message per rogue access point, not the 10 beacons per second a real rogue would transmit. Airespace and Aruba simply treated the FakeAP traffic as noise.
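The distinction Airespace and Aruba appear to draw can be captured with a simple beacon-rate heuristic: a real access point beacons roughly every 100 millisec (about 10 per second), while each FakeAP rogue shows up only once. This is a sketch, not either vendor's actual algorithm:

```python
from collections import Counter

def classify_bssids(beacons, window_sec, min_rate=5.0):
    """Split observed BSSIDs into likely-real access points and noise.

    beacons: iterable of BSSID strings, one entry per beacon seen during
    the observation window. A real AP beaconing at ~10/sec clears the
    min_rate threshold; one-shot FakeAP entries do not.
    """
    counts = Counter(beacons)
    real, noise = [], []
    for bssid, n in counts.items():
        (real if n / window_sec >= min_rate else noise).append(bssid)
    return real, noise

# 10-second window: one BSSID beaconing at 10/sec plus 500 one-shot fakes
seen = ["ap1"] * 100 + [f"fake{i}" for i in range(500)]
real, noise = classify_bssids(seen, window_sec=10)
print(real)    # ['ap1']
```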

Airespace and Aruba go a step further with an option called rogue containment – essentially, kicking rogue access points off the air so clients can’t see them. In tests with actual rogue access points, both vendors’ rogue containment routines worked within 1 second of the rogue attempting to come up.

Aruba’s switch offers a neat twist on rogue containment: It can distinguish between access points inside and outside a corporation. This is helpful if network managers want to disable rogues within a company but leave WLAN-enabled neighbors alone.

As for securing user traffic, all switches support the 802.1x specification for user authentication. We successfully authenticated WLAN clients through all four systems using Protected Extensible Authentication Protocol (PEAP). In all cases, the switches acted as authenticators, ferrying messages between the client and a RADIUS/PEAP server.

Support for 802.1x authentication carries another benefit: it changes the keys used for Wired Equivalent Privacy (WEP) encryption at frequent intervals. That's important because the WEP design is inherently weak, and an attacker possessing a static key can decrypt traffic with relatively little effort. All vendors say they support the emerging WEP replacement, Wi-Fi Protected Access with the Temporal Key Integrity Protocol, but we did not verify this in testing.

All switches add various security measures above and beyond those furnished by the WLAN protocols, although the offerings are mixed.

Airespace’s and Aruba’s switches offer native IPSec capabilities to authenticate and encrypt traffic between clients and switches, and Airespace’s switch automatically sets up IPSec tunnels between switches.

Aruba’s switch is also a stateful firewall, a unique offering in this test lineup. All switches offer simple packet filtering using access control lists based on a variety of Layer 2, 3 and 4 criteria.
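A minimal sketch of the first-match Layer 3/4 filtering such ACLs perform; the rule format and field names are hypothetical, not any vendor's syntax:

```python
def acl_permit(rules, pkt):
    """First-match ACL: each rule is (proto, dst_port, action).

    'any' wildcards a field; unmatched packets hit the implicit deny,
    as on most real-world ACL implementations.
    """
    for proto, dst_port, action in rules:
        if (proto in (pkt["proto"], "any")
                and dst_port in (pkt["dst_port"], "any")):
            return action == "permit"
    return False   # implicit deny

rules = [
    ("tcp", 443, "permit"),   # allow HTTPS
    ("udp", "any", "deny"),   # block all UDP
    ("any", "any", "permit"), # allow the rest
]
print(acl_permit(rules, {"proto": "tcp", "dst_port": 443}))   # True
print(acl_permit(rules, {"proto": "udp", "dst_port": 53}))    # False
```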

Aruba’s security offerings were the most compelling, from its own VPN client, to the stateful firewall on its switch, to its ability to allocate bandwidth on a per-user basis. Airespace also had a strong security story with its IPSec capabilities and support for SNMP Version 3. The Symbol and Trapeze switches offered good access controls, but lacked some of the more advanced features of the Aruba or Airespace devices.


Network World gratefully acknowledges the companies that supported this project:

QoSmetrix supplied not only its NetWarrior test system but also extensive engineering support and results analysis.

WildPackets supplied its AiroPeek NX analyzer and RFGrabber remote probe.

IBM supplied ThinkPad R40 notebooks for our client authentication tests.


Each switch offers something none of the others have. Airespace has the fastest and most tunable access points, and the simplest and most intuitive Web interface. Aruba offers the most comprehensive security story, with fine-grained controls at Layer 2 through Layer 7. Symbol also is strong on access control, and has the most awareness of power savings for handheld devices. Trapeze offers the slickest planning and deployment tool, a major selling point in winning over nontechnical management. Picking which system is best really depends on which criteria matter most to your company – security, performance, management or provisioning.