AirMagnet Distributed Version 4.0

A great way to monitor your WLAN.

Our favorite wireless LAN analyzer from last year (see "WLAN analyzers") now has a distributed version that uses a combination of proprietary access points and notebook-based sensors to help assess an 802.11a, b or g area. We recently tested Version 4.0 of AirMagnet Distributed, released last month, which seems to have solved some of the access point problems we found in an earlier version.

The product has an outstanding GUI and covers 802.11-specific problem areas in both breadth and depth for maintaining a dispersed WLAN. A tedious sensor-rollout method, the lack of an integrated reporting mechanism and some other rough edges concern us, but overall this is a very good product.


How we did it


About the system

AirMagnet Distributed includes four components: a management server that includes its own HTTP server (AirMagnet recommends dedicating a machine to it); a sensor (looks like an access point); the Distributed Console (a Windows-only application that organizes information from the AirMagnet Server application); and the reporting system.

Although similar to Newbury Networks' WiFi Watchdog system (see "Newbury Networks' WiFi Watchdog"), the AirMagnet Distributed system does not triangulate wireless equipment. Rather, distributed access-point sensors are deployed across the network and can be delineated by floor, building and campus to articulate the physical location of errors or problems.

The system did a fine job of giving us wireless information, with only a few minor problems. Like the other AirMagnet products, the distributed system is a wireless-only analysis product; it won't cover wireline problems without assistance from tools such as wired protocol analyzers or intrusion-detection systems.

Each sensor had to be configured initially, one at a time, because each one comes from the factory with the same IP address. Each sensor covers all three 802.11 radio modes (a, b and g). It was easier to use the serial interface on the sensor to update addresses than to configure them through the Web interface. We used four sensors in our tests, which let us cover 4,000 square feet in a one-floor building. We also tested the same sensors in a five-story building with the same coverage area (4,000 square feet).
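
For sites with many sensors, this per-unit step can be scripted. The sketch below is purely illustrative and assumes the sensor exposes a simple serial command line; the COM ports, IP addresses and the "set ip"/"save" commands are hypothetical stand-ins, and pyserial (the serial module) handles the console I/O.

```python
# Hypothetical sketch: pushing a unique management IP to each sensor over its
# serial console before deployment. The CLI commands below are assumptions,
# not AirMagnet's actual console syntax.
import serial
import time

SENSORS = [
    ("COM3", "192.168.10.11"),
    ("COM4", "192.168.10.12"),
    ("COM5", "192.168.10.13"),
    ("COM6", "192.168.10.14"),
]

def configure(port: str, new_ip: str) -> None:
    # 9600/8N1 is a common default for embedded serial consoles (assumption).
    with serial.Serial(port, baudrate=9600, timeout=2) as console:
        console.write(b"\r\n")                       # wake the console
        time.sleep(1)
        console.write(f"set ip {new_ip}\r\n".encode())  # hypothetical command
        time.sleep(1)
        console.write(b"save\r\n")                      # hypothetical command
        print(port, "->", new_ip, console.read(256).decode(errors="replace"))

for port, ip in SENSORS:
    configure(port, ip)
```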

The optional reporting software runs on a Microsoft SQL Server (a runtime license can be obtained if needed), and organizes the huge amount of data that the sensors can generate.

Listen to the air

The sensors have to find the Distributed Network Management Server through a private network or an Internet VPN (any path with a direct route will do). Once configured, each sensor gets a software update from the management server if needed. Even on a wireless network filled with problems, the amount of data sent to the management server remains low, on the order of a few thousand bytes per minute per sensor.
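
To put that figure in perspective, a quick back-of-the-envelope calculation shows how little WAN capacity a fleet of sensors needs. The 3,000-bytes-per-minute value below is an assumed midpoint of the "few thousand bytes per minute" we observed, and the fleet sizes are illustrative.

```python
# Back-of-the-envelope estimate of aggregate sensor-to-server traffic.
# Per-sensor rate and fleet sizes are illustrative assumptions.
BYTES_PER_MINUTE_PER_SENSOR = 3_000     # roughly a few KB per minute, per sensor

for sensors in (4, 20, 50):
    bytes_per_sec = sensors * BYTES_PER_MINUTE_PER_SENSOR / 60
    kbps = bytes_per_sec * 8 / 1_000
    print(f"{sensors:>3} sensors: ~{kbps:.1f} Kbps sustained toward the server")
```

Even 50 sensors amount to roughly 20Kbit/sec of sustained traffic, which is negligible on any VPN link.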

Monitoring produces data in two categories: security and performance. The default settings indicate a "worry about everything" attitude, which we liked as a baseline.

We brought up the sensors in both local and VPN-emulated environments (we simulated a remote-building scenario; see How we did it). Alerts can be sent by e-mail, Short Message Service, telephone and Internet pages, sounds and instant messaging. We tested all the alerts except instant messaging.

The default settings produced an immediate deluge of information and alarms - even if a network is correctly configured for its feature set. Some of the information is trivial, such as the detection of an 802.11g access point that does not support a smooth 802.11b-to-g transition. Many older access points don't do this, and even firmware updates won't help. It's possible to turn off the detection of items such as this, so your logs don't fill up with essentially useless information.
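
If you export alarm data for outside analysis, the same pruning can be done after the fact. This sketch is only an illustration; the CSV layout and the alarm names in the ignore list are assumptions, not AirMagnet's actual export format.

```python
# Minimal sketch of post-filtering exported alarm data so known-noise items
# don't swamp the log. Column names and alarm strings are hypothetical.
import csv

IGNORE = {
    "802.11g AP without b-to-g protection",   # hypothetical alarm name
    "SSID broadcast by guest WLAN",           # hypothetical alarm name
}

def filter_alarms(path_in: str, path_out: str) -> None:
    with open(path_in, newline="") as src, open(path_out, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            if row["alarm"] not in IGNORE:    # keep everything still of interest
                writer.writerow(row)

filter_alarms("alarms_export.csv", "alarms_filtered.csv")
```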

The challenge with the system, then, is to find baselines and "normal" settings for a monitored network. Fortunately, the management console GUI is divided into a monitoring GUI and a policy/management GUI that gives highly articulate, though occasionally ambiguous, settings information about each possible monitoring attribute and condition. Understanding the settings requires in-depth knowledge of how 802.11-based networks function. The ambiguity arises because some settings have no good default values; networks are simply too different.

For example, it is a good idea to watch for access points that go offline. It means there is a possibility that an area is not served because an access point is unavailable, rebooting or has been nefariously substituted. There are many reasons an access point goes offline, from power problems to people or objects interfering with the sensor's ability to detect a signal. For this reason, sensors need to be placed where they are unlikely to be blocked, to reduce false positives. This requires some fine-tuning and periodic adjustment.
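
One way to blunt such false positives, whatever the monitoring tool, is to require several consecutive missed checks before raising an alarm. The sketch below illustrates that debouncing idea with made-up access point names and thresholds; it is not AirMagnet's logic.

```python
# Sketch of debouncing "access point offline" alerts: rather than alarming on
# a single missed check, require several consecutive misses so a briefly
# blocked sensor doesn't raise a false positive. Thresholds are illustrative.
from collections import defaultdict

CONSECUTIVE_MISSES_TO_ALARM = 3   # tune per site; higher = fewer false alarms

misses = defaultdict(int)

def record_check(ap_name: str, seen: bool) -> None:
    if seen:
        misses[ap_name] = 0
        return
    misses[ap_name] += 1
    if misses[ap_name] == CONSECUTIVE_MISSES_TO_ALARM:
        print(f"ALERT: {ap_name} missed {misses[ap_name]} checks in a row")

# Example: a hypothetical AP disappears for three consecutive polling intervals.
for seen in (True, False, False, False):
    record_check("Floor2-AP-East", seen)
```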

Security

The system can find many security problems. Our testing verified detection of problems such as broadcasting a Service Set Identifier (SSID), the lack of Wired Equivalent Privacy, rogue access points (in 802.11a, b and g), ad hoc association attempts, session hijacking attempts, open authentication attempts and VPN verification (Point-to-Point Tunneling Protocol, Secure Shell and IPSec/Layer 2 Tunneling Protocol are supported, but we used IPSec over L2TP and the L2TP went undetected).
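
To illustrate what one of these checks involves under the hood, the sketch below uses scapy to sniff beacon frames in monitor mode and flag access points that broadcast an SSID with the privacy (WEP/WPA) capability bit cleared. This is our own illustration, not AirMagnet's implementation, and the "mon0" interface name is an assumption.

```python
# Illustrative open-network check: sniff 802.11 beacons in monitor mode and
# report access points whose capability field lacks the privacy bit.
# Requires root and a wireless interface already in monitor mode.
from scapy.all import sniff, Dot11, Dot11Beacon, Dot11Elt

seen = set()

def check_beacon(pkt):
    if not pkt.haslayer(Dot11Beacon):
        return
    bssid = pkt[Dot11].addr3
    if bssid in seen:
        return
    seen.add(bssid)
    ssid = pkt[Dot11Elt].info.decode(errors="replace") or "<hidden>"
    caps = pkt.sprintf("%Dot11Beacon.cap%")       # e.g. "ESS+privacy"
    if "privacy" not in caps:
        print(f"OPEN network: SSID '{ssid}' on BSSID {bssid}")

sniff(iface="mon0", prn=check_beacon, store=False)
```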

We also verified man-in-the-middle detection, default-configuration checks for six brands of access points (D-Link Systems, Linksys, Netgear, Proxim, 3Com and Buffalo Technology) and an off-hour activity check. The off-hour check defaults are not keyed to time of day, but rather to SSID for local WLANs, neighboring WLANs and guest WLANs. We consider this a weak feature. Fortress encryption detection and monitoring is supported, but we chose not to test it.

The system also can detect 802.1X (authentication that uses RADIUS). We configured a Linux machine with Lightweight Directory Access Protocol and RADIUS, and used the Temporal Key Integrity Protocol (TKIP) as specified in Wi-Fi Protected Access. The authentication server, working through 3Com and Linksys access points, authenticated clients correctly. We then configured the keys, which should change periodically, to never change - thus defeating TKIP. AirMagnet could not detect this, which is ostensibly monitored in a measured field called "802.1x rekey timeout too long."
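
The detection AirMagnet missed amounts to noticing that group keys never rotate. The sketch below shows that logic in the abstract: given timestamps of observed rekey handshakes per BSSID, flag access points that never rekey or that exceed a policy interval. The data, threshold and function names are illustrative, not AirMagnet's code.

```python
# Illustrative "rekey timeout too long" style check: per BSSID, examine the
# gaps between observed group-key handshakes and flag anything that never
# rekeys or that exceeds a policy interval. Example data only.
MAX_REKEY_INTERVAL_SEC = 3600          # policy: rotate group keys hourly

def check_rekey(bssid: str, handshake_times: list[float], now: float) -> None:
    if len(handshake_times) < 2:
        print(f"{bssid}: no rekey observed in the capture window")
        return
    worst_gap = max(b - a for a, b in zip(handshake_times, handshake_times[1:]))
    worst_gap = max(worst_gap, now - handshake_times[-1])
    if worst_gap > MAX_REKEY_INTERVAL_SEC:
        print(f"{bssid}: rekey interval {worst_gap:.0f}s exceeds policy")

# Example: one AP rekeys hourly, the other (deliberately broken) never does.
check_rekey("00:11:22:33:44:55", [0, 3500, 7000], now=7200)
check_rekey("66:77:88:99:aa:bb", [0], now=7200)
```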

Other attacks, including denial-of-service attacks such as association and authentication floods, were all detected correctly.

Performance

The system also could detect deployment/operations errors, 802.11a/b/g errors, inter-protocol usage errors between 802.11b and g, radio-frequency management problems and "problematic traffic patterns." The system's frequency calibration was a bit off, which we verified with an oscilloscope and an external time-base trigger. The system sometimes reports off-channel errors that aren't accurate, but the reported channel was always close to the actual one.

The system also found hidden stations - clients that can't hear other nodes and therefore collide with them by broadcasting over them. We used shielding to partition stations electrically and found that when the sensors could detect such stations, they could determine whether the stations were colliding frequently because they were hidden from other stations' signals. The cure was either to move the access point that the node should associate with, or to reorient the client so it could detect other signals. This problem often happens when a node sits on a desk near a steel filing cabinet or another wireless obstruction.
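
A rough way to express the underlying heuristic: a station whose transmitted frames carry the 802.11 retry bit far more often than its peers' frames is probably colliding because it cannot hear other transmitters. The counts and threshold below are made-up example data, not output from the product.

```python
# Sketch of flagging probable hidden nodes from per-station retry rates.
# A real pass would tally the Retry flag per source address from a
# monitor-mode capture; these counts are illustrative.
RETRY_RATE_THRESHOLD = 0.30            # illustrative; tune per network

stations = {
    # addr          (frames_sent, frames_retried)
    "aa:aa:aa:01": (1200, 90),
    "aa:aa:aa:02": (800, 350),         # sits next to a steel filing cabinet
    "aa:aa:aa:03": (1500, 120),
}

for addr, (sent, retried) in stations.items():
    rate = retried / sent
    if rate > RETRY_RATE_THRESHOLD:
        print(f"{addr}: retry rate {rate:.0%} -- possible hidden station")
```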

The system occasionally found high noise on a channel when a sensor was in close physical proximity to an access point. The sensors should be kept at least 9 feet from any client or access point, or false positives could be triggered. We made several adjustments to this threshold.

Documentation is relegated to a thin user's guide, and replaced by extensive and usually articulate on-screen help and prompts. In the management policy settings area, a wizard was helpful and somewhat complete, although it required a good base knowledge of WLANs.

Reporter

We were disappointed by the lack of an integrated report generator. Query-based printed reports are available through the Management Console as a pricey option (the Reporter application), and without Reporter it is possible to use PrtScrn to dump reports to a printer (as well as export lots of data), but we would have preferred the capability built in. When added, Reporter uses SQL Server, which adds administrative overhead. On the plus side, installing Reporter after a SQL Server install was simple.

AirMagnet Distributed Version 4.0

OVERALL RATING: 4.2

Company: AirMagnet
Cost: Starter kit (four sensors, Management Server, Console), $7,995; additional sensors, $750 each; Reporter application, priced by number of sensors: up to 20 sensors, $2,595; up to 50 sensors, $4,995.
Pros: Comprehensive; WLAN-specific; very tunable.
Cons: Reporter application is optional; a few small glitches.

The breakdown:
Monitoring/analysis (40%): 4.5
Performance (30%): 4
Installation/administration (20%): 4
Documentation (10%): 4
TOTAL SCORE: 4.2
Scoring Key: 5: Exceptional; 4: Very good; 3: Average; 2: Below average; 1: Consistently subpar

Another upside is that the reports are beautiful, simple to put together and contain easy-to-understand information for both the technically inclined and companies that require an audit trail. Without the Reporter system, AirMagnet Distributed is a lesser product.

Bottom line

AirMagnet Distributed excels in its GUI, its deep knowledge of the 802.11-specific problems it can solve and its overall ease of use in maintaining a dispersed WLAN. We liked its nervousness at the default settings, despite the inevitable fine-tuning of the alerts.

It does take a bit of work to deploy a fleet of sensors - both the initial configuration and the physical deployment. Our biggest concern remains the lack of an integrated reporting system.

Henderson is managing director and principal researcher for ExtremeLabs. He can be reached at thenderson@extremelabs.com.

NW Lab Alliance

Henderson is also a member of the Network World Lab Alliance, a cooperative of the premier reviewers in the network industry, each bringing to bear years of practical experience on every review. For more Lab Alliance information, including what it takes to become a member, go to www.nwfusion.com/alliance.
