by Edwin Mier

How we did it

Dec 09, 2002

How we compared the various voice over IP products.

Product selection

Our objective was to review the growing set of products that have added “VoIP traffic analysis” to their repertoire of options. Nearly two dozen contenders turned up in our research; many are long-time purveyors of network-monitoring and protocol-analyzer wares; some were start-ups.

We devised a methodology to evaluate these products in seven functional categories:

•  Real-time VoIP traffic monitoring, including alarm generation.

•  Long-term VoIP activity reporting, including report generation.

•  VoIP traffic generation, which we consider useful in corporations for verifying proper call-controller operation, as well as for predeployment VoIP-bandwidth assessment.

•  Automated VoIP voice-quality assessment.

•  Measurement/monitoring of VoIP-related QoS parameters.

•  VoIP traffic and protocol decode.

•  Intelligent diagnosis of VoIP service problems.
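To make the QoS-measurement category above concrete: most tools of this class estimate interarrival jitter for RTP streams using the running estimator defined in RFC 3550. A minimal Python sketch of that calculation, with made-up transit-time samples rather than data from our test bed:

```python
def rtp_jitter(transit_times):
    """Running interarrival-jitter estimate per RFC 3550, section 6.4.1.

    transit_times: per-packet transit values (arrival time minus RTP
    timestamp), all in the same time units (e.g., milliseconds).
    """
    jitter = 0.0
    for prev, cur in zip(transit_times, transit_times[1:]):
        d = abs(cur - prev)              # |D(i-1, i)|
        jitter += (d - jitter) / 16.0    # J(i) = J(i-1) + (|D| - J(i-1))/16
    return jitter

# Hypothetical transit samples (ms): a steady stream scores lower than a bursty one.
steady = [100, 101, 100, 101, 100, 101]
bursty = [100, 140, 95, 150, 90, 160]
print(rtp_jitter(steady) < rtp_jitter(bursty))  # True
```

The 1/16 gain factor comes straight from the RFC; it smooths the estimate so a single late packet does not swamp the statistic.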

We contacted and surveyed the vendors to see whose VoIP-traffic-analysis wares addressed these functional categories. Those that seemed to be good fits were invited to submit their product packages for evaluation.

Vendors could submit multiple products – whether separately licensed software modules or wholly discrete hardware/software platforms – to address as many of these functional categories as they could. Our thinking was that, as companies enhance their network operating centers for VoIP traffic analysis, having to buy two or three software modules or analyzer devices to get the whole job done would in most cases be a necessary and acceptable expense.

A half-dozen vendors sent or brought their VoIP-traffic-analysis wares in for evaluation. All testing was done at MierLabs’ main facility in Hightstown, N.J.

Many of the vendors said their products addressed just one or two of the functional areas. Others said their VoIP-analysis wares were not quite ready for a public review, or else were between major releases. We asked the vendors to provide us with some additional details on their products.

To be included in the evaluation, the VoIP-analysis package had to support multiple vendors’ VoIP environments, as well as open standards – such as H.323 and Session Initiation Protocol (SIP). We excluded the inherent VoIP management capabilities that all vendors of IP-telephony equipment (Cisco, 3Com, Avaya and so on) offer, because these packages typically address just that vendor’s specific equipment – and often proprietary protocol environment.
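SIP’s text-based messaging is part of what makes multivendor interoperability tractable for analysis tools. For illustration only, a SIP request line can be picked apart in a few lines of Python – a toy sketch per RFC 3261, not a feature of any tested product:

```python
def parse_sip_request_line(line):
    """Split a SIP request line into (method, request-URI, version).

    Per RFC 3261 the request line is 'Method SP Request-URI SP SIP-Version'.
    """
    method, uri, version = line.strip().split(" ", 2)
    if version != "SIP/2.0":
        raise ValueError("unsupported SIP version: " + version)
    return method, uri, version

# Hypothetical INVITE, of the sort a call generator might issue
print(parse_sip_request_line("INVITE sip:bob@example.com SIP/2.0"))
```

H.323, by contrast, is ASN.1/binary-encoded, which is one reason decode support for it varies more widely across analyzers.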

The Test

The network environment we created to evaluate VoIP-traffic-analysis products (See Test bed diagram) was a challenge – not just to build, but to keep running consistently during each of the product evaluations. On top of the IP infrastructure – driven by Extreme Networks’ Summit 48i Layer 2/Layer 3 switches – we ran varying loads of real VoIP traffic, while at the same time applying varying levels of network impairments. We found that VoIP streams and call-control protocols react differently to varying impairment conditions.

PacketStorm Communications’ Hurricane IP Network Emulator handled the impairments. We defined various profiles of different impairment conditions which, once defined to the PacketStorm system, could be readily applied and then deactivated via a mouse click. The profiles ranged from a very clean “High-speed Campus” environment, with minimal impairments, to one we termed “Internet on a Bad Day,” featuring 150 msec of one-way latency, 100 msec of jitter and 5% randomly applied packet loss.
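To put profiles like these in rough perspective, one can estimate the voice quality each would permit with a simplified form of the ITU-T G.107 E-model – the same family of math many traffic-analysis tools use internally to score calls. The sketch below is our own simplification, assuming G.711 with random packet loss (jitter would further degrade quality via jitter-buffer delay and discards, which this toy version ignores); it is not any vendor’s implementation:

```python
def mos_estimate(one_way_delay_ms, loss_fraction):
    """Very simplified E-model (ITU-T G.107 / G.113) for G.711, random loss.

    Returns an approximate MOS on the 1-to-5 scale.
    """
    r = 93.2  # default transmission rating with zero impairments
    # Delay impairment Id (G.107 approximation)
    d = one_way_delay_ms
    r -= 0.024 * d + (0.11 * (d - 177.3) if d > 177.3 else 0.0)
    # Loss impairment: Ie-eff with Ie=0, Bpl=4.3 for G.711, random loss
    ppl = loss_fraction * 100.0
    r -= 95.0 * ppl / (ppl + 4.3)
    r = max(0.0, min(100.0, r))
    # R-to-MOS mapping from G.107 Annex B
    return 1.0 + 0.035 * r + 7e-6 * r * (r - 60.0) * (100.0 - r)

# Clean campus profile vs. "Internet on a Bad Day" (150 ms delay, 5% loss)
print(round(mos_estimate(20, 0.0), 2), round(mos_estimate(150, 0.05), 2))
```

Even this rough arithmetic shows why the “Internet on a Bad Day” profile is punishing: 5% random loss alone drags an otherwise-clean G.711 call from toll quality down toward an unusable score.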

A Hammer LoadBlaster 500, from Empirix, generated the bulk of our calls. As the diagram shows, we set up the Hammer, via various automated scripts, to deliver carefully timed calls in two directions through our IP LAN/WAN. A T-1 full of calls was processed in one direction using SIP-based VoIP gear, while another T-1 of call load was set up in the opposite direction using H.323-based VoIP gear. We did this so that all VoIP traffic and call set-up activity could be observed from various access points.
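For scale, the call loads above can be sized with standard VoIP arithmetic: a T-1 carries 24 DS0 channels, so each direction represents up to 24 simultaneous calls, and a G.711 call packetized at 20 msec costs about 80K bit/sec of IP-layer bandwidth per direction. A quick back-of-the-envelope check – our arithmetic, not Empirix’s tooling:

```python
def g711_call_bandwidth_bps(packet_ms=20):
    """IP-layer bandwidth of one G.711 RTP stream, one direction."""
    payload = 8000 * packet_ms // 1000   # 64K bit/sec codec -> bytes per packet
    headers = 12 + 8 + 20                # RTP + UDP + IPv4 headers, in bytes
    pps = 1000 // packet_ms              # packets per second
    return (payload + headers) * 8 * pps

T1_CHANNELS = 24  # DS0s on one T-1

print(g711_call_bandwidth_bps())                 # 80000 bit/sec per call
print(g711_call_bandwidth_bps() * T1_CHANNELS)   # 1920000 bit/sec per loaded T-1
```

So a fully loaded T-1 of G.711 calls consumes roughly 1.9M bit/sec of IP bandwidth per direction – well within the Summit 48i switches’ capacity, but a useful number when sizing WAN links.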

We invited Avaya, a leading IP-telephony vendor and H.323 proponent, to provide an H.323-based VoIP environment. Running the vendor’s MultiVantage control software, we set up a pair of redundant S8700 servers, which handled all H.323 call control, and two Avaya G600 gateways, which linked via T-1s to the Hammer call generator.

A multivendor SIP environment was driven by Tangerine Connect, a softswitch from Tangerine, a company that specializes in enterprise-class SIP call controllers. As a testament to the openness of SIP, we successfully employed two vendors’ SIP gateways to handle calls and work with the Tangerine proxy server: the Vega 100 gateway, from VegaStream, and the Mediant 2000 gateway, from AudioCodes.

We also ran laptop-to-laptop VoIP calls using Microsoft’s XP-based SIP stack and Messenger application, and Avaya’s softphone, which in our environment also employed H.323 call control.