IPS performance tests show products must slow down for safety

Results indicate high performance doesn't always mean high security.

High-end intrusion-prevention systems move traffic at multigigabit rates and keep exploits out of the enterprise, but they might not do both at the same time. In lab tests of top-of-the-line IPS systems from six vendors, we encountered numerous trade-offs between performance and security.

Several devices we tested offered line-rate throughput and impressively low latency, but also leaked exploit traffic at these high rates. With other devices, we saw rates drop to zero as IPS systems struggled to fend off attacks.

In our initial round of testing, all IPS systems missed at least one variant of an exploit we expected they'd easily catch - one that causes vulnerable Cisco routers and switches to reboot. While most vendors plugged the hole by our second or third rounds of testing (and 3Com's TippingPoint 5000E spotted all but the most obscure version the first time out), we were surprised that so many vendors missed this simple, well-publicized and potentially devastating attack (see Can anyone stop this exploit?).

These issues make it difficult to pick a winner this time around (see the NetResults breakdowns below). If high performance is the most important criterion in choosing an IPS, the TippingPoint 5000E and Top Layer Networks' IPS 5500 are the clear leaders. They were the fastest boxes on the test bed, posting throughput and latency results more commonly seen in Ethernet switches than in IPS systems.

ipAngel-2500 (Ambiron TrustWave)
Price: $100,000.
Pros: Blocked all exploits in final tests; innovative, vulnerability-based configuration system.
Cons: Modest performance from beta hardware and drivers; initially missed Cisco SNMP exploit; weak forensics and alerting capabilities.

Sentarus Network Security Sensor (Demarc Threat Protection Solutions)
Price: Sensor, $37,000; Sentarus Threat Protection System management application starts at $25 per node.
Pros: Blocked all exploits in final tests; vendor contributes signatures to the open source Snort community; fastest to develop missing Cisco SNMP signature; well-designed dashboard gives instant status.
Cons: Relatively modest performer; searching for signatures is difficult; no comprehensive forensics and analysis tools; weak IPS configuration, forensics and reporting.

FortiGate-3600 (Fortinet)
Price: $30,000.
Pros: Blocked all exploits in final tests.
Cons: Lower port density than other products in this test; some software versions flooded exploit traffic (fixed in final version supplied by vendor); initially missed Cisco SNMP exploit; integration of IPS into UTM firewall lacks features and manageability.

Sentivist Smart Sensor ES1000 (NFR Security)
Price: Sentivist Smart Sensor ES1000, $75,000; Sentivist Management Platform, $10,000.
Pros: Blocked all exploits in final tests; very fine-grained control over traffic detection and response.
Cons: Relatively modest performance; initially missed Cisco SNMP exploit; complexity of interface not for the casual user.

TippingPoint 5000E (TippingPoint)
Price: TippingPoint 5000E, $170,000; Security Management System, $10,000.
Pros: Fastest performer for good (non-exploit) traffic; choice of fail-open and fail-closed modes; outstanding management interface overall.
Cons: Forwarded exploit traffic under heavy load; disables logging when overloaded.

IPS 5500-1000 (Top Layer Networks)
Price: $80,000.
Pros: Strong performer with one or two port-pairs; good anti-denial-of-service protection features; rate-based management tools are top of the pack.
Cons: Forwarded some exploit traffic (possibly because of vendor misconfiguration); initially missed Cisco SNMP exploit; weak forensics capabilities.

One port-pair configurations

The breakdown | Top Layer | Ambiron TrustWave | TippingPoint | Fortinet | Demarc | NFR
Baseline forwarding rate (10%) | 5 | 1.25 | 5 | 2.5 | 5 | 3.75
Forwarding rate under attack (15%) | 5 | 5 | 4.25 | 4 | 3.25 | 1
Baseline latency (15%) | 3.25 | 3.75 | 3.5 | 4 | 3.5 | 5
Latency under attack (15%) | 5 | 5 | 3.25 | 3.5 | 1.5 | 1
Protection from attack (25%) | 3 | 4 | 3 | 4 | 4 | 4
Usability (20%) | 3.5 | 2.8 | 4.1 | 2 | 2.7 | 3.9
TOTAL SCORE | 3.94 | 3.75 | 3.72 | 3.38 | 3.28 | 3.21

Two port-pair configurations

The breakdown | Top Layer | TippingPoint | Ambiron TrustWave | NFR | Demarc
Baseline forwarding rate (10%) | 5 | 5 | 1 | 1 | 2
Forwarding rate under attack (15%) | 4 | 3.75 | 1 | 1 | 1
Baseline latency (15%) | 2.75 | 5 | 2.75 | 4.25 | 3.5
Latency under attack (15%) | 5 | 2 | 5 | 1 | 1.5
Protection from attack (25%) | 3 | 3 | 4 | 4 | 4
Usability (20%) | 3.5 | 4.1 | 2.8 | 3.9 | 2.7
TOTAL SCORE | 3.71 | 3.68 | 2.97 | 2.82 | 2.64

Four port-pair configurations

The breakdown | TippingPoint | Ambiron TrustWave | NFR | Demarc
Baseline forwarding rate (10%) | 2.5 | 1 | 1 | 1
Forwarding rate under attack (15%) | 2.75 | 1.5 | 1 | 1
Baseline latency (15%) | 5 | 4.5 | 4.75 | 2.5
Latency under attack (15%) | 3.25 | 4 | 1 | 2.5
Protection from attack (25%) | 3 | 4 | 4 | 4
Usability (20%) | 4.1 | 2.8 | 3.9 | 2.7
TOTAL SCORE | 3.47 | 3.16 | 2.89 | 2.54

Scoring Key: 5: Exceptional; 4: Very good; 3: Average; 2: Below average; 1: Subpar or not available
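
To make the scoring arithmetic concrete, here is a minimal Python sketch of how the weighted totals above are computed. The weights come from the row labels and the scores from Top Layer's one port-pair column; only the variable names are ours:

# Weighted scorecard: weight each criterion score and sum.
weights = {
    "baseline_forwarding_rate": 0.10,
    "forwarding_rate_under_attack": 0.15,
    "baseline_latency": 0.15,
    "latency_under_attack": 0.15,
    "protection_from_attack": 0.25,
    "usability": 0.20,
}

# Top Layer's scores from the one port-pair breakdown above.
scores = {
    "baseline_forwarding_rate": 5,
    "forwarding_rate_under_attack": 5,
    "baseline_latency": 3.25,
    "latency_under_attack": 5,
    "protection_from_attack": 3,
    "usability": 3.5,
}

total = sum(weights[k] * scores[k] for k in weights)
print(round(total, 2))  # 3.94, matching the TOTAL SCORE row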

IPS usability is a mixed bag

The most important measure of an intrusion-prevention system is whether it does the job you bought it for. That said, it also needs to be usable: it must support the network manager in the day-to-day tasks that go hand in hand with running an IPS in an enterprise setting. After shaking out the IPS products for performance, we took them back into the test lab to look at them from another angle entirely: usability.

The clear winner on usability was 3Com TippingPoint's Security Management System, which drives the TippingPoint 5000E and turned in above-average performance on every task we set. Honorable mentions go to NFR Security's Sentivist Management Platform, which controls its Sentivist boxes, and to Top Layer Networks' IPS 5500; anyone managing an IPS would find that both meet their needs with a minimum of wasted effort.


Of course, performance isn't the only criterion for these products. The 5000E leaked a small amount of exploit traffic, not only in initial tests but also in two subsequent retests. TippingPoint issued a patch for this behavior two weeks ago. The 5000E also disabled logging in some tests. That's not necessarily a bad thing (indeed, TippingPoint says customers prefer a no-logging option to a complete shutdown), but other devices in the same test kept logging at slower rates.

The IPS 5500 scored well in tests involving TCP traffic, but it too leaked small amounts of exploit traffic. Top Layer attributed this to its having misconfigured the firewall policy for this test.

IPS systems from Demarc and NFR Security use sensor hardware from the same third-party supplier, Bivio Networks. The relatively modest performance results from both IPS systems in some tests might be caused by configuration settings on the sensor hardware, something both vendors discovered only after we'd wrapped up testing. On the plus side, both IPS systems stopped all attacks in our final round of testing.

Ambiron TrustWave and Demarc built their ipAngel-2500 and Sentarus IPS software around the open source Snort engine. The performance differences between them can be attributed to software and driver decisions made by the respective vendors.

Fortinet's FortiGate-3600 posted decent results in baseline tests involving benign traffic only, but forwarding rates fell and response times rose as we ratcheted up attack rates.

We should note that this is a test of IPS performance, not security. We didn't measure how many different exploits an IPS can repel, or how well. And we're not implying that just because an IPS is fast, it's secure.

Even so, security issues kept cropping up. As noted, no device passed initial testing without missing at least one exploit, disabling logging and/or going into a "fail open" mode where all traffic (good and bad) gets forwarded.

This has serious implications for IPS systems on production networks. Retesting isn't possible in the real world; attackers don't make appointments. Also, we used a laughably small number of exploits - just three in all - and offered them at rates never exceeding 16% of each system's maximum packet-per-second capacity. That we saw security issues at all came as a surprise.

The three exploits are all well known: SQL Slammer, the Witty worm and a Cisco malformed SNMP vulnerability. We chose these three because they're all widely publicized, they've been around awhile, and they're based on User Datagram Protocol (UDP), which allowed us detailed control over attack rates using the Spirent ThreatEx vulnerability assessment tool.
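
Because the exploits ride over UDP, attack rates can be dialed in as simple packet-per-second counts. As a rough illustration (this is not the Spirent tool's actual interface, and the capacity figure below is hypothetical), the offered attack loads work out like this:

# Attack loads were set as fractions of each device's maximum
# packet-per-second capacity. The capacity below is hypothetical.
max_pps = 600_000  # hypothetical device capacity, packets/sec

for fraction in (0.01, 0.04, 0.16):  # the three attack rates we used
    print(f"{fraction:.0%} attack load = {int(max_pps * fraction):,} exploit packets/sec")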

The IPS sensors we tested sit in line between other network devices, bridging and monitoring traffic between two or more Gigabit Ethernet ports. Given their inline placement, the ability to monitor traffic at high rates - even as fast as line rate - is critical. Accordingly, we designed our tests to determine throughput, latency and HTTP response time. We used TCP and UDP test traffic, and found significant differences in the ways IPS systems handle the two protocols (see How we tested IPS systems).
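
For a sense of what the HTTP response-time metric captures, here is a minimal sketch of timing a single HTTP transaction in Python. Our actual measurements came from Spirent test gear, not this code, and the URL is a placeholder:

import time
import urllib.request

URL = "http://192.0.2.10/index.html"  # placeholder test-bed address

start = time.perf_counter()
with urllib.request.urlopen(URL) as response:  # one HTTP GET through the IPS
    response.read()
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"HTTP response time: {elapsed_ms:.1f} ms")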

Vendors submitted IPS systems with varying port densities. FortiGate-3600 has a single pair of Gigabit Ethernet interfaces, while IPS 5500 has two pairs. The IPS systems from Ambiron TrustWave, Demarc, NFR and TippingPoint offer four port-pairs. To ensure apples-to-apples comparisons across all the products, we tested three times, using one, two and four pairs of ports where we could.

One port-pair

Our tests of single port-pairs are the only ones where all vendors were able to participate.

In baseline TCP performance tests (benign traffic only, no attacks), the Demarc, TippingPoint and Top Layer devices moved traffic at 959Mbps, near the maximum possible rate of around 965Mbps (see The IPS torture test, scenario 1, below). With 1,500 users simultaneously contending for bandwidth and TCP's built-in rate control ensuring fairness among users, this is about as close to line rate as it gets with TCP traffic.

The IPS torture test, scenario 1: testing with one port-pair across all vendors.

Vendors submitted IPSs with varying port densities. To ensure apples-to-apples comparisons across all products, we tested three times, using one, two and four pairs of ports where we could. If no results are listed for a vendor in a particular test scenario, that vendor did not supply that configuration. Because TCP comprises 95% of the Internet's backbone traffic, we emphasized the effects of attacks on TCP traffic in our tests. We also tested with pure UDP traffic, because that protocol is used by VoIP, streaming media, instant messaging and peer-to-peer applications. Bracketed footnotes indicate a security or logging issue associated with that result.
Throughput (Mbps) | Perfect device | Ambiron TrustWave | Demarc | Fortinet | NFR | TippingPoint | Top Layer
TCP baseline | 965 | 672 | 959 | 937 | 382 | 959 | 959
TCP plus 1% attack | 965 | 929 | 924 | 928 | 358 | 959 | 959 [1]
TCP plus 4% attack | 965 | 929 | 799 | 821 | 308 | 959 [2] | 954 [3]
TCP plus 16% attack | 965 | 868 | 216 | 453 | 158 | 317 [4] | 911 [5]
UDP baseline, 64-byte frames | 1,524 | 41 | 144 | 127 | 1,223 | 1,235 | 624
UDP baseline, 512-byte frames | 1,925 | 301 | 1,925 | 1,005 | 1,925 | 1,925 | 1,925
UDP baseline, 1518-byte frames | 1,974 | 628 | 1,960 | 1,974 | 1,974 | 1,974 | 1,974

Latency (millisec) | Perfect device | Ambiron TrustWave | Demarc | Fortinet | NFR | TippingPoint | Top Layer
TCP baseline | N/A | 372.11 | 430.50 | 326.43 | 144.05 | 399.50 | 447.02
TCP plus 1% attack traffic | N/A | 262.50 | 397.68 | 326.68 | 158.30 | 398.05 | 418.25 [1]
TCP plus 4% attack traffic | N/A | 252.82 | 409.05 | 1,272.95 | 192.52 | 393.16 [2] | 368.25 [3]
TCP plus 16% attack traffic | N/A | 325.70 | 15,607.59 | 2,865.32 | 11,522.86 | 8,170.68 [4] | 375.61 [5]
UDP baseline | N/A | 0.14 | 1.50 | 0.43 | 0.08 | 0.07 | 1.46
UDP plus 1% attack traffic | N/A | 0.12 | 259.12 | 17.36 | 7.59 | 1.40 | 5.34 [6]
UDP plus 4% attack traffic | N/A | 0.12 | 404.65 | 4.31 | 6.85 | 11.53 [7] | 8.43 [8]
UDP plus 16% attack traffic | N/A | 0.15 | 648.71 | 12.96 | 6.45 | 13.54 [9] | 5.55 [10]
Footnotes:
[1] Forwarded 86 Witty exploits.
[2] Forwarded 1 Cisco malformed SNMP exploit.
[3] Forwarded 362 Witty exploits.
[4] Forwarded 1 Cisco exploit; disabled logging for 10 minutes.
[5] Forwarded 370 Witty exploits.
[6] Forwarded 280 Witty exploits.
[7] Disabled logging for 10 minutes.
[8] Forwarded 322 Witty exploits; incorrectly labeled some exploits as SYN floods despite pure UDP load.
[9] Disabled logging for 10 minutes.
[10] Forwarded 159 Witty exploits; incorrectly labeled some exploits as SYN floods despite pure UDP load.
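
The "perfect device" UDP numbers fall straight out of Gigabit Ethernet framing: each frame carries 20 bytes of preamble and interframe-gap overhead, and a port-pair forwards traffic in both directions at once. Assuming throughput counts frame bits summed across both directions (our reading of these rows), a short sketch reproduces the table's ceilings:

# Line-rate ceilings for the "perfect device" UDP rows above.
# Assumes throughput counts frame bits, summed over both directions
# of a full-duplex Gigabit Ethernet port-pair.
LINE_RATE = 1_000_000_000  # bits/sec in one direction
OVERHEAD = 20              # bytes/frame: 8 preamble + 12 interframe gap

for frame_bytes in (64, 512, 1518):
    max_pps = LINE_RATE / ((frame_bytes + OVERHEAD) * 8)
    mbps = 2 * max_pps * frame_bytes * 8 / 1e6
    print(f"{frame_bytes}-byte frames: {mbps:,.0f} Mbps")
# Prints 1,524 / 1,925 / 1,974 Mbps -- the "perfect device" values.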

It was a very different story when we offered exploit traffic, with most systems slowing down sharply. The lone exception was the ipAngel, which moved traffic under heavy attack at rates equal to or better than its baseline rates. All others slowed substantially under heavy attack - and worse, some forwarded exploit traffic.

The IPS 5500 leaked a small amount of Witty worm traffic at all three attack rates we used - 1%, 4% and 16% of its TCP packet-per-second rate. The vendor blamed a misconfiguration of its firewall policy (vendors configured device security for this project). With its default firewall policy enabled, Top Layer says its device would have blocked exploits targeting any port not covered by the vendor's Witty signature.

The TippingPoint 5000E leaked a small amount of malformed Cisco SNMP traffic when it was offered at 4% and 16% of the device's maximum forwarding rate, even after we applied a second and third signature update.
