IPS performance tests show products must slow down for safety

Results indicate high performance doesn't always mean high security.

High-end intrusion-prevention systems move traffic at multigigabit rates and keep exploits out of the enterprise, but they might not do both at the same time. In lab tests of top-of-the-line IPS systems from six vendors, we encountered numerous trade-offs between performance and security.

Several devices we tested offered line-rate throughput and impressively low latency, but also leaked exploit traffic at these high rates. With other devices, we saw rates drop to zero as IPS systems struggled to fend off attacks.

In our initial round of testing, all IPS systems missed at least one variant of an exploit we expected they'd easily catch - one that causes vulnerable Cisco routers and switches to reboot. While most vendors plugged the hole by our second or third rounds of testing (and 3Com's TippingPoint 5000E spotted all but the most obscure version the first time out), we were surprised that so many vendors missed this simple, well-publicized and potentially devastating attack (see Can anyone stop this exploit?).

These issues make it difficult to pick a winner this time around (see The breakdown scorecards, below). If high performance is the most important criterion in choosing an IPS, the TippingPoint 5000E and Top Layer Networks' IPS 5500 are the clear leaders. They were the fastest boxes on the test bed, posting throughput and latency results more commonly seen in Ethernet switches than in IPS systems.

ipAngel-2500 (Ambiron TrustWave)
Price: $100,000
Pros: Blocked all exploits in final tests; innovative, vulnerability-based configuration system.
Cons: Modest performance from beta hardware and drivers; initially missed Cisco SNMP exploit; weak forensics and alerting capabilities.

Sentarus Network Security Sensor (Demarc Threat Protection Solutions)
Price: Sensor, $37,000; Sentarus Threat Protection System management application starts at $25 per node.
Pros: Blocked all exploits in final tests; vendor contributes signatures to open source Snort community; fastest to develop missing Cisco SNMP signature; well-designed dashboard gives instant status.
Cons: Relatively modest performer; searching for signatures is difficult; no comprehensive forensics and analysis tools; weak IPS configuration, forensics and reporting.

FortiGate-3600 (Fortinet)
Price: $30,000
Pros: Blocked all exploits in final tests.
Cons: Lower port density than other products in this test; some software versions flooded exploit traffic (fixed in final version supplied by vendor); initially missed Cisco SNMP exploit; integration of IPS into UTM firewall lacks features and manageability.

Sentivist Smart Sensor ES1000 (NFR Security)
Price: Sentivist Smart Sensor ES1000, $75,000; Sentivist Management Platform, $10,000.
Pros: Blocked all exploits in final tests; very fine-grained control over traffic detection and response.
Cons: Relatively modest performance; initially missed Cisco SNMP exploit; complexity of interface not for the casual user.

TippingPoint 5000E (TippingPoint)
Price: TippingPoint 5000E, $170,000; Security Management System, $10,000.
Pros: Fastest performer for good (non-exploit) traffic; choice of fail-open and fail-closed modes; outstanding management interface overall.
Cons: Forwarded exploit traffic under heavy load; disables logging when overloaded.

IPS 5500-1000 (Top Layer Networks)
Price: $80,000
Pros: Strong performer with one or two port-pairs; good anti-denial-of-service protection features; rate-based management tools are top of the pack.
Cons: Forwarded some exploit traffic (possibly because of vendor misconfiguration); initially missed Cisco SNMP exploit; weak forensics capabilities.

One port-pair configurations

The breakdown | Top Layer | Ambiron TrustWave | TippingPoint | Fortinet | Demarc | NFR
Baseline forwarding rate 10% | 5 | 1.25 | 5 | 2.5 | 5 | 3.75
Forwarding rate under attack 15% | 5 | 5 | 4.25 | 4 | 3.25 | 1
Baseline latency 15% | 3.25 | 3.75 | 3.5 | 4 | 3.5 | 5
Latency under attack 15% | 5 | 5 | 3.25 | 3.5 | 1.5 | 1
Protection from attack 25% | 3 | 4 | 3 | 4 | 4 | 4
Usability 20% | 3.5 | 2.8 | 4.1 | 2 | 2.7 | 3.9
TOTAL SCORE | 3.94 | 3.75 | 3.72 | 3.38 | 3.28 | 3.21

Two port-pair configurations

The breakdown | Top Layer | TippingPoint | Ambiron TrustWave | NFR | Demarc
Baseline forwarding rate 10% | 5 | 5 | 1 | 1 | 2
Forwarding rate under attack 15% | 4 | 3.75 | 1 | 1 | 1
Baseline latency 15% | 2.75 | 5 | 2.75 | 4.25 | 3.5
Latency under attack 15% | 5 | 2 | 5 | 1 | 1.5
Protection from attack 25% | 3 | 3 | 4 | 4 | 4
Usability 20% | 3.5 | 4.1 | 2.8 | 3.9 | 2.7
TOTAL SCORE | 3.71 | 3.68 | 2.97 | 2.82 | 2.64

Four port-pair configurations

The breakdown | TippingPoint | Ambiron TrustWave | NFR | Demarc
Baseline forwarding rate 10% | 2.5 | 1 | 1 | 1
Forwarding rate under attack 15% | 2.75 | 1.5 | 1 | 1
Baseline latency 15% | 5 | 4.5 | 4.75 | 2.5
Latency under attack 15% | 3.25 | 4 | 1 | 2.5
Protection from attack 25% | 3 | 4 | 4 | 4
Usability 20% | 4.1 | 2.8 | 3.9 | 2.7
TOTAL SCORE | 3.47 | 3.16 | 2.89 | 2.54

Scoring Key: 5: Exceptional; 4: Very good; 3: Average; 2: Below average; 1: Subpar or not available
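
Each TOTAL SCORE above is simply the weighted sum of the category scores. As a sanity check, here is a minimal Python sketch - our own illustration, not the scoring tool used in the lab - that reproduces Top Layer's one port-pair total from the table:

# Reproduces a TOTAL SCORE row: each category score is multiplied by
# its weight and the results are summed. Weights and Top Layer's one
# port-pair scores are taken from the tables above.
weights = {
    "baseline_forwarding": 0.10,
    "forwarding_under_attack": 0.15,
    "baseline_latency": 0.15,
    "latency_under_attack": 0.15,
    "protection_from_attack": 0.25,
    "usability": 0.20,
}
top_layer = {
    "baseline_forwarding": 5.0,
    "forwarding_under_attack": 5.0,
    "baseline_latency": 3.25,
    "latency_under_attack": 5.0,
    "protection_from_attack": 3.0,
    "usability": 3.5,
}
total = sum(weights[category] * top_layer[category] for category in weights)
print(f"{total:.2f}")  # 3.94, matching the TOTAL SCORE row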

IPS usability is a mixed bag

The most important feature of an intrusion-prevention system is whether it does the job you bought it for. That said, it also needs to be usable, in the sense that it supports the network manager in the day-to-day tasks that go hand in hand with using an IPS in an enterprise setting. After shaking out the IPS products for performance, we took them back into the test lab to look at them from another angle entirely: usability.

The clear winner in terms of usability was 3Com TippingPoint's Security Management System, used to drive the TippingPoint 5000E; it turned in above-average performance on every task we set. Honorable mentions go to NFR Security's Sentivist Management Platform, which controls its Sentivist boxes, and Top Layer Networks' IPS 5500; anyone managing an IPS would find that either meets their needs with a minimum of wasted effort.

For a full discussion of this usability testing, see the full results of usability testing, linked at the end of this article.

Of course, performance isn't the only criterion for these products. The 5000E leaked a small amount of exploit traffic, not only in initial tests but also in two subsequent retests. TippingPoint issued a patch for this behavior two weeks ago. The 5000E also disabled logging in some tests. That's not necessarily a bad thing (indeed, TippingPoint says customers prefer a no-logging option to a complete shutdown), but other devices in the same test kept logging at slower rates.

The IPS 5500 scored well in tests involving TCP traffic, but it too leaked small amounts of exploit traffic. Top Layer attributed this to its having misconfigured the firewall policy for this test.

IPS systems from Demarc and NFR Security use sensor hardware from the same third-party supplier, Bivio Networks. The relatively modest performance results from both IPS systems in some tests might be caused by configuration settings on the sensor hardware, something both vendors discovered only after we'd wrapped up testing. On the plus side, both IPS systems stopped all attacks in our final round of testing.

Ambiron TrustWave and Demarc built their ipAngel-2500 and Sentarus IPS software around the open source Snort engine. The performance differences between them can be attributed to software and driver decisions made by the respective vendors.

Fortinet's FortiGate-3600 posted decent results in baseline tests involving benign traffic only, but forwarding rates fell and response times rose as we ratcheted up attack rates.

We should note that this is a test of IPS performance, not security. We didn't measure how many different exploits an IPS can repel, or how well. And we're not implying that just because an IPS is fast, it's secure.

Even so, security issues kept cropping up. As noted, no device passed initial testing without missing at least one exploit, disabling logging and/or going into a "fail open" mode where all traffic (good and bad) gets forwarded.

This has serious implications for IPS systems on production networks. Retesting isn't possible in the real world; attackers don't make appointments. Also, we used a laughably small number of exploits - just three in all - and offered them at rates never exceeding 16% of each system's maximum packet-per-second capacity. That we saw security issues at all came as a surprise.

The three exploits are all well known: SQL Slammer, the Witty worm and a Cisco malformed SNMP vulnerability. We chose these three because they're all widely publicized, they've been around awhile, and they're based on User Datagram Protocol (UDP), allowing us detailed control over attack rates using the Spirent ThreatEx vulnerability assessment tool.
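
Because all three exploits ride over UDP, attack load can be expressed as a simple packet-per-second budget. The sketch below is purely illustrative - the capacity figure is hypothetical, not a measured result from this test - but it shows how the 1%, 4% and 16% levels translate into attack rates:

# Illustrative only: converts a device's maximum packet-per-second
# capacity into the 1%, 4% and 16% attack levels used in this test.
# The example capacity is hypothetical, not a measured result.
def attack_rates(max_pps, levels=(0.01, 0.04, 0.16)):
    return {level: max_pps * level for level in levels}

example_capacity = 1_000_000  # hypothetical: 1 million packets/sec
for level, rate in attack_rates(example_capacity).items():
    print(f"{level:.0%} attack level -> {rate:,.0f} packets/sec")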

The IPS sensors we tested sit in line between other network devices, bridging and monitoring traffic between two or more Gigabit Ethernet ports. Given their inline placement, the ability to monitor traffic at high rates - even as fast as line rate - is critical. Accordingly, we designed our tests to determine throughput, latency and HTTP response time. We used TCP and UDP test traffic, and found significant differences in the ways IPS systems handle the two protocols (see How we tested IPS systems).

Vendors submitted IPS systems with varying port densities. FortiGate-3600 has a single pair of Gigabit Ethernet interfaces, while IPS 5500 has two pairs. The IPS systems from Ambiron TrustWave, Demarc, NFR and TippingPoint offer four port-pairs. To ensure apples-to-apples comparisons across all the products, we tested three times, using one, two and four pairs of ports where we could.

One port-pair

Our tests of single port-pairs are the only ones where all vendors were able to participate.

In baseline TCP performance tests (benign traffic only, no attacks), the Demarc, TippingPoint and Top Layer devices moved traffic at 959Mbps, near the maximum possible rate of around 965Mbps (see The IPS torture test: scenario 1, below). With 1,500 users simultaneously contending for bandwidth and TCP's built-in rate control ensuring fairness among users, this is about as close to line rate as it gets with TCP traffic.

The IPS torture test: scenario 1

Vendors submitted IPSs with varying port densities. To ensure apples-to-apples comparisons across all products, we tested three times, using one, two and four pairs of ports where we could. If no results are listed for a vendor in a particular test scenario, the vendor did not supply that configuration. Because TCP comprises 95% of the Internet's backbone traffic, we emphasized the effects of attacks on TCP traffic in our tests. However, we also conducted tests with pure User Datagram Protocol (UDP) traffic, because that protocol is used by VoIP, streaming media, instant messaging and peer-to-peer applications. Footnotes indicate a security issue (forwarded exploit traffic) or a logging issue associated with a result.

Scenario No. 1: testing with one port pair across all vendors

Throughput (Mbps) | Perfect device | Ambiron TrustWave | Demarc | Fortinet | NFR | TippingPoint | Top Layer
TCP baseline | 965 | 672 | 959 | 937 | 382 | 959 | 959
TCP plus 1% attack | 965 | 929 | 924 | 928 | 358 | 959 | 959 [1]
TCP plus 4% attack | 965 | 929 | 799 | 821 | 308 | 959 [2] | 954 [3]
TCP plus 16% attack | 965 | 868 | 216 | 453 | 158 | 317 [4] | 911 [5]
UDP baseline, 64-byte frames | 1,524 | 41 | 144 | 127 | 1,223 | 1,235 | 624
UDP baseline, 512-byte frames | 1,925 | 301 | 1,925 | 1,005 | 1,925 | 1,925 | 1,925
UDP baseline, 1518-byte frames | 1,974 | 628 | 1,960 | 1,974 | 1,974 | 1,974 | 1,974

Latency (millisec) | Perfect device | Ambiron TrustWave | Demarc | Fortinet | NFR | TippingPoint | Top Layer
TCP baseline | N/A | 372.11 | 430.50 | 326.43 | 144.05 | 399.50 | 447.02
TCP plus 1% attack traffic | N/A | 262.50 | 397.68 | 326.68 | 158.30 | 398.05 | 418.25 [1]
TCP plus 4% attack traffic | N/A | 252.82 | 409.05 | 1,272.95 | 192.52 | 393.16 [2] | 368.25 [3]
TCP plus 16% attack traffic | N/A | 325.70 | 15,607.59 | 2,865.32 | 11,522.86 | 8,170.68 [4] | 375.61 [5]
UDP baseline | N/A | 0.14 | 1.50 | 0.43 | 0.08 | 0.07 | 1.46
UDP plus 1% attack traffic | N/A | 0.12 | 259.12 | 17.36 | 7.59 | 1.40 | 5.34 [6]
UDP plus 4% attack traffic | N/A | 0.12 | 404.65 | 4.31 | 6.85 | 11.53 [7] | 8.43 [8]
UDP plus 16% attack traffic | N/A | 0.15 | 648.71 | 12.96 | 6.45 | 13.54 [9] | 5.55 [10]

Footnotes: [1] Forwarded 86 Witty exploits; [2] Forwarded 1 Cisco malformed SNMP exploit; [3] Forwarded 362 Witty exploits; [4] Forwarded 1 Cisco exploit, disabled logging for 10 minutes; [5] Forwarded 370 Witty exploits; [6] Forwarded 280 Witty exploits; [7] Disabled logging for 10 minutes; [8] Forwarded 322 Witty exploits, incorrectly labeled some exploits as SYN floods despite pure UDP load; [9] Disabled logging for 10 minutes; [10] Forwarded 159 Witty exploits, incorrectly labeled some exploits as SYN floods despite pure UDP load.

It was a very different story when we offered exploit traffic: most systems slowed down sharply. The lone exception was ipAngel, which under heavy attack moved traffic at rates equal to or better than its baseline rates. All others slowed substantially under heavy attack - and worse, some forwarded exploit traffic.

The IPS 5500 leaked a small amount of Witty worm traffic at all three attack rates we used - 1%, 4% and 16% of its TCP packet-per-second rate. The vendor blamed a misconfiguration of its firewall policy (vendors configured device security for this project). With its default firewall policy enabled, Top Layer says its device would have blocked exploits targeting any port not covered by the vendor's Witty signature.

The TippingPoint 5000E leaked a small amount of malformed Cisco SNMP traffic when it was offered at 4% and 16% of the device's maximum forwarding rate, even after we applied a second and third signature update.

Further, with attacks at the 16% rate, the TippingPoint device disabled all alerts (it continued to block exploits but didn't log anything) for 10 minutes. TippingPoint calls this a load-mitigation feature and says customers overwhelmingly prefer this setting to having the device shut down if it becomes overloaded.

We understand that device behavior during overload is ultimately a policy decision. For enterprises where high availability trumps security, the ability to continue forwarding packets is essential - even if it means a temporary shutdown of IPS monitoring. More-paranoid sites might block all traffic in response to an overload. In this test, the TippingPoint and NFR devices (and possibly others) explicitly give users a choice of behaviors, a desirable feature in our view.

In terms of HTTP response time, NFR's Sentivist Smart Sensor delivered Web pages the fastest, at an average of about 144msec for an 11KB object. This is the average time it took for each of 1,500 users to request and retrieve a Web page with a single 11KB object, with no attack traffic present. The NFR sensor also flew through the 1% and 4% attack tests, with response times lower than those for all other vendors' baseline measurements.
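
The actual measurements came from Spirent test gear driving 1,500 concurrent users; the single-client Python sketch below, with a placeholder URL, is only meant to make the request-to-retrieval metric concrete:

# Illustrates the response-time metric: time from issuing an HTTP
# request to retrieving the complete object. The URL is a placeholder;
# the actual test used Spirent tools and 1,500 concurrent users.
import time
import urllib.request

def http_response_time(url):
    start = time.monotonic()
    with urllib.request.urlopen(url) as response:
        response.read()  # pull down the entire object, e.g. an 11KB page
    return time.monotonic() - start

elapsed = http_response_time("http://example.com/")  # placeholder URL
print(f"Response time: {elapsed * 1000:.1f} msec")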

Something went horribly wrong for the Sentivist device in the 16% attack test, however, with response times registering nearly 80 times higher than in the baseline test. It could simply be an anomalous result; response time didn't increase nearly as much in the two and four port-pair tests on the Sentivist device. Further, the device's latency spiked only when hit with exploit traffic at more than 60Mbps, which would suggest a serious and dedicated denial-of-service (DoS) attack was underway. After we concluded testing, NFR said it identified and corrected a CPU oversubscription issue, but we did not verify this.

Among the other devices, ipAngel's response time degraded the least as we ratcheted up attack rates. This isn't too surprising, considering the powerful sensor hardware the vendor supplied for testing: the ipAngel sensor had eight dual-core Opteron CPUs.

It's important to note that all results presented here are averages over the three-minute steady-state phase of our tests. These averages are valid, but they don't tell the whole story. As dramatic as the reduction in average performance was in some tests, the actual results over time show an even sharper drop in response to attacks (see TCP forwarding rates under attack over time, below).

TCP forwarding rates under attack over time

All IPS systems slowed traffic to some extent under our heaviest attack, but the degradation differed in terms of degree and duration. ipAngel's rates degraded the least, although the rate at the end of the test for this product was 824Mbps, more than 100Mbps lower than the system's 929Mbps rate at the beginning of the test. Top Layer's IPS 5500 did the best job of bouncing back to its original rate after an attack, but even so it momentarily slowed down traffic by more than 550Mbps, to less than 400Mbps. Whether users would notice this slowdown depends on the application. Something involving sustained high-speed data transfer (for example, FTP) would experience a brief slowdown.

The TippingPoint 5000E's rates dipped to 10Mbps under attack, down from around 400Mbps, and it's even worse for the others, with rates going down all the way to zero. The Demarc and NFR numbers suggest an overload, while the Fortinet device appears to recover, then falter again.
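
The gap between a healthy-looking average and a painful worst second is easy to demonstrate. The per-second samples below are invented for illustration - they are not measured results - but they show how a brief dip of the kind we observed all but disappears in a steady-state average:

# Hypothetical per-second forwarding rates (Mbps), invented to show
# how a steady-state average can mask a brief but severe dip.
samples = [959, 959, 958, 380, 520, 940, 957, 959, 959, 959]

average = sum(samples) / len(samples)
worst = min(samples)
print(f"Average over the run: {average:.0f} Mbps")  # 855 Mbps - looks healthy
print(f"Worst second: {worst} Mbps")  # the dip an FTP transfer would feel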

The sharp fall in TCP rates also affects HTTP page-response time:

HTTP response time under attack

Response time - the interval between a client requesting and receiving a Web page - is only a few hundred milliseconds in baseline tests. Under our heaviest attack, however, many IPS systems introduced delays running well into the seconds. Ambiron TrustWave and Top Layer IPS systems did the best job of maintaining low and consistent response time under attack.

These results show that IPS devices have the potential to cause significant delays in network performance, way out of proportion to the amount of malicious traffic in the network. In effect, an IPS could be the instrument that delivers a self-inflicted DoS attack, where a small amount of attack traffic can make a gigabit network painfully slow for Web traffic and completely unusable for file and print service.

After testing concluded, Demarc said new performance parameters in the Bivio sensor hardware it uses would have dramatically improved its numbers. Unfortunately, time constraints prevented us from verifying that.

We also measured UDP throughput. We consider the UDP data less important than the TCP data, because UDP typically is a much smaller percentage of traffic on the Internet side of production networks, but these tests still are a useful way to describe the absolute limits of device forwarding and delay. If you plan to put the IPS deep in your network, UDP traffic from sources such as backups or storage servers could form the bulk of your traffic.

Most devices moved midsize and large UDP packets at or near the theoretical line rate. The two exceptions were FortiGate-3600, which moved midsize packets at about 50% of line rate, and ipAngel, which moved UDP traffic (for all packet lengths) at far lower rates than it moved TCP traffic. Ambiron TrustWave says its sensor used betas of interface device drivers and later versions show higher throughput and lower latency with UDP; we did not verify this.

As in the TCP tests, latency in the UDP testing also spiked sharply when we subjected most IPS systems to attack, with hundredfold (or more) increases in delay not uncommon. The only exception was ipAngel, which delayed packets by roughly the same amount in the attack tests as in the baseline test. This could be attributable to ipAngel's UDP throughput, which was much lower than that of the other devices in this test.

We gave all vendors an opportunity to review and respond to test results before publication. TippingPoint found in internal testing that latency would have been far lower had we measured at 95%, rather than 100%, of the throughput rate. Top Layer asked for a smaller reduction in load (perhaps to 99.9%) and attributed its increased UDP latency to clocking differences between our test tools and its IPS.

While lower loads probably would have produced lower delays, we respectfully disagree with both vendors' suggestions, on two grounds. First, as described in RFC 2544 - the industry standard for network device performance benchmarking - latency is measured at the throughput rate and not at X percent of the throughput rate, where X is some number that produces "good" latency.

Second, neither vendor's device bears a sticker warning customers that rates should never exceed X percent of line rate. If vendors want to claim high throughput, they also should measure latency at the throughput level.
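
For reference, RFC 2544 defines throughput as the highest offered load a device forwards with zero loss, typically found by a binary search; latency is then measured while offering traffic at that rate. The Python sketch below is a simplified rendering of that procedure, with run_trial() as a stand-in for real test gear:

# Simplified RFC 2544-style throughput search. run_trial(rate) stands in
# for real test equipment: it offers traffic at the given rate and
# returns True if zero frames were lost.
def find_throughput(run_trial, line_rate, tolerance=0.001):
    low, high, throughput = 0.0, line_rate, 0.0
    while (high - low) / line_rate > tolerance:
        rate = (low + high) / 2
        if run_trial(rate):
            throughput, low = rate, rate  # zero loss: try higher
        else:
            high = rate                   # loss: back off
    return throughput  # latency is then measured at this rate

# Toy device that starts dropping frames above 96% of a 1,000Mbps line:
print(f"{find_throughput(lambda rate: rate <= 960.0, line_rate=1000.0):.0f} Mbps")  # ~960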

Two port-pairs

In baseline TCP performance tests, the IPS 5500 was the fastest device, with the TippingPoint 5000E not far behind (see The IPS torture test: scenario 2, below). The Ambiron TrustWave, Demarc and NFR devices all moved TCP traffic at rates much further below the theoretical maximum than in the single port-pair tests.

The IPS torture test: scenario 2

Testing with two port pairs

Throughput (Mbps) | Perfect device | Ambiron TrustWave | Demarc | NFR | TippingPoint | Top Layer
TCP baseline | 1,930 | 1,013 | 1,446 | 382 | 1,825 | 1,911
TCP plus 1% attack | 1,930 | 990 | 1,310 | 351 | 1,830 [11] | 1,837 [12]
TCP plus 4% attack | 1,930 | 937 | 782 | 307 | 1,340 [13] | 1,429 [14]
TCP plus 16% attack | 1,930 | 759 | 498 | 205 | 1,340 [15] | 1,254 [16]
UDP baseline, 64-byte frames | 3,048 | 82 | 288 | 712 | 1,226 | 605
UDP baseline, 512-byte frames | 3,850 | 602 | 2,009 | 2,009 | 3,850 | 3,850
UDP baseline, 1518-byte frames | 3,948 | 1,172 | 1,977 | 1,977 | 3,948 | 3,948

Latency (millisec) | Perfect device | Ambiron TrustWave | Demarc | NFR | TippingPoint | Top Layer
TCP baseline | N/A | 268.07 | 158.03 | 146.26 | 71.70 | 274.42
TCP plus 1% attack traffic | N/A | 269.05 | 169.73 | 162.89 | 86.11 [11] | 84.95 [12]
TCP plus 4% attack traffic | N/A | 304.24 | 365.64 | 194.34 | 1,001.69 [13] | 179.64 [14]
TCP plus 16% attack traffic | N/A | 460.65 | 16,692.43 | 7,074.89 | 1,062.49 [15] | 1,338.22 [16]
UDP baseline | N/A | 0.09 | 0.31 | 0.09 | 0.08 | 2.33
UDP plus 1% attack traffic | N/A | 0.13 | 202.30 | 0.12 | 4.16 [17] | 12.35 [18]
UDP plus 4% attack traffic | N/A | 0.18 | 391.80 | 0.10 | 8.95 [19] | 12.18 [20]
UDP plus 16% attack traffic | N/A | 5.32 | 566.80 | 0.64 | 6.63 [21] | 7.15 [22]

Footnotes: [11] Forwarded 9 Cisco malformed SNMP exploits; [12] Forwarded 174 Witty exploits; [13] Forwarded 13 Cisco exploits, disabled logging for 10 minutes; [14] Forwarded 524 Witty exploits; [15] Forwarded 57 Cisco exploits, disabled logging for 10 minutes; [16] Forwarded 1,158 SQL Slammer, 1,140 Witty and 1,138 Cisco exploits; [17] Disabled logging for 10 minutes; [18] Forwarded 199 Witty exploits, incorrectly labeled some exploits as SYN floods despite pure UDP load; [19] Disabled logging for 10 minutes; [20] Forwarded 139 Witty exploits, incorrectly labeled some exploits as SYN floods despite pure UDP load; [21] Disabled logging for 10 minutes; [22] Forwarded 33 Witty exploits, incorrectly labeled some exploits as SYN floods despite pure UDP load.

The Top Layer and TippingPoint devices also produced the highest rates in the attack tests, but results were problematic. The TippingPoint 5000E forwarded a small amount of Cisco exploit traffic in all three of our attack tests, and disabled logging in our 4% and 16% attack tests. The Top Layer device forwarded small amounts of Witty worm traffic in all three attack tests. The issues for both vendors were the same as in the single port-pair tests: TippingPoint had a problem with the Cisco signature, and Top Layer had a problem with its firewall configuration.

The Sentarus sensor and ipAngel were the fastest IPS systems among devices that did not forward any exploit traffic. The Sentarus came out on top when we offered attacks at 1% of the TCP rate, moving traffic at close to the baseline speed. The ipAngel was quickest in the 4% and 16% attack tests, though rates were about 10% and 25% lower, respectively, than in the baseline test.

HTTP response times also shot up dramatically under attack, though in some cases the delays were lower with two port-pairs than with one. This could be attributed to device architectures in which IPS sensors use dedicated CPUs and/or network processors for each port-pair.

In the UDP tests, the TippingPoint and Top Layer IPS systems were again the fastest, moving midsize and large frames at line rate. The Demarc and NFR devices were about half that fast: Both posted identical numbers, possibly because both use the same Bivio sensor hardware.

UDP latency was higher under attack than in the baseline tests, especially for Sentarus in the 16% attack test. However, excluding that one result, latency generally rose less with two port-pairs under attack than with one - again, possibly caused by distributed processing designs.

Four port-pairs

With four pairs of Gigabit Ethernet interfaces (thus, rates theoretically capable of rising as high as 8Gbps), this was the acid test for IPS performance.

The TippingPoint 5000E was hands-down the fastest IPS in our TCP baseline tests (see The IPS torture test: scenario 3, below). It moved a mix of applications at 3.434Gbps, not far from the test bed's theoretical top rate of 3.86Gbps, and about twice as fast as the next-quickest sensor, ipAngel.

The IPS torture test: scenario 3

Testing with four port pairs

Throughput (Mbps) | Perfect device | Ambiron TrustWave | Demarc | NFR | TippingPoint
TCP baseline | 3,860 | 1,730 | 1,514 | 382 | 3,434
TCP plus 1% attack | 3,860 | 1,692 | 1,268 | 351 | 3,402 [23]
TCP plus 4% attack | 3,860 | 1,538 | 694 | 307 | 2,317 [24]
TCP plus 16% attack | 3,860 | 1,317 | 350 | 205 | 1,875 [25]
UDP baseline, 64-byte frames | 6,095 | 130 | 541 | 712 | 1,210
UDP baseline, 512-byte frames | 7,699 | 1,203 | 2,556 | 2,009 | 4,018
UDP baseline, 1518-byte frames | 7,896 | 2,400 | 2,899 | 1,977 | 4,454

Latency (millisec) | Perfect device | Ambiron TrustWave | Demarc | NFR | TippingPoint
TCP baseline | N/A | 160.91 | 166.27 | 146.26 | 112.25
TCP plus 1% attack traffic | N/A | 167.80 | 237.12 | 162.89 | 110.72 [23]
TCP plus 4% attack traffic | N/A | 194.55 | 630.03 | 194.34 | 627.8 [24]
TCP plus 16% attack traffic | N/A | 636.18 | 15,285.89 | 7,074.89 | 491.99 [25]
UDP baseline | N/A | 1.24 | 343.63 | 0.11 | 0.04
UDP plus 1% attack traffic | N/A | 7.13 | 205.78 | 0.10 | 28.02 [26]
UDP plus 4% attack traffic | N/A | 8.64 | 388.81 | 0.11 | 9.93 [27]
UDP plus 16% attack traffic | N/A | 16.61 | 566.74 | 0.11 | 5.85 [28]

Footnotes: [23] Forwarded 1,280 Cisco malformed SNMP exploits; [24] Forwarded 1,128 Cisco exploits; [25] Forwarded 795 Cisco exploits, disabled logging for 10 minutes; [26] Disabled logging for 10 minutes; [27] Disabled logging for 10 minutes; [28] Disabled logging for 10 minutes.

In our attack tests, the TippingPoint 5000E again leaked small amounts of Cisco exploit traffic and also disabled logging in the 16% attack test.

Of the devices with no security issues, ipAngel was fastest. As in tests with two port-pairs, ipAngel's TCP forwarding rates degraded as we ratcheted up attack rates, but it did not leak any exploit traffic.

Most of the devices increased HTTP response time under attack, especially in the 16% attack test. In the worst case, response time through Sentarus spiked from 166msec in the baseline test to more than 15 seconds in the 16% attack test. That may have been attributable to a tuning parameter in the Bivio sensor, according to Demarc. Unfortunately, we learned about this parameter only after testing concluded.

TippingPoint's IPS was also the fastest in our UDP tests. In baseline tests it moved large packets at 4.454Gbps, the fastest single rate in our tests. It was also the top performer in baseline tests of short and medium-length packets.

Latency skyrocketed for multiple devices once we combined benign and attack UDP traffic. For example, the TippingPoint 5000E delayed benign UDP traffic by nearly 30 seconds in a test with attacks at 1% of its capacity, and the device also disabled logging in all three of our attack tests. The other products also slowed traffic by huge margins over the baseline test. The IPS with the best UDP latency under attack was Sentivist, not just with four port-pairs but indeed in all tests.

If the test results say anything, it's that performance and security are two very different goals - and, at least with these devices, the two often bear no sensible relationship to one another.

These tests turned up two different kinds of IPS systems: devices that move traffic at very high rates, and devices that block attacks but aren't the speediest performers. Picking the right IPS comes down to finding the right balance between security and performance.

Newman is president of Network Test, an independent engineering services firm in Westlake Village, Calif. He can be reached at dnewman@networktest.com.

NW Lab Alliance

Newman and Joel Snyder are members of the Network World Lab Alliance, a cooperative of the premier reviewers in the network industry, each bringing to bear years of practical experience on every review. For more Lab Alliance information, including what it takes to become a member, go to www.networkworld.com/alliance.

Thanks to all

Network World gratefully acknowledges the vendors that supported this project. Spirent Communications supplied its Spirent ThreatEx, Avalanche, Reflector, SmartBits and AX/4000 test tools, and engineer Chuck McAuley assisted with ThreatEx configuration. Apcon supplied an Intellapatch virtual patch panel that tied together the test bed. And Red Hat supplied its Red Hat Enterprise Linux operating system, used on test-bed management servers.


Next: Full results of usability testing

Learn more about this topic

IPS Buyer's Guide

Three IPS products pass security evaluation tests, 06/26/06

Review of Arxceo's Ally IP IPS shows strong features, reduced manageability, 03/06/06

IPS: 6 hot technologies for 2006, 01/09/06
