IPS performance tests show products must slow down for safety
Results indicate high performance doesn't always mean high security.
High-end intrusion-prevention systems move traffic at multigigabit rates and keep exploits out of the enterprise, but they might not do both at the same time. In lab tests of top-of-the-line IPS systems from six vendors, we encountered numerous trade-offs between performance and security.
Several devices we tested offered line-rate throughput and impressively low latency, but also leaked exploit traffic at these high rates. With other devices, we saw rates drop to zero as IPS systems struggled to fend off attacks.
In our initial round of testing, all IPS systems missed at least one variant of an exploit we expected they'd easily catch - one that causes vulnerable Cisco routers and switches to reboot. While most vendors plugged the hole by our second or third rounds of testing (and 3Com's TippingPoint 5000E spotted all but the most obscure version the first time out), we were surprised that so many vendors missed this simple, well-publicized and potentially devastating attack (see Can anyone stop this exploit?).
These issues make it difficult to pick a winner this time around (see link to NetResults graphic, below). If high performance is the most important criterion in choosing an IPS, the TippingPoint 5000E and Top Layer Networks' IPS 5500 are the clear leaders. They were the fastest boxes on the test bed, posting throughput and latency results more commonly seen in Ethernet switches than in IPS systems.
IPS usability is a mixed bag
The most important feature of an intrusion-prevention system is whether it does the job you bought it for. That said, it also needs to be usable, in the sense that it supports the network manager in the day-to-day tasks that go hand in hand with using an IPS in an enterprise setting. After shaking out the IPS products for performance, we took them back into the test lab to look at them from another angle entirely: usability.
The clear winner in terms of usability was 3Com TippingPoint's Security Management System, which drives the TippingPoint 5000E and turned in above-average performance on every task we set. Honorable mentions go to NFR Security's Sentivist Management Platform, which controls its Sentivist boxes, and to Top Layer Networks' IPS 5500; both would meet the needs of anyone managing an IPS with a minimum of wasted effort.
For a full discussion of this usability testing, see the accompanying usability review.
Of course, performance isn't the only criterion for these products. The 5000E leaked a small amount of exploit traffic, not only in initial tests but also in two subsequent retests. TippingPoint issued a patch for this behavior two weeks ago. The 5000E also disabled logging in some tests. That's not necessarily a bad thing (indeed, TippingPoint says customers prefer a no-logging option to a complete shutdown), but other devices in the same test kept logging at slower rates.
The IPS 5500 scored well in tests involving TCP traffic, but it too leaked small amounts of exploit traffic. Top Layer attributed this to its own misconfiguration of the firewall policy for this test.
IPS systems from Demarc and NFR Security use sensor hardware from the same third-party supplier, Bivio Networks. The relatively modest performance results from both IPS systems in some tests might be caused by configuration settings on the sensor hardware, something both vendors discovered only after we'd wrapped up testing. On the plus side, both IPS systems stopped all attacks in our final round of testing.
Ambiron TrustWave and Demarc built their ipAngel-2500 and Sentarus IPS software around the open source Snort engine. The performance differences between them can be attributed to software and driver decisions made by the respective vendors.
Fortinet's FortiGate-3600 posted decent results in baseline tests involving benign traffic only, but forwarding rates fell and response times rose as we ratcheted up attack rates.
We should note that this is a test of IPS performance, not security. We didn't measure how many different exploits an IPS can repel, or how well. And we're not implying that just because an IPS is fast, it's secure.
Even so, security issues kept cropping up. As noted, no device passed initial testing without missing at least one exploit, disabling logging and/or going into a "fail open" mode where all traffic (good and bad) gets forwarded.
This has serious implications for IPS systems on production networks. Retesting isn't possible in the real world; attackers don't make appointments. Also, we used a laughably small number of exploits - just three in all - and offered them at rates never exceeding 16% of each system's maximum packet-per-second capacity. That we saw security issues at all came as a surprise.
The three exploits are all well known: SQL Slammer, the Witty worm and a Cisco malformed SNMP vulnerability. We chose these three because they're all widely publicized, they've been around awhile, and they're based on User Datagram Protocol (UDP), allowing us detailed control over attack rates using the Spirent ThreatEx vulnerability assessment tool.
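Because UDP has no handshake or built-in rate control, a traffic generator can pace packets however it likes. The sketch below illustrates why that makes attack rates easy to dial in; it is a generic example with a dummy payload and a documentation-range address, not the ThreatEx tool or a real exploit:

    import socket
    import time

    def send_udp_at_rate(payload: bytes, dst: tuple, pps: int, duration: float) -> None:
        """Send a fixed UDP payload at a target packets-per-second rate.

        Unlike TCP, nothing in UDP throttles the sender, so the test rate
        is entirely under the generator's control.
        """
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        interval = 1.0 / pps                          # spacing between packets
        deadline = time.perf_counter() + duration
        next_send = time.perf_counter()
        while time.perf_counter() < deadline:
            sock.sendto(payload, dst)
            next_send += interval
            sleep_for = next_send - time.perf_counter()
            if sleep_for > 0:
                time.sleep(sleep_for)                 # hold the pace steady

    # Example: a 376-byte dummy payload (Slammer's payload size) sent to a lab
    # address at 1,000 packets per second. The address and rate are placeholders.
    send_udp_at_rate(b"\x00" * 376, ("192.0.2.10", 1434), pps=1000, duration=5.0)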
The IPS sensors we tested sit in line between other network devices, bridging and monitoring traffic between two or more Gigabit Ethernet ports. Given their inline placement, the ability to monitor traffic at high rates - even as fast as line rate - is critical. Accordingly, we designed our tests to determine throughput, latency and HTTP response time. We used TCP and UDP test traffic, and found significant differences in the ways IPS systems handle the two protocols (see How we tested IPS systems).
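For a rough sense of what the HTTP response-time metric captures, the measurement amounts to timing a complete GET exchange through the device under test. This is a simplified illustration, not Spirent's methodology, and the URL is a placeholder for a lab web server on the far side of the inline IPS:

    import time
    import urllib.request

    def http_response_time(url: str) -> float:
        """Return seconds from sending an HTTP GET to reading the full response body."""
        start = time.perf_counter()
        with urllib.request.urlopen(url, timeout=10) as resp:
            resp.read()        # drain the body so the timing covers the whole transfer
        return time.perf_counter() - start

    # Placeholder lab server address, not one used in the tests.
    print(f"{http_response_time('http://192.0.2.20/index.html'):.4f} s")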
Vendors submitted IPS systems with varying port densities. FortiGate-3600 has a single pair of Gigabit Ethernet interfaces, while IPS 5500 has two pairs. The IPS systems from Ambiron TrustWave, Demarc, NFR and TippingPoint offer four port-pairs. To ensure apples-to-apples comparisons across all the products, we tested three times, using one, two and four pairs of ports where we could.
One port-pair
Our tests using a single port-pair were the only ones in which all vendors could participate.
In baseline TCP performance tests (benign traffic only, no attacks), the Demarc, TippingPoint and Top Layer devices moved traffic at 959Mbps, near the maximum possible rate of around 965Mbps (see link to The IPS torture test, scenario 1, below). With 1,500 users simultaneously contending for bandwidth and TCP's built-in rate control ensuring fairness among users, this is about as close to line rate as it gets with TCP traffic.
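A back-of-the-envelope calculation shows where that ceiling comes from: every full-size TCP segment carries 1,460 bytes of payload wrapped in Ethernet, IP and TCP headers, so even a perfect device tops out in the mid-900Mbps range. The accounting below is our own rough sketch - the exact ceiling depends on whether the preamble, inter-frame gap and frame check sequence are counted - but one common accounting lands near the 965Mbps figure cited above:

    # Rough ceiling on TCP goodput over Gigabit Ethernet with full-size frames.
    LINK_RATE_MBPS = 1000
    PAYLOAD = 1460            # MSS: 1,500-byte IP packet minus 20B IP + 20B TCP headers
    HEADERS = 14 + 20 + 20    # Ethernet + IP + TCP headers
    FCS, PREAMBLE, IFG = 4, 8, 12

    frame = PAYLOAD + HEADERS                      # 1,514 bytes as captured
    wire = frame + FCS + PREAMBLE + IFG            # 1,538 bytes of wire time per frame

    print(f"payload / frame bytes: {LINK_RATE_MBPS * PAYLOAD / frame:.0f} Mbps")   # ~964 Mbps
    print(f"payload / wire bytes : {LINK_RATE_MBPS * PAYLOAD / wire:.0f} Mbps")    # ~949 Mbps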
It was a very different story when we offered exploit traffic, with most systems slowing down sharply. The lone exception was the ipAngel, which moved traffic under heavy attack at rates equal to or better than its baseline rates. All the others slowed substantially under heavy attack - and worse, some forwarded exploit traffic.
The IPS 5500 leaked a small amount of Witty worm traffic at all three attack rates we used - 1%, 4% and 16% of its TCP packet-per-second rate. The vendor blamed a misconfiguration of its firewall policy (vendors configured device security for this project). With its default firewall policy enabled, Top Layer says its device would have blocked exploits targeting any port not covered by the vendor's Witty signature.
The TippingPoint 5000E leaked a small amount of malformed Cisco SNMP traffic when it was offered at 4% and 16% of the device's maximum forwarding rate, even after we applied a second and third signature update.