Results indicate high performance doesn't always mean high security.
High-end intrusion-prevention systems move traffic at multigigabit rates and keep exploits out of the enterprise, but they might not do both at the same time. In lab tests of top-of-the-line IPS systems from six vendors, we encountered numerous trade-offs between performance and security.
Several devices we tested offered line-rate throughput and impressively low latency, but also leaked exploit traffic at these high rates. With other devices, we saw rates drop to zero as IPS systems struggled to fend off attacks.
In our initial round of testing, all IPS systems missed at least one variant of an exploit we expected they'd easily catch - one that causes vulnerable Cisco routers and switches to reboot. While most vendors plugged the hole by our second or third rounds of testing (and 3Com's TippingPoint 5000E spotted all but the most obscure version the first time out), we were surprised that so many vendors missed this simple, well-publicized and potentially devastating attack (see Can anyone stop this exploit?).
These issues make it difficult to pick a winner this time around (see link to NetResults graphic, below). If high performance is the most important criterion in choosing an IPS, the TippingPoint 5000E and Top Layer Networks' IPS 5500 are the clear leaders. They were the fastest boxes on the test bed, posting throughput and latency results more commonly seen in Ethernet switches than in IPS systems.
IPS usability is a mixed bag
The most important feature of an intrusion-prevention system is whether it does the job you bought it for. That said, it also needs to be usable, in the sense that it supports the network manager in the day-to-day tasks that go hand in hand with using an IPS in an enterprise setting. After shaking out the IPS products for performance, we took them back into the test lab to look at them from another angle entirely: usability.
The clear winner in terms of usability was 3Com TippingPoint's Security Management System, used to drive the TippingPoint 5000E, which turned in above-average performance on every task we set. Honorable mentions go to NFR Security's Sentivist Management Platform, used to control its Sentivist boxes, and Top Layer Networks' IPS 5500 - both products that would easily meet the needs of anyone managing an IPS, with a minimum of wasted effort.
For a full discussion of this usability testing, see >>.
Of course, performance isn't the only criterion for these products. The 5000E leaked a small amount of exploit traffic, not only in initial tests but also in two subsequent retests. TippingPoint issued a patch for this behavior two weeks ago. The 5000E also disabled logging in some tests. That's not necessarily a bad thing (indeed, TippingPoint says customers prefer a no-logging option to a complete shutdown), but other devices in the same test kept logging at slower rates.
The IPS 5500 scored well in tests involving TCP traffic, but it too leaked small amounts of exploit traffic. Top Layer attributed this to its having misconfigured the firewall policy for this test.
IPS systems from Demarc and NFR Security use sensor hardware from the same third-party supplier, Bivio Networks. The relatively modest performance results from both IPS systems in some tests might be caused by configuration settings on the sensor hardware, something both vendors discovered only after we'd wrapped up testing. On the plus side, both IPS systems stopped all attacks in our final round of testing.
Ambiron TrustWave and Demarc built their ipAngel-2500 and Sentarus IPS software around the open source Snort engine. The performance differences between them can be attributed to software and driver decisions made by the respective vendors.
Fortinet's FortiGate-3600 posted decent results in baseline tests involving benign traffic only, but forwarding rates fell and response times rose as we ratcheted up attack rates.
We should note that this is a test of IPS performance, not security. We didn't measure how many different exploits an IPS can repel, or how well. And we're not implying that just because an IPS is fast, it's secure.
Even so, security issues kept cropping up. As noted, no device passed initial testing without missing at least one exploit, disabling logging and/or going into a "fail open" mode where all traffic (good and bad) gets forwarded.
This has serious implications for IPS systems on production networks. Retesting isn't possible in the real world; attackers don't make appointments. Also, we used a laughably small number of exploits - just three in all - and offered them at rates never exceeding 16% of each system's maximum packet-per-second capacity. That we saw security issues at all came as a surprise.
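The attack loads described above scale with each device's capacity rather than being fixed rates. A minimal sketch of that arithmetic follows; the 100,000 packet-per-second capacity is a made-up example, not a figure from any device in the test.

```python
# Hypothetical illustration of how attack loads scale with a device's maximum
# packet-per-second capacity. The 100,000 pps capacity is an assumed example.
def attack_rates(max_pps, fractions=(0.01, 0.04, 0.16)):
    """Return the attack load, in packets per second, for each test fraction."""
    return {f: round(max_pps * f) for f in fractions}

rates = attack_rates(100_000)
# For a 100,000 pps device, the 1%, 4% and 16% loads are 1,000, 4,000
# and 16,000 pps - small fractions of what the device claims to handle.
```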
The three exploits are all well known: SQL Slammer, the Witty worm and a Cisco malformed SNMP vulnerability. We chose these three because they're all widely publicized, they've been around awhile, and they're based on User Datagram Protocol (UDP), allowing us detailed control over attack rates using the Spirent ThreatEx vulnerability assessment tool.
The IPS sensors we tested sit in line between other network devices, bridging and monitoring traffic between two or more Gigabit Ethernet ports. Given their inline placement, the ability to monitor traffic at high rates - even as fast as line rate - is critical. Accordingly, we designed our tests to determine throughput, latency and HTTP response time. We used TCP and UDP test traffic, and found significant differences in the ways IPS systems handle the two protocols (see How we tested IPS systems).
Vendors submitted IPS systems with varying port densities. FortiGate-3600 has a single pair of Gigabit Ethernet interfaces, while IPS 5500 has two pairs. The IPS systems from Ambiron TrustWave, Demarc, NFR and TippingPoint offer four port-pairs. To ensure apples-to-apples comparisons across all the products, we tested three times, using one, two and four pairs of ports where we could.
Our tests of single port-pairs are the only ones where all vendors were able to participate.
In baseline TCP performance tests (benign traffic only, no attacks), the Demarc, TippingPoint and Top Layer devices moved traffic at 959Mbps, near the maximum possible rate of around 965Mbps (see link to The IPS torture test, scenario 1, below). With 1,500 users simultaneously contending for bandwidth and TCP's built-in rate control ensuring fairness among users, this is about as close to line rate as it gets with TCP traffic.
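The ceiling of roughly 965Mbps exists because Ethernet framing and TCP/IP headers consume part of every frame. The sketch below works through that overhead for standard 1,518-byte frames; the exact ceiling depends on which protocol layers' bytes a test tool counts, so this approximates rather than reproduces the figure above.

```python
# Rough sketch of why TCP traffic tops out well below 1Gbps on Gigabit Ethernet.
# Assumes maximum-size 1,518-byte frames with no IP/TCP options; the precise
# ceiling depends on which layers the measurement counts.
LINE_RATE = 1_000_000_000      # bits per second on the wire
FRAME = 1518                   # max Ethernet frame: header + payload + FCS
WIRE_OVERHEAD = 20             # preamble (8 bytes) + inter-frame gap (12 bytes)
ETH_HDR_FCS = 18               # Ethernet header (14) + frame check sequence (4)
IP_TCP_HDRS = 40               # IPv4 (20) + TCP (20) headers

wire_bytes = FRAME + WIRE_OVERHEAD           # 1,538 bytes occupied per frame
payload = FRAME - ETH_HDR_FCS - IP_TCP_HDRS  # 1,460 bytes of TCP payload

goodput_mbps = LINE_RATE * payload / wire_bytes / 1_000_000
ip_rate_mbps = LINE_RATE * (FRAME - ETH_HDR_FCS) / wire_bytes / 1_000_000
# TCP goodput works out to about 949Mbps; counting IP-layer bytes instead
# gives about 975Mbps, bracketing the ~965Mbps ceiling cited above.
```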
It was a very different story when we offered exploit traffic, with most systems slowing down sharply. The lone exception is ipAngel, which moved traffic at rates under heavy attack that were equal to or better than its rates in the baseline test. All others slowed substantially under heavy attack - and worse, some forwarded exploit traffic.
The IPS 5500 leaked a small amount of Witty worm traffic at all three attack rates we used - 1%, 4% and 16% of its TCP packet-per-second rate. The vendor blamed a misconfiguration of its firewall policy (vendors configured device security for this project). With its default firewall policy enabled, Top Layer says its device would have blocked exploits targeting any port not covered by the vendor's Witty signature.
The TippingPoint 5000E leaked a small amount of malformed Cisco SNMP traffic when it was offered at 4% and 16% of the device's maximum forwarding rate, even after we applied a second and third signature update.
Further, with attacks at the 16% rate, the TippingPoint device disabled all alerts (it continued to block exploits but didn't log anything) for 10 minutes. TippingPoint calls this a load-mitigation feature and says customers overwhelmingly prefer this setting to having the device shut down if it becomes overloaded.
We understand that device behavior during overload is ultimately a policy decision. For enterprises where high availability trumps security, the ability to continue forwarding packets is essential - even if it means a temporary shutdown of IPS monitoring. More-paranoid sites might block all traffic in response to an overload. In this test, the TippingPoint and NFR devices (and possibly others) explicitly give users a choice of behaviors, a desirable feature in our view.
In terms of HTTP response time, NFR's Sentivist Smart Sensor delivered Web pages the fastest, at an average of about 144msec for an 11KB object. This is the average time it took for each of 1,500 users to request and retrieve a Web page with a single 11KB object, with no attack traffic present. The NFR sensor also flew through the 1% and 4% attack tests, with response times lower than those for all other vendors' baseline measurements.
Something went horribly wrong for the Sentivist device in the 16% attack test, however, with response times registering nearly 80 times higher than in the baseline test. It could be simply an anomalous result; response time didn't increase nearly as much in the two and four port-pair tests on the Sentivist device. Further, the device's latency spiked only when hit with exploit traffic at more than 60Mbps, suggesting a serious and dedicated denial-of-service (DoS) attack was underway. After we concluded testing, NFR said it had identified and corrected a CPU oversubscription issue, but we did not verify this.
Among other devices, ipAngel's response time degraded the least as we ratcheted up attack rates. This isn't too surprising, considering the powerful sensor hardware the vendor supplied for testing: the ipAngel sensor had eight dual-core Opteron CPUs.
It's important to note that all results presented here are averages over the three-minute steady-state phase of our tests. These averages are valid, but they don't tell the whole story. As dramatic as the reduction in the average performance was in some tests, actual results over time show an even sharper drop in response to attacks (see link to TCP forwarding rates under attack, below).
All IPS systems slowed traffic to some extent under our heaviest attack, but the degradation differed in terms of degree and duration. ipAngel's rates degraded the least, although the rate at the end of the test for this product was 824Mbps, more than 100Mbps lower than the system's 929Mbps rate at the beginning of the test. Top Layer's IPS 5500 did the best job of bouncing back to its original rate after an attack, but even so it momentarily slowed down traffic by more than 550Mbps, to less than 400Mbps. Whether users would notice this slowdown depends on the application. Something involving sustained high-speed data transfer (for example, FTP) would experience a brief slowdown.
The TippingPoint 5000E's rates dipped to 10Mbps under attack, down from around 400Mbps, and it's even worse for the others, with rates going down all the way to zero. The Demarc and NFR numbers suggest an overload, while the Fortinet device appears to recover, then falter again.
Response time - the interval between a client requesting and receiving a Web page - is only a few hundred milliseconds in baseline tests. Under our heaviest attack, however, many IPS systems introduced delays running well into the seconds. Ambiron TrustWave and Top Layer IPS systems did the best job of maintaining low and consistent response time under attack.
These results show that IPS devices have the potential to cause significant delays in network performance, way out of proportion to the amount of malicious traffic in the network. In effect, an IPS could be the instrument that delivers a self-inflicted DoS attack, where a small amount of attack traffic can make a gigabit network painfully slow for Web traffic and completely unusable for file and print service.
After testing concluded, Demarc said new performance parameters in the Bivio sensor hardware it uses would have dramatically improved its numbers. Unfortunately, time constraints prevented us from verifying that.
We also measured UDP throughput. We consider the UDP data less important than the TCP data, because UDP typically is a much smaller percentage of traffic on the Internet side of production networks, but these tests still are a useful way to describe the absolute limits of device forwarding and delay. If you plan to put the IPS deep in your network, UDP traffic from sources such as backups or storage servers could form the bulk of your traffic.
Most devices moved midsize and large UDP packets at or near the theoretical line rate. The two exceptions were FortiGate-3600, which moved midsize packets at about 50% of line rate, and ipAngel, which moved UDP traffic (for all packet lengths) at far lower rates than it moved TCP traffic. Ambiron TrustWave says its sensor used betas of interface device drivers and later versions show higher throughput and lower latency with UDP; we did not verify this.
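"Line rate" in these UDP tests varies sharply with packet size, because Ethernet's fixed per-frame overhead weighs most heavily on short packets. As a sketch, using the common RFC 2544 frame sizes:

```python
# Theoretical Gigabit Ethernet line rate in packets per second by frame size -
# the ceiling the UDP throughput tests measure against. The 20 bytes per frame
# of preamble and inter-frame gap are fixed by Ethernet regardless of size.
def line_rate_pps(frame_bytes, link_bps=1_000_000_000):
    """Maximum frames per second for a given Ethernet frame size on one link."""
    wire_bits = (frame_bytes + 20) * 8   # +8B preamble, +12B inter-frame gap
    return link_bps // wire_bits

for size in (64, 512, 1518):             # short, midsize and large frames
    print(size, line_rate_pps(size))
```

The spread is dramatic: a device that keeps up with large frames at line rate must forward roughly 18 times as many packets per second to do the same with 64-byte frames, which is why short-packet tests are the harshest.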
As in the TCP tests, latency in the UDP testing also spiked sharply when we subjected most IPS systems to attack, with hundredfold (or more) increases in delay not uncommon. The only exception was ipAngel, which delayed packets by roughly the same amount in the attack tests as in the baseline test. This could be attributable to the ipAngel's UDP throughput, which is much lower than that of the other devices in this test.
We gave all vendors an opportunity to review and respond to test results before publication. TippingPoint found in internal testing that latency would have been far lower had we measured at 95%, not 100% of the throughput rate. Top Layer asked for a smaller reduction in load (perhaps to 99.9%) and attributed its increased UDP latency to clocking differences between our test tools and its IPS.
While lower loads probably would have produced lower delays, we respectfully disagree with both vendors' suggestions, on two grounds. First, as described in RFC 2544 - the industry standard for network device performance benchmarking - latency is measured at the throughput rate and not at X percent of the throughput rate, where X is some number that produces "good" latency.
Second, neither vendor's device bears a sticker warning customers that rates should never exceed X percent of line rate. If vendors want to claim high throughput, they also should measure latency at the throughput level.
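The RFC 2544 procedure our first ground rests on can be sketched as a simple search: throughput is the highest offered load the device forwards with zero loss, found by narrowing between a passing and a failing rate, and latency is then measured at exactly that rate. The simulated 820Mbps capacity below is an arbitrary stand-in for a real device, not a result from this test.

```python
# Sketch of an RFC 2544-style throughput search. Throughput is the highest
# offered load forwarded with zero frame loss; latency is then measured at that
# rate, not at some reduced percentage of it. The 820Mbps capacity is assumed.
def forwards_without_loss(rate_mbps, capacity_mbps=820.0):
    """Stand-in for one trial run: True if the device drops no frames."""
    return rate_mbps <= capacity_mbps

def rfc2544_throughput(line_rate_mbps=1000.0, resolution=0.5):
    lo, hi = 0.0, line_rate_mbps
    while hi - lo > resolution:          # narrow until within the resolution
        mid = (lo + hi) / 2
        if forwards_without_loss(mid):
            lo = mid                     # no loss: try a higher rate
        else:
            hi = mid                     # loss seen: back off
    return lo                            # highest zero-loss rate found

throughput = rfc2544_throughput()
# Latency measurements would then be taken with traffic offered at
# `throughput` - the full rate the vendor gets to claim, nothing less.
```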
In baseline TCP performance tests, the IPS 5500 was the fastest device, with the TippingPoint 5000E not far behind (see link to The IPS torture test, scenario 2, below). The Ambiron TrustWave, Demarc and NFR devices all moved TCP traffic at rates much further below the theoretical maximum than in the single port-pair tests.
The Top Layer and TippingPoint devices also produced the highest rates in the attack tests, but results were problematic. The TippingPoint 5000E forwarded a small amount of Cisco exploit traffic in all three of our attack tests, and disabled logging in our 4% and 16% attack tests. The Top Layer device forwarded small amounts of Witty worm traffic in all three attack tests. The issues for both vendors were the same as in the single port-pair tests: TippingPoint had a problem with the Cisco signature, and Top Layer had a problem with its firewall configuration.
The Sentarus sensor and ipAngel were the fastest IPS systems among devices that did not forward any exploit traffic. The Sentarus came out on top when we offered attacks at 1% of the TCP rate, moving traffic at close to the baseline speed. The ipAngel was quickest in the 4% and 16% attack tests, though rates were about 10% and 25% lower, respectively, than in the baseline test.
HTTP response times also shot up dramatically under attack, though in some cases the delays were lower with two port-pairs than with one. This could be attributed to device architecture, in which IPS sensors use dedicated CPUs and/or network processors for each port-pair.
In the UDP tests, the TippingPoint and Top Layer IPS systems were again the fastest, moving midsize and large frames at line rate. The Demarc and NFR devices were about half that fast: Both posted identical numbers, possibly because both use the same Bivio sensor hardware.
UDP latency was higher under attack than in the baseline tests, especially for Sentarus in the 16% attack test. However, excluding that one result, latency generally rose less with two port-pairs under attack than with one - again, possibly caused by distributed processing designs.
With four pairs of Gigabit Ethernet interfaces (thus, rates theoretically capable of rising as high as 8Gbps), this was the acid test for IPS performance.
The TippingPoint 5000E was hands-down the fastest IPS in our TCP baseline tests (see link to The IPS torture test, scenario 3, below). It moved a mix of applications at 3.434Gbps, not far from the test bed's theoretical top rate of 3.8Gbps, and about twice as fast as the next quickest sensor, ipAngel.
In our attack tests, the TippingPoint 5000E again leaked small amounts of Cisco exploit traffic and also disabled logging in the 16% attack test.
Of devices with no security issues, ipAngel was fastest. As in tests with two port-pairs, ipAngel's TCP forwarding rates degraded as we ratcheted up attack rates, but on the other hand it did not leak any exploit traffic.
Most of the devices increased HTTP response time under attack, especially in the 16% attack test. In the worst case, response time through Sentarus spiked from 166msec in the baseline test to more than 15 seconds in the 16% attack test. That may have been attributable to a tuning parameter in the Bivio sensor, according to Demarc. Unfortunately, we learned about this parameter only after testing concluded.
TippingPoint's IPS was also the fastest in our UDP tests. In baseline tests it moved large packets at 4.454Gbps, the fastest single rate in our tests. It was also the top performer in baseline tests of short and medium-length packets.
Latency skyrocketed for multiple devices once we combined benign and attack UDP traffic. For example, the TippingPoint 5000E delayed benign UDP traffic by nearly 30 seconds in a test with attacks at 1% of its capacity, and the device also disabled logging in all three of our attack tests. The other products also slowed traffic by huge margins over the baseline test. The IPS with the best UDP latency under attack was Sentivist, not just with four port-pairs but indeed in all tests.
If the test results say anything, it's that performance and security are two very different goals - and, at least with these devices, the two often bear no sensible relationship to one another.
These tests turned up two different kinds of IPS systems: devices that move traffic at very high rates, and devices that block attacks but aren't the speediest performers. Picking the right IPS comes down to finding the right balance between security and performance.
Newman is president of Network Test, an independent engineering services firm in Westlake Village, Calif. He can be reached at email@example.com.
Newman and Joel Snyder are members of the Network World Lab Alliance, a cooperative of the premier reviewers in the network industry, each bringing to bear years of practical experience on every review. For more Lab Alliance information, including what it takes to become a member, go to www.networkworld.com/alliance.
Thanks to all
Network World gratefully acknowledges the vendors that supported this project. Spirent Communications supplied its Spirent ThreatEx, Avalanche, Reflector, SmartBits and AX/4000 test tools, and engineer Chuck McAuley assisted with ThreatEx configuration. Apcon supplied an Intellapatch virtual patch panel that tied together the test bed. And Red Hat supplied its Red Hat Enterprise Linux operating system, used on test-bed management servers.
Learn more about this topic
Three IPS products pass security evaluation tests (06/26/06)
Review of Arxceo's Ally IP IPS shows strong features, reduced manageability (03/06/06)
IPS: 6 hot technologies for 2006