How we tested the various intrusion-prevention systems.

Our "In the Wild" evaluation of network intrusion-prevention systems took place on a live, distributed network connecting three of our Test Alliance labs. The goal was to mix elements of a multi-site enterprise network with the inherent randomness of the Internet to see how these new IPS devices would behave.

We started with our "sacrificial lambs": HP ProLiant DL330 servers running unpatched versions of Unix and Windows, and a Cisco router running V11.3 of IOS. These were put into data centers in Los Angeles (LAX) and San Jose (SJC). Each set of sacrificial lambs was protected by an in-line IPS, coexisting with other traffic in the same data centers. Because the IPS devices were installed in-line, we had to test them serially. Starting in September 2003, we evaluated one IPS device per week to see how each behaved while the Internet bucked and gyrated around us.

Because this test took a full five months to complete, several of the vendors have upgraded their products since we tested them.

For management, each vendor was invited to send its management system to our network operations center in Tucson, Ariz. In some cases, vendors sent a full-blown management server. Other times, we got nothing more than a URL with instructions to download a client. In general, we discovered that multi-site and multi-unit management is not as advanced in the IPS world as it is in the intrusion-detection system and firewall business.

In cases where out-of-band management was available, we hooked the management interfaces on the LAX and SJC sensors to an IP Security VPN we built between all three sites. Where in-band management was the only possibility, we simply drove these devices over the Internet.
Only the management systems were given Internet access (through a NetScreen Technologies firewall) so they could download signature updates and patches as necessary. This turned out to be a problem with some products that expected the IPS device itself to be Internet-accessible, a poor architectural choice on the vendor's part.

We established a task list that a typical network professional likely would have when deploying an IPS. The task list started with basic configuration tasks, including integrating the devices into the existing network. Rather than build an artificial lab environment, we dumped these boxes into a live data center, sometimes with disastrous results. In two cases, the devices managed to take down part of the data centers themselves.

We started with the most basic configuration for each device and management station: setting up basic multi-site management, logging servers and digital certificates for encrypted communications, for example.

Next, we dove into setting up the IPS functions of the devices. We wanted to use as many of their capabilities as possible to protect our systems. We tried the network discovery features of each device. If a scan was possible, we ran it. If statistics on performance were available, we used them. If signatures were there, we turned them on (unless the vendor provided a recommended baseline configuration, which several did). Additionally, we had our own whitelist and blacklist of addresses and services that we tried to put into each device. And if we could turn on blocking, we did so.

The best IPS devices came with a methodology attached, usually calling for days or even weeks of testing, analysis and configuration before turning the products on in full blocking mode.
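The policy work described above — a whitelist, a blacklist and a switch for blocking mode — can be sketched in the abstract. This is a hypothetical model for illustration only, not any vendor's actual configuration syntax; the network ranges and verdict names are invented:

```python
from ipaddress import ip_address, ip_network

# Hypothetical policy lists (example values, not from our test network).
WHITELIST = [ip_network("10.1.0.0/16")]    # trusted addresses, always allowed
BLACKLIST = [ip_network("192.0.2.0/24")]   # known-bad sources, always dropped

def verdict(src: str, blocking_enabled: bool = True) -> str:
    """Return the action for a source address: blacklist wins over
    whitelist; unknown traffic falls through to signature inspection."""
    addr = ip_address(src)
    if any(addr in net for net in BLACKLIST):
        # With blocking off, an IPS typically only raises an alert.
        return "block" if blocking_enabled else "alert-only"
    if any(addr in net for net in WHITELIST):
        return "allow"
    return "inspect"
```

The ordering is the point of the sketch: a device run in detection-only mode turns every would-be block into an alert, which is why the vendors' methodologies call for a tuning period before full blocking is switched on.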
Because of the short window to evaluate each product, we occasionally had to short-circuit the vendors' own recommendations on implementation methodology by bludgeoning the devices into place.

As the inevitable attacks showed up, we looked at alerting and monitoring facilities. We wanted to see which devices gave us data, and at what level of detail. Finally, at the end of each product's week, we evaluated reporting and aggregation facilities.