The crew at the Information Technology Laboratory (ITL) of the National Institute of Standards and Technology (NIST) has recently published another valuable report for everyone interested in network security. “An Overview of Issues in Testing Intrusion Detection Systems” is available as a PDF from:

https://csrc.nist.gov/publications/nistir/nistir-7007.pdf

In the July ITL Bulletin (which is available free by e-mail), editor Elizabeth Lennon summarizes the key findings of the report. Some of the highlights from her summary (paraphrased unless quoted):

Authors:

The report was written by Peter Mell and Vincent Hu of NIST's Information Technology Laboratory, and Richard Lippmann, Josh Haines, and Marc Zissman of the Massachusetts Institute of Technology Lincoln Laboratory.

Purpose:

“The results of quantitative evaluations of IDS performance and effectiveness would benefit many potential customers. Acquisition managers need this information to improve the process of system selection, which is often based only on the claims of the vendors and limited-scope reviews in trade magazines. Security analysts who review the output of IDSs would like to know the likelihood that alerts will result when particular kinds of attacks are initiated.
Finally, R&D program managers need to understand the strengths and weaknesses of currently available systems so that they can effectively focus research efforts on improving systems and measure their progress.”

Measurable IDS characteristics (for clarity in this abbreviated list, I have renamed some of these from the terms used by the authors):

* Coverage: proportion of the known attacks recognized by the IDS “under ideal conditions.”

* False alarms (false positives): how often the IDS incorrectly claims a normal transaction is an attack.

* Detection rate: how often the IDS correctly identifies an attack.

* Resistance to attacks directed at the IDS itself.

* Capacity for high-bandwidth applications.

* Correlation: “This measurement demonstrates how well an IDS correlates attack events. These events may be gathered from IDSs, routers, firewalls, application logs, or a wide variety of other devices. One of the primary goals of this correlation is to identify staged penetration attacks. Currently, IDSs have only limited capabilities in this area.”

* Generic identification: how well the IDS identifies attacks that are not included in signature files.

* Identification and classification: “This measurement demonstrates how well an IDS can identify the attack that it has detected by labeling each attack with a common name or vulnerability name or by assigning the attack to a category.”

* Discrimination: ability to distinguish successful penetrations from probes.

* NIDS capacity verification: a network IDS “demands higher-level protocol awareness than other network devices such as switches and routers; it has the ability of inspection into the deeper level of network packets.
Therefore, it is important to measure the ability of a NIDS to capture, process, and perform at the same level of accuracy under a given network load as it does on a quiescent network.”

* Other measurements: “There are other measurements, such as ease of use, ease of maintenance, deployment issues, resource requirements, availability and quality of support, etc. These measurements are not directly related to the IDS performance but may be more significant in many commercial situations.”

The report continues with the following topics:

* IDS testing efforts to date (highly variable in “scope, methodology, and focus”).

* IDS testing issues (difficulties in collecting and analyzing attack scripts, differences between signature-based vs. anomaly-based IDSs, network-based vs. host-based IDSs, and approaches to handling background traffic).

* Recommendations for research methodology.

The research was funded in part by the Defense Advanced Research Projects Agency. Nice to see our tax dollars at work.
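As a rough illustration of the detection-rate and false-alarm measurements defined above, here is a minimal Python sketch of how a test run might be scored. The event data and function name are hypothetical examples of mine, not anything specified in the NIST report:

```python
# Minimal sketch of scoring an IDS test run (hypothetical data).
# Each test event is a pair (is_attack, ids_alerted):
# ground truth on the left, the IDS's verdict on the right.

def score_ids(events):
    """Return (detection_rate, false_alarm_rate) for a list of
    (is_attack, ids_alerted) pairs."""
    attacks = [alerted for is_attack, alerted in events if is_attack]
    normals = [alerted for is_attack, alerted in events if not is_attack]
    detection_rate = sum(attacks) / len(attacks)    # attacks correctly flagged
    false_alarm_rate = sum(normals) / len(normals)  # normal traffic wrongly flagged
    return detection_rate, false_alarm_rate

# Hypothetical run: 4 attacks (3 detected), 6 normal events (1 false alarm).
events = [(True, True), (True, True), (True, True), (True, False),
          (False, False), (False, True), (False, False),
          (False, False), (False, False), (False, False)]

dr, far = score_ids(events)
print(f"detection rate: {dr:.2f}")     # 0.75
print(f"false-alarm rate: {far:.2f}")  # 0.17
```

Note that the two numbers trade off against each other: tuning an IDS to alert more aggressively raises the detection rate but usually raises the false-alarm rate as well, which is why the report treats them as separate measurements.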