As this newsletter hits the wire, I will have just contributed my part to a panel discussion at the RSA Conference on the subject of IT security metrics. Security and compliance metrics are becoming a topic of definite visibility, but - as I imagine our panel, and our audience, will agree - the quantitative measurement of security and compliance is not without its challenges.

Conveniently (for our panel discussion, not for CISOs), an unusual case in point illustrating these challenges presented itself in recent weeks with the Nyxem attack (a.k.a. Kama Sutra and several other handles, in addition to its formal MITRE Common Malware Enumeration designation, CME-24). One of the interesting behaviors of this threat is that each infected system generates a single HTTP request for the URL of a Web site functioning as an infection counter. This would seem to present a rare opportunity to measure - quantitatively - the spread of an infection as it happens, directly indicating the prevalence of the attack.

Of course, the obvious complication with such a metric is that a number of factors can distort it. Those who simply visited the site were added to the total, inflating the count substantially as press coverage of the threat and word of mouth spread. Denial-of-service and other attacks intended to retaliate against the attacker(s) also distorted the total. Conversely, downward distortions include multiple infected hosts connecting from behind a single network address translation (NAT) address, while the variability of DHCP-assigned addresses represents another distorting factor. Last but certainly not least, there is the question of whether we should give credence to the integrity and accuracy of an attacker reporting hits on their own malware.

Do all these factors together mean that this metric was unreliable? Without taking steps to separate useful data from noise, yes.
Distortion in the reported number of hits attributed to CME-24, combined with the destructive nature of its payload, played no small role in the elevated level of concern over this threat in the days leading up to Feb. 3. After Feb. 3, a relative dearth of actual damage reports led many to believe that the threat had been over-hyped and the concern vastly over-inflated compared with the actual result.

Or was it? Who among us remembers, say, Y2K? Was the comparative lack of damage a consequence of too much hype - or was it instead a direct consequence of our deployment of countermeasures, motivated by the threat's measurable destructiveness as well as the prevalence indicated by a hit counter (even one of questionable reliability)? Can we leverage this event to find metrics that measure our success in deploying mitigations that kept this threat from wreaking widespread havoc?

What we may lack in answering this question is consensus on what reliable metrics would be. Here again, however, the Nyxem event offers a constructive - and hopeful - example. In the days immediately following the event, CAIDA, the Cooperative Association for Internet Data Analysis, undertook a study of the Nyxem attack and performed a detailed differentiation of reliable counter hits from those that could safely be assumed unreliable. It posited a number of factors influencing the counter total and arrived at an estimate of between just under half a million and a million infections.

This does not, of course, measure the damage actually done by Nyxem, or the not-inconsiderable cost of deploying countermeasures against the threat. But it does indicate that conscientious analysis, resulting from cooperative effort, can yield useful metrics even from data of questionable provenance. We are still quite a ways, of course, from anything like an actuarial basis for managing IT risks.
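The kind of differentiation CAIDA performed can be sketched in miniature. The toy log below, the field names, and the filtering rules (drop browser-like visitors, count each source address at most once per window to blunt NAT and DHCP effects) are my own illustrative assumptions, not CAIDA's actual methodology - a sketch of the idea, not a reconstruction of their analysis.

```python
from datetime import datetime, timedelta

# Hypothetical counter-log entries: (source_ip, timestamp, user_agent).
# In reality a worm's request and a human visit must be told apart by
# subtler signals; "worm" vs. "Mozilla" here is purely illustrative.
HITS = [
    ("192.0.2.10", datetime(2006, 1, 20, 9, 0), "worm"),
    ("192.0.2.10", datetime(2006, 1, 20, 9, 1), "worm"),      # repeat from same NAT address
    ("192.0.2.11", datetime(2006, 1, 25, 14, 0), "Mozilla"),  # curious visitor, not an infection
    ("192.0.2.12", datetime(2006, 1, 21, 8, 0), "worm"),
]

def estimate_infections(hits, dedupe_window=timedelta(hours=24)):
    """Rough lower-bound infection count: discard browser-like hits,
    then count each source IP at most once per dedupe window."""
    last_seen = {}
    count = 0
    for ip, ts, agent in sorted(hits, key=lambda h: h[1]):
        if agent != "worm":  # filter out likely human visitors
            continue
        prev = last_seen.get(ip)
        if prev is None or ts - prev > dedupe_window:
            count += 1       # first sighting of this address in the window
        last_seen[ip] = ts
    return count

print(estimate_infections(HITS))  # 2: one NAT repeat and one visitor excluded
```

Each rule pushes the estimate in one direction - visitor filtering and deduplication downward, while NAT still hides hosts - which is why CAIDA reported a range rather than a single number.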
But approaches such as these set precedents that show how we can make progress toward metrics capable of supporting an IT security and compliance risk management strategy.