
Why don't risk management programs work?  

By John Dix, Network World
May 20, 2013 12:01 AM ET

Network World - When the moderator of a panel discussion at the recent RSA conference asked the audience how many thought their risk management programs were successful, only a handful raised their hands. So Network World Editor in Chief John Dix asked two of the experts on that panel to hash out in an email exchange why these programs don't tend to work. 

[Image: Risk management. Credit: Jeffrey Smith]

Alexander Hutton is director of operations risk and governance at a financial services firm (that he can't name) in the Greater Salt Lake City area, and Jack Jones is principal and Co-Founder of CXOWARE, Inc., a SaaS company that specializes in risk analysis and risk management.

[ALSO: Why risk management fails in IT]

Jones: Risk management programs don't work because, in large part, our profession doesn't understand risk. And without understanding the problem we're trying to manage, we're pretty much guaranteed to fail. The evidence I would submit includes:

* Inconsistent definitions of risk. Some practitioners seem to think risk equates to outcome uncertainty (positive or negative), while others believe it's about the frequency and magnitude of loss. Two fundamentally different views. And although I've heard the arguments for risk = uncertainty, I have yet to see a practical application of that theory to information security. Besides, whenever I've spoken with the stakeholders who sign my paychecks, what they care about is the second definition. They don't see the point in the first definition because in their world the "upside" part of the equation is called "opportunity", not "positive risk". (A back-of-the-envelope sketch of the frequency-and-magnitude view follows this list.)

* Inconsistent use of terminology. This relates, in part, to the previous point. If we don't understand the fundamental problem we're trying to manage, then we're unlikely to firmly understand the elements that contribute to the problem or to establish clear definitions for those elements. I regularly see fundamental terms like threat, vulnerability, and risk used inconsistently, and if we can't normalize our terms, there seems to be little chance we'll be able to normalize our data or communicate effectively. After all, if one person's "threat" is another person's "risk" and yet another person's "vulnerability", then we have a big problem. How much credibility would physics have if physicists were inconsistent in their use of fundamental terms like mass, weight and velocity?

* Broken models. The Common Vulnerability Scoring System (CVSS) is my favorite whipping post, but only because it's perhaps the most widely used model; there are others that are just as bad, if not worse. CVSS, for example, claims to evaluate the risk associated with its findings, but nowhere in its measurements or formulas does it consider the likelihood of an attack. Without that variable, it misses the mark entirely. It has other problems too -- complex math performed on ordinal values, variables accounted for in the wrong parts of its equations, etc. At least the folks who oversee CVSS recognize some of its problems and are trying to evolve it over time.
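To make the frequency-and-magnitude view concrete, here is a minimal back-of-the-envelope sketch in Python. The scenario, ranges, and dollar figures are invented purely for illustration and are not drawn from either panelist's methodology or any particular tool; the point is simply that this definition of risk produces a loss figure you can reason about, whereas an ordinal severity score with no likelihood term cannot.

```python
import random

# Hypothetical, illustrative inputs for a single loss scenario.
# Neither the scenario nor the ranges come from the interview; they are assumptions.
EVENTS_PER_YEAR = (0.5, 4.0)        # assumed loss-event frequency range (events per year)
LOSS_PER_EVENT = (20_000, 250_000)  # assumed loss magnitude range per event (USD)

def simulate_annualized_loss(trials: int = 100_000) -> dict:
    """Crude Monte Carlo: annualized loss ~ frequency x magnitude, sampled uniformly."""
    totals = []
    for _ in range(trials):
        frequency = random.uniform(*EVENTS_PER_YEAR)   # how often losses occur
        magnitude = random.uniform(*LOSS_PER_EVENT)    # how much each one costs
        totals.append(frequency * magnitude)
    totals.sort()
    return {
        "median_annualized_loss": totals[len(totals) // 2],
        "90th_percentile_loss": totals[int(len(totals) * 0.9)],
    }

if __name__ == "__main__":
    print(simulate_annualized_loss())
```

Uniform ranges are an oversimplification; a real analysis would use calibrated estimates and skewed loss distributions. But even this toy version makes explicit the two inputs -- how often loss events happen and how much they cost -- that a standalone ordinal score never supplies.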
