A recurring theme in cybercrime protection is ROI. It usually refers to how much effort — in terms of time and money — a thief must throw at a potential victim compared with the likely value of what could be obtained. Simply put, a thief can justify spending a lot more effort breaking into Fort Knox than stealing a six-year-old’s sweater.
But ROI has an entirely different implication in today’s cybercrime prevention efforts. The most potentially devastating players in cyberattacks are, of course, insiders: employees and contractors who have legitimate access but can exceed that access to either engage in fraud themselves or help someone else who is engaging in fraud.
That second ROI is the one that employees and contractors consider when contemplating engaging in naughtiness. It’s a triple consideration: 1) If I get away with it, how much money could I get? 2) If I get caught, what are the likely consequences? (Getting fired, sued and imprisoned are the most common.) 3) The most daunting one: What are the odds that I will indeed get caught?
When I hear IT people debating the deterrence issue, they invariably focus on the first two and completely ignore the third, which is why they so often make the wrong decisions. IT generally thinks, “Who is going to risk going to prison for $x, especially since they’ll have to surrender that money anyway? On top of that, they’ll be fired and sued, will probably go bankrupt, and will risk divorce and public scorn. It’s unlikely they’ll ever have another good job. Who would be crazy enough to risk all that?” They then conclude that no one in their trusted circle would take such a risk, so additional safeguards don’t happen, at least not in a serious way.
Not only can you not dismiss that third element (“What are the odds that I will indeed get caught?”), but it is probably the most persuasive to the soon-to-be insider accomplice (if not perpetrator). Someone considering a crime — especially one that he or she sees as potentially netting them millions of dollars — is going to be a terrible judge of true risk and will dramatically underestimate the odds of getting caught.
This is what criminologists point to when they argue about the limited impact of deterrence. If a criminal doesn’t think he’ll get caught, he won’t give any serious thought to the consequences of getting caught.
Therefore, the potential criminal is much more focused on the first: How much will I potentially get? By the way, in the same way that they will tend to underestimate the chances of getting caught, they will overestimate how much money this caper will likely yield.
Here’s more bad news: The question “How much will I likely get?” is getting trickier — and in a way that hurts IT. It used to be that the only way to extract value from stolen data was to sell it on the black market. That’s still a big part of the value, but ransom from blackmail is becoming fairly lucrative, too, as Ashley Madison is making clear. (Don’t forget that this is all about perception in the minds of the perpetrators. It doesn’t matter if their perceptions are seriously flawed. They alone will decide whether they will engage in naughtiness.)
Ashley Madison was publicly told to do something — fix its privacy-for-a-price program — or else the data would be published. The company refused, and the attackers made good on their threat. Potentially wayward employees and contractors will interpret that to mean that it is more likely that companies will succumb to such threats and quietly pay the ransom. Whether that’s valid or not doesn’t matter. It’s what tempted employees are likely to conclude.
Employees who are seriously considering these crimes have, in all probability, already decided that they want to do it. They are simply trying to talk themselves into it. They’re looking for an excuse to assume that the payoff will be huge and the risk tiny.
This story, "Inside the head of your company’s cyber traitor," was originally published by Computerworld.