Building an insider threat program that works – Part 1

Lessons learned from the front lines of insider threat risk management


Most public agencies and private enterprises have a large and growing digital footprint, increasing their vulnerability to theft, sabotage and other malicious threats from trusted personnel. This underscores the urgent need for more effective management of both information security and risk from insiders.

The consequences of failure range from failed security audits and interruptions of service or product deliveries to more significant degradation of ongoing operations, monetary losses and lasting reputational damage. In extreme scenarios, there is even the potential for bodily injury and loss of life.


In response, many corporate and government leaders have invested heavily over the past few years in controls designed to mitigate the likelihood and consequences of a damaging insider event. Policy and procedural controls naturally have played a big part in these nascent insider threat programs, but so have a number of emerging technologies grouped under the umbrella of Security Analytics.

Given the high capital and personnel costs of such technology investments, the central question is whether they are having a significant positive impact. Based on my experience, the answer is mostly "No."

Lessons learned

Conversations with public- and private-sector organizations consistently surface the following lessons learned:

Big-data solutions are inadequate on their own

An insider threat program will fail if it is based solely on the outputs of rules-based or machine-learning systems monitoring network activity. Rules-based systems do well at flagging anomalies tied to known behavior, but their outputs tend to be too coarse-grained for the threats they are trying to detect, leading to a proliferation of red flags (most of them false positives) that overwhelms analysts.

As more rules are manually added to manage emerging exceptions, these too become unwieldy over time. Machine-learning systems work better with unstructured data, and they can ease workloads by building libraries of rules on the fly, but they must be constantly trained and retrained by experts and do not perform well on weak signals, in black-swan scenarios or against many of the latest wave of emerging asymmetric threats. As the volume, velocity and variety of threats have increased, the limitations of these data-driven systems have become all too apparent: by the time a threat is detected, the attack often has already occurred.
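To make the false-positive problem concrete, here is a minimal sketch of my own (not any specific product's logic): a single coarse rule, "flag anyone who downloads more than 20 files after hours," fired against a simulated week of sessions. The rule catches the one bad actor, but nearly every alert it raises is a benign power user.

```python
# Illustrative only: a coarse-grained rule flags far more benign events
# than real threats, burying analysts in false positives.
import random

random.seed(7)

def after_hours_rule(event, threshold=20):
    """Flag any after-hours session that downloads more than `threshold` files."""
    return event["after_hours"] and event["files_downloaded"] > threshold

# Simulated sessions: mostly benign users, plus one genuine insider.
events = [{"user": f"u{i}",
           "after_hours": random.random() < 0.3,
           "files_downloaded": random.randint(0, 60),
           "malicious": False}
          for i in range(1000)]
events.append({"user": "insider", "after_hours": True,
               "files_downloaded": 55, "malicious": True})

alerts = [e for e in events if after_hours_rule(e)]
false_positives = [e for e in alerts if not e["malicious"]]
print(f"{len(alerts)} alerts, {len(false_positives)} false positives")
```

The insider is caught, but so are roughly two hundred innocent sessions, which is exactly the analyst-overwhelming behavior described above.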

The analyst reasoning process must be automated

What all big-data systems lack is expert human judgment. To be truly effective, security analytics solutions must "reason" the way the best analysts do—by assembling many pieces of disparate information and fusing them into a composite risk picture. Given the scale and speed of the incoming data being analyzed, insider threat analytics obviously must be able to automate much of this reasoning process, allowing the system to scale to process millions of events continuously as though they’d been individually evaluated by a team of experts.
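One simple way to model that fusion step, purely as an illustration of the principle rather than any vendor's algorithm, is noisy-OR combination: each independent signal raises overall risk, so corroborating weak signals from different sources compound instead of averaging out.

```python
# Sketch (my own illustration): fuse weak signals from disparate sources
# into one composite risk score, the way an analyst weighs corroborating
# evidence.
from dataclasses import dataclass

@dataclass
class Signal:
    source: str    # e.g. "network", "badge", "hr"
    weight: float  # evidential weight in [0, 1]

def composite_risk(signals):
    """Noisy-OR fusion: risk = 1 - product of (1 - weight) over all signals."""
    no_risk = 1.0
    for s in signals:
        no_risk *= (1.0 - s.weight)
    return 1.0 - no_risk

# One moderate signal alone...
print(round(composite_risk([Signal("network", 0.4)]), 2))   # 0.4
# ...versus the same signal corroborated by two weak ones from other sources.
print(round(composite_risk([Signal("network", 0.4),
                            Signal("badge", 0.3),
                            Signal("hr", 0.3)]), 2))         # 0.71
```

Three individually unremarkable signals yield a composite score well above any one of them, which is precisely the "composite risk picture" behavior a good analyst produces by hand.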

Cast a wider net for threat signals

Existing enterprise systems contain a wealth of data that can provide key insights and indicators to enhance the overall signal. That means taking advantage of internal sources like badge scans and HR records, in addition to existing network monitoring and detection tools. Even external and third-party sources – for example, bankruptcy, divorce and arrest records, as well as open-source data from social media and news outlets – should be tapped for evidence that bolsters sometimes weak internal signals.
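Mechanically, widening the net means joining those sources on a common key, typically an employee ID, so weak signals scattered across separate systems land in a single per-person record. A hypothetical sketch (field and source names are my own, not a real schema):

```python
# Hypothetical illustration: merge per-employee records from network
# monitoring, badge scans and HR systems into one fused profile.
network = {"e1001": {"offhours_logins": 4},  "e1002": {"offhours_logins": 0}}
badges  = {"e1001": {"weekend_entries": 3},  "e1002": {"weekend_entries": 1}}
hr      = {"e1001": {"resignation_filed": True},
           "e1002": {"resignation_filed": False}}

def fuse(employee_id, *sources):
    """Collect every source's fields for one employee into a single record."""
    record = {"employee_id": employee_id}
    for source in sources:
        record.update(source.get(employee_id, {}))
    return record

profile = fuse("e1001", network, badges, hr)
print(profile)
```

No single source here is alarming on its own; only the fused record shows off-hours logins, weekend building entries and a pending resignation together.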

Scalability is more than a matter of computing capacity

It is easy to say a modern analytic system must be designed for scale. But for insider threat and network and corporate security programs, the system also must be designed to minimize the number of analysts required to investigate alerts and mitigate risks.

Reducing false positives and focusing analysts on the most important risks, while absorbing an ever-increasing amount of data, requires sophisticated reasoning algorithms. These can fuse a wide variety of data types to provide the context needed to rapidly identify serious threats without generating high volumes of nuisance alerts.

Avoid black boxes and walled gardens

Many of the big data systems being deployed today are closed-loop or black box solutions, meaning the underlying analytic processes and algorithms remain unknown to the user. Insider threat cases are sensitive personnel and corporate security issues, and any deployed system must provide transparency into what factors raised an individual’s risk profile, and when.

Organizations that take proactive steps to mitigate risks must be able to explain and defend how and why they arrived at their decisions. Likewise, the solution should provide API access not only to ingest data from multiple sources, but also to share its risk insights with other enterprise systems.
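As a sketch of what such an outbound integration might exchange (all field names here are my own illustration, not a standard), a risk insight can be packaged as a self-describing JSON record that carries not just the score but the factors behind it, giving a SIEM or case-management system the transparency discussed above:

```python
# Illustrative payload for sharing a risk insight with other enterprise
# systems; the schema is hypothetical.
import json
from datetime import datetime, timezone

def risk_insight_payload(employee_id, score, contributing_factors):
    """Build a transparent, auditable record of what raised a score, and when."""
    return json.dumps({
        "employee_id": employee_id,
        "risk_score": score,
        "factors": contributing_factors,  # explains *why* the score was raised
        "generated_at": datetime.now(timezone.utc).isoformat(),
    })

payload = risk_insight_payload(
    "e1001", 0.71,
    ["off-hours data downloads", "weekend badge entries", "resignation filed"])
print(payload)
```

Because the factors travel with the score, a downstream system (or an auditor) can see exactly which evidence drove the assessment, rather than receiving an opaque number from a black box.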

In Part 2 of this post, I examine three key "must-haves" for a successful insider threat program.

This article is published as part of the IDG Contributor Network.
