Insider threats can't mask their behavior from Gurucul's risk analytics

Security tool looks at an identity's behavior from multiple dimensions to pinpoint truly risky activity


The phrase "insider threat" tends to invoke thoughts of cases like NSA contractor Edward Snowden and Galen Marsh, the Morgan Stanley financial advisor who stole data from 350,000 clients. However, a data thief doesn't have to work for an organization to be an "insider." There's a growing threat from cyber criminals who obtain privileged access to a secure network to steal data or intellectual property.

Regardless of how a miscreant gains inside access to systems, Gurucul aims to use identity-based intelligence to root out suspicious behavior and other advanced threats. Gurucul's security platform is based on predictive, identity-based behavior anomaly detection, and identifying insider threats is just one of many use cases for the platform.

Gurucul's team believes that identity – whether compromised or misused – is the root cause of many modern-day threats. Identity is the underlying threat surface that Gurucul uses for its risk analysis.

Gurucul’s approach starts by pulling identity information from a directory service, an identity and access management (IAM) platform, or an HR system—wherever an organization keeps people and account information. The next step is to build a multi-dimensional contextual identity by overlaying access, activity, alerts and intelligence information onto the identity.

The access information comes from end systems directly or from connectors into third-party IAM systems. The activity information comes from log sources or directly from end systems. Gurucul pulls active alerts from other security solutions such as data loss prevention (DLP), malware detection systems, firewalls, etc. And finally, a critical piece is to overlay intelligence information from third-party sources and Gurucul's own R&D team. This last component helps in developing libraries and algorithms for threat patterns. All of this information is used to build context around an organization's official list of identities—the people who should legitimately have access to inside systems.
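To make the idea of a contextual identity concrete, here is a minimal sketch in Python of what such a record might look like once access, activity, alerts and intelligence are overlaid onto a base identity. The field names and data sources are illustrative assumptions, not Gurucul's actual schema.

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class ContextualIdentity:
    """Illustrative contextual identity: one person, several overlays."""
    identity_id: str                  # from the directory / HR system
    department: str
    job_title: str
    location: str
    accounts: List[str] = field(default_factory=list)        # linked accounts from IAM
    entitlements: List[str] = field(default_factory=list)     # access rights from end systems
    recent_activity: List[Dict] = field(default_factory=list)  # normalized log events
    open_alerts: List[Dict] = field(default_factory=list)      # DLP, malware, firewall alerts
    threat_indicators: List[str] = field(default_factory=list)  # third-party / internal intel


def build_contextual_identity(hr_record, iam_entries, log_events, alerts, intel):
    """Overlay access, activity, alerts and intelligence onto a base identity."""
    return ContextualIdentity(
        identity_id=hr_record["employee_id"],
        department=hr_record["department"],
        job_title=hr_record["job_title"],
        location=hr_record["location"],
        accounts=[e["account"] for e in iam_entries],
        entitlements=[e["entitlement"] for e in iam_entries],
        recent_activity=log_events,
        open_alerts=alerts,
        threat_indicators=intel,
    )
```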

The next step is to use machine learning algorithms on this dataset. This includes behavioral profiling algorithms, which look at every new transaction coming in and compare that identity's new transactions against a normal, baseline behavior. An identity's actual behavior – say, accessing a system that holds confidential financial data – is compared to the identity's baseline behavior to develop a risk score. If the person behind the identity works in the Finance department, the actual behavior might have a low risk score, but if this person works in Marketing, or Manufacturing, the behavior might trigger a higher risk score.
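As a rough illustration of that baselining step, the sketch below scores how far a day's activity deviates from an identity's own history. The z-score-to-risk mapping and the 0-100 scale are assumptions for demonstration, not the vendor's actual model.

```python
from statistics import mean, stdev


def baseline_risk_score(new_count, historical_counts):
    """Score how far today's access count strays from this identity's own baseline.

    Returns an illustrative 0-100 risk score.
    """
    if len(historical_counts) < 2:
        return 50.0  # not enough history yet: treat as moderately risky
    mu, sigma = mean(historical_counts), stdev(historical_counts)
    if sigma == 0:
        sigma = 1.0
    z = (new_count - mu) / sigma
    return max(0.0, min(100.0, z * 20.0))


# e.g. a Marketing identity that normally touches the finance system 0-1 times a day
history = [0, 1, 0, 0, 1, 0, 0]
print(baseline_risk_score(12, history))  # a sudden spike scores near the top of the scale
```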

But Gurucul doesn't stop there in doing its behavioral analysis. The vendor goes to the next step of building dynamic peer groups to compare an identity's behavior to the normal behavior of other people in its peer group—people with the same job title, in the same department, at the same location, etc. Continuing with the example above, if the identity that's accessing the confidential financial database belongs to someone in Marketing, and everyone in the Marketing department routinely accesses that same database, the risk score moderates a bit.
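A hedged sketch of that peer-group adjustment follows. The thresholds and the discount factor are illustrative choices; the point is simply that behavior common across a peer group pulls the score down, while behavior no peer exhibits pushes it up.

```python
def peer_adjusted_score(identity_score, peers_accessing, peer_group_size, discount=0.5):
    """Moderate a baseline risk score using the identity's dynamic peer group.

    peers_accessing: how many peers (same title, department, location) performed
    the same activity in the comparison window. Thresholds are illustrative.
    """
    if peer_group_size == 0:
        return identity_score
    ratio = peers_accessing / peer_group_size
    if ratio >= 0.5:     # most of the peer group routinely does this too
        return identity_score * discount
    if ratio == 0:       # nobody else touches this data: elevate the score
        return min(100.0, identity_score * 1.5)
    return identity_score
```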

The use of peer group behavioral data helps to weed out false positives. It also helps to highlight aberrant behavior. For example, suppose an employee has been slowly stealing intellectual property for the past three months. Initially this might be perceived as normal behavior because that identity has been performing that activity over a long time period. However, the peer group comparison would point out that no one else in that identity's peer group is accessing this data, so the risk score might get elevated. Gurucul notes that it has found some very interesting APT situations via this technique.

A third step in building the identity's risk score applies machine learning algorithms that use self-learning and self-tuning techniques, along with custom policies and out-of-the-box rules. For example, an organization might want a rule that says the company's proprietary research information should not be accessed from outside the country, and nobody except a specific group of users should have access. Any activity on this data set outside those parameters should therefore be considered a higher risk.
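Expressed as code, a rule like that is straightforward. The sketch below is only an approximation of such a policy; the user list, country, field names and point values are all made up for illustration.

```python
ALLOWED_RESEARCHERS = {"rchen", "mpatel", "jkim"}   # hypothetical approved group
HOME_COUNTRY = "US"                                 # hypothetical home country


def research_policy_risk(event):
    """Policy-style rule: proprietary research data may only be accessed
    in-country, and only by the approved research group. Field names assumed."""
    if event["dataset"] != "proprietary_research":
        return 0
    risk = 0
    if event["geo_country"] != HOME_COUNTRY:
        risk += 60   # access from outside the country
    if event["user"] not in ALLOWED_RESEARCHERS:
        risk += 40   # user is not in the approved group
    return risk
```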

The end goal is to have one dynamic risk score for an identity that is computed from various sources of contextual awareness plus machine learning algorithms. If an identity is taken over or abused, the risk score will climb and generate alerts.
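One simple way to picture that fusion is a weighted combination of the individual signals, as in the sketch below. The weights and alert threshold are illustrative assumptions, not how Gurucul actually computes its score.

```python
def dynamic_risk_score(baseline, peer_adjusted, policy, weights=(0.4, 0.4, 0.2)):
    """Fuse the individual signals into one 0-100 identity risk score (illustrative)."""
    score = sum(w * c for w, c in zip(weights, (baseline, peer_adjusted, policy)))
    return min(100.0, score)


if dynamic_risk_score(85, 90, 100) > 70:   # hypothetical alert threshold
    print("ALERT: identity risk score exceeds threshold")
```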

Recognizing that data and logs are increasingly in the cloud, Gurucul extends its information-gathering capabilities to the cloud. There is integration with common SaaS applications, including Salesforce, Workday, Office 365 and others.

With out-of-the-box connections to data sources like directory systems and log aggregators to pull in the identity information and everything else that is being collected, Gurucul claims its platform is straightforward to implement. The vendor says customers typically start seeing value from the solution in a matter of weeks.

Gurucul provides this example of a real-life customer scenario. A manufacturing company discovered on its second day using Gurucul's risk analytics that two of its research accounts had been hijacked. Data had been leaking out of the company for a while. Further analysis of log data confirmed the previously unknown breach that Gurucul uncovered. Gurucul was able to tell the company where the attacker's activity originated, where he was VPNing in from, what hours the activity took place, what kind of activity the attacker was performing, and how many downloads of intellectual property had occurred.

According to Gurucul, a high number of hijacked accounts have been privileged accounts. The vendor notes that not all privileged accounts get correlated to a specific person. Gurucul identifies privileged accounts that have been orphaned when people leave the company. When those accounts remain unmanaged, they are quite risky.
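Detecting that kind of orphaned privileged account amounts to checking each account's owner against the identities still present in the HR or directory feed. A minimal sketch, with assumed field names:

```python
def find_orphaned_privileged_accounts(privileged_accounts, active_identities):
    """Flag privileged accounts whose owner no longer appears in the HR/directory feed."""
    return [a for a in privileged_accounts if a.get("owner") not in active_identities]


# Hypothetical example
accounts = [
    {"account": "svc_backup", "owner": "e1001"},
    {"account": "db_admin", "owner": "e2047"},   # e2047 has left the company
]
print(find_orphaned_privileged_accounts(accounts, {"e1001", "e1002"}))
```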

Behavior analysis is becoming more popular as a means of detecting insider threats and other types of network intrusions. Gurucul goes to great lengths to examine the identity behind suspicious activity from multiple dimensions, reducing false positives and clarifying when someone with inside access poses a real risk to data and information security.
