End user credentials are one of the weakest links in any security system. They are often abused by their legitimate owners and stolen by malicious actors. The 2014 Verizon Data Breach Investigations Report found that 88% of insider breaches involve abuse of privileges, and that 82% of security attacks involve stolen user credentials.
An external attacker might use a stolen set of credentials to make the initial infiltration of a network, to make lateral movements inside the network to gain access to sensitive data or information, or to exfiltrate data to complete the breach. This type of activity is hard to detect because the credentials themselves are legitimate—they are just being used the wrong way.
Obviously, traditional perimeter security can't catch this type of activity because the credentials carry the actor past the scrutiny of those defenses. The only effective way to detect insider attacks, whether the actor is a true insider such as an employee or contractor or someone using stolen credentials, is user behavioral analytics (UBA). (Gartner uses the term "user and entity behavior analytics" to indicate that the behavior can belong to a device as well as to a human.)
UBA is growing more sophisticated, using machine learning and big data analysis to precisely identify truly malicious activity on a network. The challenge for such solutions is to optimize the use of security analysts' time by avoiding false positives and by giving analysts complete context when activity genuinely appears to be malicious.
Fortscale has upped its game in the UBA space with a new release that focuses on those two areas in particular: reducing false positives and providing in-depth insight on anomalies and indicators so a security analyst has everything in one place to conduct an investigation.
Fortscale's UBA solution operates in four stages. The first stage involves ingesting user access data. Rather than deploying agents on endpoints or other proprietary data collectors, Fortscale ingests data from existing logs that track user access. The data might come from a SIEM or similar system. Fortscale is focused on profiling user access – when, where and what kind of access a user had over time – and this is information that is already commonly collected and stored in logs. All of the data goes straight into an on-premises Hadoop data store. Fortscale also takes contextual data from directory services in order to understand who the users are and what access privileges they legitimately have.
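To make the ingestion stage concrete, here is a minimal sketch of turning raw access-log lines into structured events. The comma-separated log format, field names and sample values are illustrative assumptions, not Fortscale's actual schema:

```python
from datetime import datetime

def parse_access_log(line):
    """Parse one access-log line into a structured event.

    Assumed (hypothetical) format: "timestamp,user,source_host,target_host,action"
    """
    ts, user, src, dst, action = line.strip().split(",")
    return {
        "time": datetime.fromisoformat(ts),
        "user": user,
        "source": src,
        "target": dst,
        "action": action,
    }

# Two sample events: a routine daytime login and a late-night one
events = [parse_access_log(l) for l in [
    "2016-03-01T09:05:00,alice,laptop-12,file-srv-1,login",
    "2016-03-01T23:45:00,alice,unknown-host,db-srv-7,login",
]]
```

In a real deployment this normalization would happen at scale as logs stream out of the SIEM into the Hadoop store; the point is simply that profiling works on ordinary access records, not on agent telemetry.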
The next step is to take all of this user/entity access information and create a baseline profile for each user. This profile looks at users from multiple dimensions, such as what devices a person typically uses to access the network, what a person's typical work hours are, and where he uses a VPN to log into the network. The baseline accumulates historical data so the system can establish what constitutes normal behavior over time.
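A toy version of this baselining step can be sketched as folding historical events into a per-user profile. The event schema and the three dimensions tracked here (devices, hours, target servers) are simplified assumptions standing in for the many dimensions a real system would model:

```python
from collections import defaultdict

# Hypothetical historical access events (illustrative schema)
history = [
    {"user": "alice", "device": "laptop-12", "hour": 9,  "target": "file-srv-1"},
    {"user": "alice", "device": "laptop-12", "hour": 10, "target": "file-srv-1"},
    {"user": "alice", "device": "laptop-12", "hour": 14, "target": "mail-srv"},
]

def build_baselines(events):
    """Fold historical events into a per-user profile of the
    devices, login hours and target servers seen for that user."""
    profiles = defaultdict(lambda: {"devices": set(), "hours": set(), "targets": set()})
    for e in events:
        p = profiles[e["user"]]
        p["devices"].add(e["device"])
        p["hours"].add(e["hour"])
        p["targets"].add(e["target"])
    return profiles

baselines = build_baselines(history)
```

A production baseline would be statistical (frequencies and distributions rather than plain sets), but the shape of the computation is the same: history in, per-user profile out.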
The third stage uses data analytics to detect anomalous behavior. Fortscale does this in several ways. One is by comparing a user's current behavior to his historical baseline behavior. For example, maybe he logged in for the first time ever from a distant location at 2:00 AM. This could mean the person is traveling and can't sleep, or it could mean his credentials have been stolen and a malicious actor is using them.
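The baseline comparison described above can be illustrated with a deliberately simple check: flag every dimension of a new event that falls outside the user's profile. This set-membership test is a toy stand-in for Fortscale's statistical models, and the field names are assumptions:

```python
def anomaly_indicators(event, profile):
    """Compare one event to a user's baseline profile and return
    the dimensions on which it deviates."""
    indicators = []
    if event["device"] not in profile["devices"]:
        indicators.append("new_device")
    if event["hour"] not in profile["hours"]:
        indicators.append("unusual_hour")
    if event["target"] not in profile["targets"]:
        indicators.append("new_target")
    return indicators

# Alice's baseline, and a 2:00 AM login from an unfamiliar host
profile = {"devices": {"laptop-12"}, "hours": {9, 10, 14}, "targets": {"file-srv-1"}}
suspicious = {"user": "alice", "device": "unknown-host", "hour": 2, "target": "db-srv-7"}
print(anomaly_indicators(suspicious, profile))  # deviates on all three dimensions
```

As the article notes, a single deviation is ambiguous (a sleepless traveler looks much like a credential thief); that ambiguity is why the later stages weigh multiple indicators together rather than alerting on any one of them.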
Fortscale also compares a user's activities to those of his peers—not necessarily peers on the org chart, but people who perform the same kind of duties and job activities. Let's say the user is a code developer in India. Fortscale will compare his access activities to other developers in India, and perhaps to those in other locations as well. If the system detects that this particular user is accessing a specific server and no other developers access that same server, it could be considered anomalous.
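The peer-group check from the developer example above amounts to asking which servers a user touches that none of his peers do. The peer groupings and server names below are hypothetical:

```python
# Hypothetical server-access sets for a peer group of developers;
# dev3 is the user under review
peer_access = {
    "dev1": {"build-srv", "git-srv"},
    "dev2": {"build-srv", "git-srv"},
    "dev3": {"build-srv", "git-srv", "finance-db"},
}

def peer_outliers(user, access_map):
    """Return the servers this user accesses that none of the peers do."""
    peers = set().union(*(v for k, v in access_map.items() if k != user))
    return access_map[user] - peers

print(peer_outliers("dev3", peer_access))  # {'finance-db'}
```

In practice the peer groups themselves would be learned from behavior rather than taken from the org chart, which is exactly the distinction the article draws.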
Fortscale says it has put a lot of effort into its analytics capabilities to ensure the system understands what is malicious versus simply an unusual action by an employee, with the goal being to weed out false positives.
The result of all this comparison is a risk score for each event and for each user. These scores indicate the most suspicious activity that might require further investigation. And the latest release of the software has added Fortscale Smart Alerts, which package up anomalous events into threat indicators and alerts and then present them via a dashboard in prioritized order. When a security analyst looks at a specific alert, he has the context, the insights and the conclusions on why this anomaly is worth investigating.
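One way to picture the scoring and prioritization step is a weighted combination of indicators rolled up into a ranked alert list. The indicator names, weights and cap below are invented for illustration; a real system would learn such weights from data rather than hard-code them:

```python
# Hypothetical per-indicator weights (illustrative only)
WEIGHTS = {"new_device": 30, "unusual_hour": 20, "new_target": 25, "privileged_account": 40}

def alert_score(indicators):
    """Combine indicator weights into a single alert risk score, capped at 100."""
    return min(100, sum(WEIGHTS.get(i, 10) for i in indicators))

alerts = [
    {"user": "alice", "indicators": ["new_device", "unusual_hour", "new_target"]},
    {"user": "bob",   "indicators": ["unusual_hour"]},
]
for a in alerts:
    a["score"] = alert_score(a["indicators"])

# Present the dashboard's view: highest-risk alerts first
ranked = sorted(alerts, key=lambda a: a["score"], reverse=True)
```

The ranking is what buys back analyst time: rather than triaging every anomaly, the analyst starts at the top of a short, prioritized list.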
For example, a security analyst can drill down on an alert to see how many indicators are part of that alert. Those indicators can be from different data sources and of different types. A single action can trigger an indicator. Because the baseline behavior of a user is known, if that user goes to a server he normally doesn't access, the action is an anomaly and it becomes an indicator. Another indicator might show that the user is targeting a high number of devices for access within a certain period of time. Yet another indicator might be a tag marking this as a high-privilege account. The machine learning determines that these indicators together are worthy of further investigation.
For each of the indicators Fortscale pulls together, the system also provides the raw data of the event as well as the normal behavior. An interesting example is the target device indicator. Fortscale points out that a particular server is an anomalous target device for this user, and shows the analyst graphically what the user's normal target devices are. Because the user is accessing a device he normally would not visit, the system triggers an indicator. Fortscale can provide a different angle on this view by showing all of the users that typically access that particular device, which helps the analyst understand the role or purpose of the device and put it into context with the user that has accessed it.
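That second angle, showing who typically accesses a given device, is essentially a reverse lookup over the same access records. The record format and names here are hypothetical:

```python
# Hypothetical (user, target device) access records
accesses = [
    ("alice", "db-srv-7"),
    ("dba1",  "db-srv-7"),
    ("dba2",  "db-srv-7"),
    ("alice", "file-srv-1"),
]

def users_of_device(device, records):
    """Reverse view: the set of users who access a device,
    which helps an analyst infer the device's role."""
    return sorted({u for u, d in records if d == device})

print(users_of_device("db-srv-7", accesses))  # ['alice', 'dba1', 'dba2']
```

Seeing that a flagged server is otherwise touched only by database administrators, for instance, tells the analyst far more about the anomaly than the raw event alone.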
User behavior is difficult to predict. That's why deterministic rules don't work well when looking for suspicious activity. UBA is the best type of tool available today to detect malicious insider activity.