We live in the age of the data breach. It seems every newspaper and newscast reports yet another breach every day. Media outlets themselves have even become targets of these attacks.
However, many of the perceived causes of breaches and failures of technology are actually myths. These myths obscure a clear path to increased security and better risk management. Debunking these myths is an important step to improve the effectiveness of our security defenses against future breach attempts.
Myth: Most threats and attacks are sophisticated
With today’s advanced persistent threats, zero-day exploits and increasingly sophisticated targeted attacks, many think the attacks are too hard to stop. While there is no doubt that stopping these kinds of attacks is very difficult, the fact is that according to the 2013 Verizon Data Breach Report, a staggering 99% of all breaches were not highly difficult, and 97% could have been stopped with simple or intermediate controls.
While many of today's breaches do involve zero-day or other advanced attack techniques, they almost always contain some rudimentary, garden-variety attack vector that could and should be thwarted.
Myth: My technology is too slow, old or obsolete
This may be the single biggest myth in IT, let alone security. How many times have we heard “my computer did not function properly”? Other flavors of this myth include "my technology was too slow, too old, or out of date."
In security specifically, we live in a “next-gen” world. If there is a next-gen tool in a particular category, it is immediately considered better and makes the previous generation obsolete. Or so the myth goes. We hear about an attack being successful and immediately think we need a new tool or a new technology to prevent it from happening again.
We don’t think too much about why our present technology did not prevent or stop this new attack. Was it really a case of the technology being incapable of thwarting the attack? More often than not, an examination of the facts will show that the technology deployed could have successfully protected you had it not been misconfigured. Misconfigurations are a far more likely cause of a data breach than obsolete technology.
Misconfigurations could involve a firewall rule allowing traffic to or from a specific IP, or leaving open a port that should have been closed. Misconfigured network settings, such as who has permission to access which files and assets on the network, are a major source of data breaches. A server can also be misconfigured, for example with incorrectly set file permissions.
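As an illustration, even a quick automated audit of firewall rules against a port policy can catch the kind of misconfiguration described above. This is a minimal sketch; the rule format, rule list and blocked-port policy here are hypothetical, not any particular firewall's configuration.

```python
# Sketch: audit a (hypothetical) firewall rule list for allow rules
# that expose ports which policy says must stay closed.
BLOCKED_PORTS = {23, 445, 3389}  # e.g. telnet, SMB, RDP - assumed policy

rules = [
    {"action": "allow", "port": 443,  "source": "0.0.0.0/0"},
    {"action": "allow", "port": 3389, "source": "0.0.0.0/0"},  # misconfiguration
    {"action": "deny",  "port": 23,   "source": "0.0.0.0/0"},
]

def find_misconfigurations(rules, blocked_ports):
    """Return allow rules that open a port policy says should be closed."""
    return [r for r in rules
            if r["action"] == "allow" and r["port"] in blocked_ports]

for bad in find_misconfigurations(rules, BLOCKED_PORTS):
    print(f"Port {bad['port']} open to {bad['source']} - should be closed")
```

Running a check like this on every configuration change, rather than once a year, is what turns it from an audit artifact into an actual control.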
Misconfiguration can also take the form of a setting on an endpoint that resulted in a patch or remediation not being applied. For instance, it could be as simple as having automatic updates turned off, preventing a new patch from being deployed.
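A sketch of the endpoint side of this: flag machines whose automatic-update setting is off, so patches never land on them. The inventory structure and hostnames are hypothetical; real data would come from your endpoint-management tool.

```python
# Sketch: flag endpoints with automatic updates disabled - on these
# machines a newly released patch will simply never be deployed.
endpoints = [
    {"host": "hr-laptop-01",  "auto_update": True},
    {"host": "dev-desktop-07", "auto_update": False},  # patch never lands here
]

unpatched_risk = [e["host"] for e in endpoints if not e["auto_update"]]
print("Auto-update disabled on:", unpatched_risk)
```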
Again, the Verizon Data Breach Report and other data breach studies show that sensible low- and mid-level controls and proper configuration of existing security technology are adequate to stop the overwhelming majority of attacks.
Human error is responsible for many more data breaches than older technology. That is not to say that technology never becomes obsolete; of course it does, and sometimes it is the cause. For instance, trying to maintain Windows XP systems after Microsoft has discontinued support could leave you vulnerable to attack. But that situation is far rarer than a simple misconfiguration.
Before blaming the technology, take a good look in the mirror and make sure that your perimeter devices, network, servers and endpoints are all configured correctly.
Myth: Network security controls are useless since all attacks target port 80 or layer 7
Oh, how the web app security vendors would love us to believe this one. However, this is another myth about data breaches. While many attack attempts come in via port 80, this does not mean that existing technologies in network security could not be used to block them.
A firewall, for example, can be used to stop attacks even with port 80 or other common ports left open. Blocking via IP, whitelisting IPs, and other firewall configuration management tactics can block many application layer 7 attacks despite popular myths to the contrary.
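To make the point concrete, here is a minimal sketch of IP allowlisting in front of a port-80 service, using only the Python standard library. The allowed networks are hypothetical stand-ins (one is an RFC 5737 documentation range), not a recommendation for your environment.

```python
# Sketch: allowlist-based filtering - accept a connection only if the
# client IP falls inside one of the permitted networks.
import ipaddress

ALLOWED_NETWORKS = [
    ipaddress.ip_network("10.0.0.0/8"),      # assumed internal range
    ipaddress.ip_network("203.0.113.0/24"),  # documentation range, stand-in
]

def is_allowed(client_ip: str) -> bool:
    """Return True if the client IP is on the allowlist."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in ALLOWED_NETWORKS)

print(is_allowed("10.1.2.3"))      # inside 10.0.0.0/8 -> allowed
print(is_allowed("198.51.100.9"))  # not on the allowlist -> blocked
```

The same membership test is what a firewall does at line rate; the value of doing it at all, anywhere in the stack, is that an attacker's layer 7 payload never reaches the application in the first place.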
Yes, application-specific defenses like NGFW, WAF and other layer 7 defenses are effective against these attacks (assuming they are properly configured), but if you don’t have the budget to afford these luxuries there is no need to throw in the towel—there is still a lot you can do. Tightening your network controls and doing all you can to avoid misconfigurations is a viable and surprisingly effective strategy.
Myth: If I keep my systems patched, I can prevent all breaches
If only this were true, what a simpler world this would be. The “I can patch everything, can’t I?” approach fails on several fronts. First of all, just staying on top of all of the patches that are released for the software you run in your organization can be a daunting task.
In most organizations, you don’t just apply a patch when it comes out. There is a quality assurance process where the patch is tested to make sure it does not break something else. By the time a new patch is tested and made ready to implement system-wide, there is already a new patch that must be tested and rolled out as well. While this may be a great form of job security, it is also like living on a hamster wheel. No matter how fast you run, the sheer volume of patches means you never catch up.
Of course, the other side of this dilemma is that these patches are all driven by the finding of vulnerabilities. So while a good chunk of your resources is tasked with testing and rolling out patches, another part of the team is out scanning and testing for vulnerabilities.
Scanning for vulnerabilities is not as easy as it used to be, either. With so many mobile and remote devices, they are not always on the network when you run your vulnerability scan. Tracking, scanning and testing for vulnerabilities can be a bigger job than patching. Between the two, you can rest assured that a substantial amount of your allocated budget and resources will be sunk.
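The mobile-device problem above is, at bottom, a coverage-tracking problem: which assets have gone too long without a scan? A minimal sketch, with a hypothetical inventory and an assumed 30-day policy:

```python
# Sketch: flag devices whose last vulnerability scan is older than a
# threshold - typically the remote laptops that keep missing scan windows.
from datetime import date, timedelta

MAX_SCAN_AGE = timedelta(days=30)  # assumed policy
today = date(2014, 6, 1)           # fixed "today" for the example

last_scanned = {
    "web-server-01":   date(2014, 5, 25),
    "sales-laptop-04": date(2014, 3, 2),   # remote, rarely on the network
}

stale = [host for host, seen in last_scanned.items()
         if today - seen > MAX_SCAN_AGE]
print("Missed by recent scans:", stale)
```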
Finally, remember that even without a zero-day attack, and even if you stay on top of your vulnerability management and patching, the weakest link in your defense still sits behind the keyboard. Being socially engineered into giving up your password or installing malware on your device could make all of your hard work and effort for naught.
So while patching and scanning are a form of job security for some and will at the very least keep you busy, they are not a cure for data breaches.
Myth: It’s impossible to prevent breaches; I should just concentrate on response
There is a very prevalent trend in the security industry that says data breaches and security incidents are unstoppable. Instead of putting so many resources into preventing a data breach, the tendency is to put resources into incident discovery and breach response.
As the American general Anthony McAuliffe replied when asked to surrender at the Battle of the Bulge, “Nuts!” Giving up and not trying to stop data breaches is not and never will be a successful strategy. One hundred percent prevention of data breaches may not be possible, but that doesn’t mean it is not worth trying.
There is obviously a balance that needs to be struck. We do need to discover security breaches as fast as possible. We do need a well-thought-out plan to respond to data breaches. However, let’s be very clear that the balance must tip in favor of stopping data breaches where possible and reasonable.
Totally preventing data breaches, while a worthy goal, is probably not possible. However, data breaches are, by and large, acts of opportunity. Understanding how they occur and separating the truth from the myths can make you much less likely to be the next victim. Gaining insight into the state of your network and implementing even basic controls and management can decrease the likelihood that your network will be breached.