A security evangelist shares his best practices

Anyone who has the word “evangelist” in his business title must really love his job. This week, John Linkous, Security and Compliance Evangelist at eIQnetworks, shares his best practices for information security.

For this week’s newsletter, I reached out to eIQnetworks’ Security and Compliance Evangelist, John Linkous. eIQnetworks is the maker of SecureVue, a comprehensive security, log management and compliance automation software package for the enterprise. The new 3.2 version of SecureVue offers a 6-tier scalable architecture, enabling the product to manage security for the world’s largest enterprises globally. With this architecture, SecureVue can process up to a million events per second.

In his role as evangelist, Linkous gets a worldwide perspective of network security issues. I asked him to share with us his five best practices for information security:

Know Your Assets. If you don’t know what you have, you can’t manage it. Consequently, it’s critical for information security managers to have complete, up-to-date knowledge of their information assets, from infrastructure devices to servers, workstations, peripherals, and data repositories such as databases and e-mail systems. While most information security organizations can account for the technology assets they already know about, it’s just as critical to have visibility into what’s not expected: the new device that suddenly shows up on the network; the unexpected wireless access point; the unusual network protocols moving across the firewall. These unanticipated assets can introduce massive risk into the environment, including new attack vectors that can be exploited.
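To make that “flag the unexpected” step concrete, here is a minimal sketch: it compares hosts that answer on the network against a known inventory and alerts on anything unaccounted for. The inventory file name, subnet, and crude TCP-connect probe are all illustrative assumptions, not how SecureVue or any particular product performs discovery.

```python
# Minimal sketch: flag network assets that are not in the known inventory.
# Assumes a plain-text inventory file (one IP per line) and uses a simple
# TCP connect probe for discovery -- illustrative only.
import socket
from ipaddress import ip_network

def discover_live_hosts(cidr: str, port: int = 22, timeout: float = 0.3) -> set[str]:
    """Very crude discovery: a host counts as 'live' if the probe port answers."""
    live = set()
    for host in ip_network(cidr).hosts():
        try:
            with socket.create_connection((str(host), port), timeout=timeout):
                live.add(str(host))
        except OSError:
            pass  # closed, filtered, or host is down
    return live

def load_inventory(path: str) -> set[str]:
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

if __name__ == "__main__":
    known = load_inventory("asset_inventory.txt")   # hypothetical inventory file
    seen = discover_live_hosts("192.168.1.0/28")    # hypothetical subnet
    for rogue in sorted(seen - known):
        print(f"ALERT: unexpected asset on network: {rogue}")
```

A real program would probe multiple ports (or use ARP and passive traffic analysis), but the principle is the same: the interesting output is the difference between what you see and what you expected to see.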

Reduce the “Noise Level” of Information Security Monitoring. Information security is a discipline based on discovering the unusual. While it’s easy to marshal an incident response team to address something obvious – say, a network worm propagating throughout the environment – it’s much harder to address subtler anomalies, such as failed logons.

In a large enterprise on a typical Monday morning, security monitoring teams may see dozens, perhaps hundreds, of failed logons from employees who have “fat-fingered” their credentials. Unfortunately, most organizations don’t have the resources to track down each and every failed logon to determine whether it was accidental or malicious. Instead, they simply acknowledge the event in their console – but of course, that’s not really security.

What if one of those failed logons was the first step in a slow brute-force credential attack? Without the ability to reduce the “noise level” of security monitoring by eliminating the false positives, security teams aren’t practicing true security.

One solution to this problem is correlation. For example, rather than triggering an alert on every single failed logon, a security monitoring team could alert only on failed logons to systems that subsequently experience a successful logon followed by a high-privilege event – such as the creation of a new user account, or the installation of new software. That context around the initial failed logon event is what surfaces the proverbial “needle in the haystack.”
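That rule translates naturally into code. The sketch below is one illustrative way to implement it: walk a time-ordered event stream and raise an alert only when a failed logon on a host is followed, within a window, by a successful logon and then a high-privilege event. The event field names, window length, and privileged-event types are assumptions for the example, not any product’s schema.

```python
# Illustrative correlation rule: failed logon -> successful logon ->
# high-privilege event on the same host within a time window.
from datetime import datetime, timedelta

WINDOW = timedelta(hours=1)
HIGH_PRIV = {"user_account_created", "software_installed"}

def correlate(events):
    """events: iterable of dicts with 'time', 'host', 'type', sorted by time."""
    alerts = []
    state = {}  # host -> last stage reached: "failed" or "success"
    stamp = {}  # host -> time of the initial failed logon
    for e in events:
        host = e["host"]
        # Expire chains that fell outside the correlation window.
        if host in stamp and e["time"] - stamp[host] > WINDOW:
            state.pop(host, None)
            stamp.pop(host, None)
        if e["type"] == "logon_failed":
            state[host], stamp[host] = "failed", e["time"]
        elif e["type"] == "logon_success" and state.get(host) == "failed":
            state[host] = "success"
        elif e["type"] in HIGH_PRIV and state.get(host) == "success":
            alerts.append((host, stamp[host], e["time"], e["type"]))
            state.pop(host, None)
            stamp.pop(host, None)
    return alerts

events = [
    {"time": datetime(2009, 10, 5, 9, 0), "host": "db01", "type": "logon_failed"},
    {"time": datetime(2009, 10, 5, 9, 2), "host": "db01", "type": "logon_success"},
    {"time": datetime(2009, 10, 5, 9, 5), "host": "db01", "type": "user_account_created"},
]
for host, start, end, kind in correlate(events):
    print(f"ALERT: {host}: failed logon at {start} escalated to {kind} by {end}")
```

The specific rule matters less than the pattern: correlation turns thousands of low-value events into a handful of high-context alerts a team can actually investigate.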

Conduct Quantitative Risk Management. Bad things happen to technology all the time: from hanging operating system processes, to attacks from inside and outside the network, to unauthorized configuration changes – every type of technology is exposed to risk. The key is understanding where those risks are and, most importantly, how they impact the business.

Quantitative risk management for information security takes real data directly from technology assets – configuration changes, vulnerability profiles, unusual system activity, and other pieces of information – and assigns quantified levels of risk to them, giving security managers and risk professionals a clear picture of how exposed a given system, application, or database actually is. Using this information, security managers can mitigate those risks by implementing new and better security controls, reducing the likelihood of impact to the business in the process.
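As a toy illustration of what “assigning quantified levels of risk” can look like, the sketch below combines a few likelihood signals (worst open vulnerability, configuration drift, anomalous activity) and scales them by business impact. The inputs, weights, and caps are invented for demonstration; a real program would calibrate them to the organization.

```python
# Illustrative quantitative risk score. The formula and weights below are
# assumptions for demonstration, not a standard or any vendor's method.
from dataclasses import dataclass

@dataclass
class AssetRiskInputs:
    name: str
    cvss_scores: list          # CVSS base scores of open vulnerabilities (0-10)
    config_deviations: int     # settings deviating from the secure baseline
    anomalous_events_24h: int  # unusual system events in the last day
    business_impact: float     # 0.0 (low) to 1.0 (critical to the business)

def risk_score(a: AssetRiskInputs) -> float:
    """Weighted 0-100 score: likelihood-style inputs scaled by business impact."""
    vuln = max(a.cvss_scores, default=0.0) / 10.0      # worst open vulnerability
    drift = min(a.config_deviations / 10.0, 1.0)       # cap at 10 deviations
    anomaly = min(a.anomalous_events_24h / 50.0, 1.0)  # cap at 50 events
    likelihood = 0.5 * vuln + 0.3 * drift + 0.2 * anomaly
    return round(100 * likelihood * a.business_impact, 1)

payroll_db = AssetRiskInputs("payroll-db", [9.3, 5.0], 4, 12, business_impact=1.0)
print(payroll_db.name, risk_score(payroll_db))  # 63.3 -> prioritize remediation
```

The value of even a crude score like this is that it ranks remediation work by business impact rather than by whichever alert happens to be loudest.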

Converge Security Information. Most organizations use a combination of information security tools to monitor and enforce security policies. From anti-virus software, to vulnerability scanners, to a range of tools with acronyms like DLP, NAC, IDM, IDS/IDP, and SIEM, security professionals must master a broad toolset to maintain security.

Unfortunately, this also means that they must frequently switch consoles to view all of this data. More importantly, these individual point solutions don’t “talk” to each other, so security professionals are relegated to manually correlating data between them. Without a comprehensive, holistic view of all the security data relevant to the enterprise, security teams face a communication gap that results in reactive, rather than proactive, security.

Enterprises with multiple security point solutions should consider technologies that allow them to converge all types of security data – logs, configuration and asset data, network flows, vulnerabilities, and performance metrics – into a single, integrated console, both to drive efficiency and to change security from a reactive to a proactive discipline.
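Convergence starts with normalization: records from each point tool must be mapped into a common schema before any single console can sort, filter, or correlate them. The sketch below shows that step with three invented source formats; the field names are hypothetical, not any vendor’s actual output.

```python
# Minimal sketch of convergence: normalize records from different point tools
# into one common event schema so a single view (or script) can query them.
# All source record formats below are invented for illustration.
def from_antivirus(rec):
    return {"source": "av", "host": rec["machine"], "severity": rec["threat_level"],
            "summary": f"malware detected: {rec['signature']}"}

def from_ids(rec):
    return {"source": "ids", "host": rec["dst_ip"], "severity": rec["priority"],
            "summary": f"intrusion signature: {rec['rule']}"}

def from_vuln_scanner(rec):
    # Map CVSS (0-10) onto the shared 0-5 severity scale.
    return {"source": "vuln", "host": rec["target"], "severity": round(rec["cvss"] / 2),
            "summary": f"open vulnerability: {rec['cve']}"}

unified = sorted(
    [from_antivirus({"machine": "ws-114", "threat_level": 4, "signature": "Conficker.B"}),
     from_ids({"dst_ip": "10.0.2.15", "priority": 5, "rule": "SQL injection attempt"}),
     from_vuln_scanner({"target": "10.0.2.15", "cvss": 9.3, "cve": "CVE-2008-4250"})],
    key=lambda e: e["severity"], reverse=True)

for e in unified:  # one view across tools; note two findings on the same host
    print(f'{e["severity"]} [{e["source"]}] {e["host"]}: {e["summary"]}')
```

Once everything shares a schema, cross-tool questions – “show me every host with both an open vulnerability and an IDS hit” – become trivial queries instead of manual console-hopping.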

Standardize and Audit System Configurations. Both malicious attackers and malware know that the low-hanging fruit of attack vectors lies in misconfigured operating systems, network devices, and applications; in fact, an entire industry has been built around technical security controls such as patch management and access control management.

Security managers must walk a tightrope: systems have to be configured to deliver the functionality the business needs while still adequately protecting sensitive information. But building secure systems is only half the story; security professionals need to continuously monitor those systems to ensure that their configurations remain secure. Without continuous, centralized configuration auditing, systems can easily “drift” out of their secure configurations as updates are made.
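Here is a minimal sketch of that continuous auditing step: compare each host’s current settings against an approved secure baseline and report every deviation. The setting names and values are hypothetical; a real audit would pull them from the systems themselves (registry keys, sshd_config, group policy, and so on) on a schedule.

```python
# Illustrative configuration drift detection: diff a host's current settings
# against an approved secure baseline and report every deviation.
SECURE_BASELINE = {
    "password_min_length": 12,
    "telnet_enabled": False,
    "audit_logging": True,
    "smb_signing_required": True,
}

def audit(host: str, current: dict) -> list[str]:
    findings = []
    for setting, expected in SECURE_BASELINE.items():
        actual = current.get(setting, "<missing>")
        if actual != expected:
            findings.append(f"{host}: {setting} = {actual!r} (expected {expected!r})")
    return findings

# A host that has drifted after an update re-enabled telnet:
current_config = {"password_min_length": 12, "telnet_enabled": True,
                  "audit_logging": True, "smb_signing_required": True}
for finding in audit("fileserver01", current_config):
    print("DRIFT:", finding)
```

Run on a schedule against every host, a report like this catches drift long before an attacker does.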

Linda Musthaler is a Principal Analyst with Essential Solutions Corporation. You can write to her at LMusthaler@essential-iws.com.



Copyright © 2009 IDG Communications, Inc.