Why network execs need to care about the applications on the network

Opinion
Feb 08, 2006
9 mins
Networking

Is application and content security on your radar?

We have received lots of very interesting feedback on the recent newsletters about Gartner’s report on the IT profession in the year 2010. Some of you agree (albeit with some sadness) that Gartner is spot on with its predictions, while others disagree with certain aspects. We’ll publish some of the feedback next week. This week, though, I want to draw your attention to applications management and security.

Last week, Jim Metzler, vice president of Ashton, Metzler & Associates, wrote in Network World that network pros now have to sink their teeth into managing apps. We’ve touched on this subject before as network execs come under pressure to give more oomph to networked apps. Application acceleration and content security are also big pieces of what network pros have to deliver.

To broaden your education in this area, Network World is running a series of one-day IT Roadmap conferences with six tracks covering application acceleration; application and content security; wireless LAN and enterprise mobility; storage and data compliance; VoIP and collaboration; and network management.

In the run-up to the first IT Roadmap, which takes place on March 20 in Boston (click here for more details), I’ll be quizzing the analyst-keynoters of three of the tracks (application and content security; storage and data compliance; and VoIP and collaboration) about the hot issues in their areas and what we’ll learn at the conference. (The conference will also be traveling to Boston, Chicago, Dallas and the Bay Area; you can keep up to date with developments by pre-registering here.)

First up in the IT Education and Training hot chair is Nemertes Research analyst Andreas Antonopoulos, who heads up the application and content security track. (Andreas also writes our New Data Center Strategies newsletter.) Here is our Q&A:

Q: What will you discuss in your keynote?

A: Enterprises today are facing unprecedented challenges on multiple fronts. On the one hand, we have an explosion of threats and threat delivery modes, while simultaneously we have an explosion of connectivity, bandwidth and mobility. Few companies are vertically integrated any more, so the network has become the primary means by which companies coordinate their externalized supply chains, partner relations and customer interactions. In this environment, formerly clear distinctions between “inside” and “outside” are disappearing and the old approaches to information security are becoming increasingly obsolete. In simpler terms: the perimeter is Swiss cheese and the need to combat threats is higher than ever. This keynote will address the challenges companies are facing and look at some of the best practices to solve them.

Q: The description of the application and content security track says that enterprises are moving away from perimeter-based security and towards “defense-in-depth” architectures. What is meant by “defense-in-depth”?

A: Unlike a simple “perimeter” approach, security professionals have been talking for years about layering defenses. The basic concept is a bit like wearing lots of layers of clothes in cold weather: it works better than a single thick layer (the perimeter approach). The layered approach is more flexible and if you lose a layer you still have several more layers to rely on.

In security, if one layer fails you want to have another layer behind it. This makes it harder to penetrate a network and even harder to do so while remaining undetected. For most companies, however, a layered approach to security was cost prohibitive, so they simplified it down to a single layer, the perimeter firewall. As threats have increased, layering defenses makes more and more sense from a cost/benefit (or cost/risk avoidance) perspective. In practical terms, this could be something like anti-virus at the gateway, in the mail server, on the desktop and on the mobile phone/PDA.
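As an illustrative sketch (not from the article), the layered idea can be modeled as a chain of independent scanners: a sample that slips past one layer may still be caught by the next. The signature sets and layer names below are hypothetical.

```python
# Hypothetical defense-in-depth sketch: the same message is checked
# at several independent layers, each with its own signature set.
GATEWAY_SIGNATURES = {"worm.a"}
MAIL_SERVER_SIGNATURES = {"worm.a", "trojan.b"}
DESKTOP_SIGNATURES = {"trojan.b", "spyware.c"}

LAYERS = [
    ("gateway", GATEWAY_SIGNATURES),
    ("mail server", MAIL_SERVER_SIGNATURES),
    ("desktop", DESKTOP_SIGNATURES),
]

def scan(message_signature):
    """Return the name of the first layer that blocks the message, or None."""
    for name, signatures in LAYERS:
        if message_signature in signatures:
            return name
    return None  # slipped through every layer

# "spyware.c" passes the gateway and the mail server unnoticed but is
# caught at the desktop: the extra layers provide the depth.
```

The point of the toy model is that the layers overlap imperfectly, so a gap in any single layer is not fatal, which is exactly the cost/risk argument made above.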

Q: This suggests that enterprises are moving away from perimeter-based security – what is the reason for this? Is perimeter-based security failing us?

A: Yes, the perimeter makes less and less sense. Where is the perimeter? In the past, it was “around the edges of the network.” Today the network extends applications to partners, suppliers and customers. It’s harder to find an “edge” to it. So as companies become more wired and more distributed and more mobile the perimeter becomes more and more porous. Eventually it shrinks back to surround just the data center. But beyond these problems, the perimeter was always a somewhat flawed concept because it did not provide any depth to your defenses. If someone is able to breach that single layer they are “inside” and free to roam anywhere in the internal network. Add to that the fact that most attacks come from the inside and you can see why this is not a good risk management approach.

I think the coup de grâce for the perimeter came with the emergence of rapidly self-propagating threats. Once you had worms that could traverse the entire Internet in less than a day, the perimeter was no longer up to the task. Any laptop moving in and out of the enterprise could carry some horrible threat with it and re-infect everything.

Q: How real is the threat from within the enterprise and what are the top three things that network execs should do to safeguard data from internal threats?

A: Well, every year we get another FBI study saying that the insider threat represents more than 75% of attacks. And every year we ask IT executives about this problem: “Yes, we think it is a serious risk,” and “No we are not really doing anything specific about it.” So there’s been a disconnect between risk management policies and the source of the threat.

The reason, I think, is that it is so difficult to protect against an insider. Insiders already have legitimate access, so an attack from an insider is a much smaller “deviation” from policy. Hard to detect. The insider has a lot of knowledge of the systems, the culture and the policies, and can exploit this knowledge to cause harm. Regulatory compliance has changed this equation somewhat by imposing requirements for “separation of duties.” But there will always be a catch-22 – “Quis custodiet ipsos custodes?” or “who will watch the watchers?” Certain groups such as system administrators and, ahem, security professionals have elevated privileges. It is very hard to control their actions.

Q: What are the areas that were overlooked by IT security and network professionals last year and why should they be top-of-mind for this year?

A: I think spyware was treated as just a small nuisance, but it then emerged as a significant operational overhead. One IT executive I spoke to during my security research benchmark was unsure about the “true cost” of spyware. I suggested they run a search through their desktop help desk ticketing system for the term “spyware” and do some analysis. A week later they sent me an e-mail with the results: they were horrified to discover that almost 40% of their desktop problem tickets were spyware-related. Since they didn’t have a classification for “spyware” in their system, this problem had flown under the radar. After that discussion, we found that spyware costs were a significant driver of operational overhead across the board.
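The ticket search he describes is easy to reproduce against any exported ticket data. Here is a minimal sketch, assuming tickets exported as a list of dictionaries; the field names and sample records are hypothetical.

```python
# Hypothetical help-desk export: count what share of desktop tickets
# mention "spyware" anywhere in the summary text.
tickets = [
    {"id": 1, "summary": "Browser hijacked, popups everywhere - spyware?"},
    {"id": 2, "summary": "Password reset request"},
    {"id": 3, "summary": "Spyware slowing down laptop"},
    {"id": 4, "summary": "Printer offline"},
    {"id": 5, "summary": "Machine rebuilt after spyware infection"},
]

# Case-insensitive keyword match, mirroring the ad hoc search described above.
spyware_tickets = [t for t in tickets if "spyware" in t["summary"].lower()]
share = len(spyware_tickets) / len(tickets)
print(f"{share:.0%} of tickets mention spyware")  # 60% in this toy data
```

A crude keyword match like this will miss tickets that never use the word, so it gives a lower bound; even so, it was enough to surface the problem in the anecdote above.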

Q: What’s the cost of spyware and how should network execs protect the network against harm from it?

A: At first, the cost of spyware was huge. For many companies the browser is an application delivery platform for Web-based applications. Spyware can cripple the browser, and that affects enterprise apps such as ERP, CRM or OWA (Outlook Web Access). So clearly it was not something that could be ignored. But, at first, there were neither detection tools nor removal tools. In effect, the only remedy was to rebuild each affected desktop or laptop. Even with imaging software that could take 20 minutes or more. Remote users had to do this over the network, which took even longer. So the cost varied very widely, depending on the level of automation.

Companies with software distribution systems could push patches and operating system images out at a lower cost. But if you were caught with a manual process, you were toast. In the meantime, the situation has improved. The major anti-virus vendors started blocking and tackling spyware, so a more proactive solution became available. Spyware also drove adoption of patch management and software distribution systems.

Q: Wireless is another security concern. How far has wireless security advanced over the past year or so, and what else needs to be done?

A: Wireless security has advanced quite a bit. With the ratification of the 802.11i standard in June of 2004 we saw the emergence of interoperable implementations, such as WPA [Wi-Fi Protected Access] and more recently WPA2. The latter supports the AES encryption algorithm, which fulfills FIPS [Federal Information Processing Standard] requirements and allows adoption by the federal government and other high-security organizations.

The real problem is that there is still a big tradeoff between deployment simplicity and security. Simply stated, the most secure deployment is to have a TLS [Transport Layer Security] certificate on each client. This ensures that a stolen password doesn’t compromise the entire network. But this is a very complex deployment mode and is not for the “masses.”

The masses, on the other hand, still have old access points and routers that may not be upgradeable to these new standards. WEP is so thoroughly broken that it is almost trivial to crack, especially with the use of pre-hashed dictionary attacks. So we still have to get a generation of legacy equipment upgraded.

Enterprise environments can use the new WPA2 standard and build relatively secure networks. However, security is always a moving target. Eventually, flaws will be discovered and a new approach will be necessary. So the question then becomes: Is your wireless infrastructure flexible and software-upgradeable, or will you have to rip it all out and start again from scratch?

Q: Why should attendees choose the application and content security track in particular?

A: This track will provide a wealth of information on emerging threats, security paradigms and best practices. Nemertes’ IT benchmarking methodology helps us select the topics that are front-of-mind for enterprise IT: how best to protect critical content, applications and data in the face of ever-increasing threats and vastly increased stakes.

Q: What will be the key take-aways for attendees to the application and content security track?

A: Attendees to this track will come up to speed on the critical issues involved in enabling application security; uncover best practices; and gain insight into next-generation technologies enabling effective application and content security.