IT security: What's hot, what's not

From the most innovative technologies to the smartest strategies, four security experts share their insights in a roundtable chat

Data-loss prevention, fingerprint readers, OpenID, reputation services -- diverse though they might be, these security technologies are top attention-getters among the four gurus we recently gathered (virtually, that is) for a roundtable chat about enterprise security. Our experts -- Network World columnists and bloggers Andreas Antonopoulos, Jamey Heary, Dave Kearns and Noah Schiffman -- also throw out their opinions on whether we'll ever break the patch-hack-patch cycle, the true meaning of defense-in-depth enterprise security, and how social networking might affect identity management.

Moderator -- Beth: Hello and welcome. We're going to dive right into our first question, so here goes: What is the most innovative security technology you've seen in the last year or so, and why?


Noah_Schiffman: DLP [data-loss prevention] is one of the better technologies I've seen this year, as security measures need to be instituted internally now more than ever.

Dave_Kearns: Ubiquitous fingerprint readers (for example, eikon). I've been following biometrics and specifically, fingerprint technology for the past 10 to 12 years. Each time I think it's about to take off, the sizzle turns to a fizzle once again. But now the time might be right. Not that biometrics are any more acceptable (even though they are), nor that the accuracy has improved (even though it has), but because the right application has come along.

Andreas_Antonopoulos: The most innovative security technology is the development and quite broad adoption of OpenID -- an open, decentralized, free framework for user-centric digital identity. What's so interesting about OpenID is that it is completely decentralized and allows an individual to maintain one or more independent IDs, of varying levels of security. It allows owners of Web-based applications, services and sites to authenticate the users without forcing them to create yet another user ID.
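
[Editor's note: To make the flow Andreas describes concrete, here is a minimal Python sketch of what an OpenID relying party does: the user hands over an identifier URL, the site discovers the identity provider behind it and redirects the browser there for authentication. The discovery step and provider endpoint below are illustrative stand-ins, not a real library's API.]

```python
from urllib.parse import urlencode

def discover_provider(identifier_url: str) -> str:
    # Hypothetical discovery: a real relying party fetches the identifier
    # URL and reads the provider endpoint from it; we hard-code a stand-in.
    return "https://provider.example.com/auth"

def build_auth_redirect(identifier_url: str, return_to: str) -> str:
    # Build the redirect that delegates authentication to the provider.
    # How the provider verifies the user (password, token, fingerprint)
    # is its own business -- the framework doesn't dictate strength.
    endpoint = discover_provider(identifier_url)
    params = {
        "openid.mode": "checkid_setup",
        "openid.identity": identifier_url,
        "openid.return_to": return_to,
    }
    return endpoint + "?" + urlencode(params)

print(build_auth_redirect("https://alice.example.org/",
                          "https://site.example.net/verify"))
```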

In fact, I can see a potential synergy between OpenID back ends and Dave's ubiquitous fingerprint readers.

Dave_Kearns: You're the first person I've heard mention OpenID and security in the same sentence in a positive way -- is everyone else wrong?

Jamey_Heary: I'm the most excited about the explosion of security companies that are integrating reputation-based controls into their products. Reputation scoring represents the evolution of the traditional whitelist/blacklist approach used in URL filtering and antispam solutions. Classification based on reputation provides you with far more visibility, granularity and control over your traffic-security policies. As reputation matures, I hope to see it moving into other security products like firewalls, [intrusion-prevention systems] and host security clients. The power of being able to classify and control traffic based on its reputation should turn out to be a game-changer for security.
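
[Editor's note: A minimal sketch of the shift Jamey describes, with invented signal names, weights and thresholds: instead of a binary allow/block list, each source earns a continuous score from several signals, and policy becomes a graduated threshold.]

```python
# Toy reputation scorer; every weight and threshold here is an
# illustrative assumption, not any vendor's actual model.
SIGNAL_WEIGHTS = {
    "domain_age": 0.3,      # older, established domains score better
    "spam_reports": -0.5,   # complaints drag the score down
    "valid_tls": 0.2,
}

def reputation_score(signals: dict) -> float:
    # Combine normalized signals (each in [0, 1]) into one score.
    return sum(SIGNAL_WEIGHTS[name] * value
               for name, value in signals.items() if name in SIGNAL_WEIGHTS)

def policy(score: float) -> str:
    # Graduated control instead of a binary whitelist/blacklist.
    if score >= 0.3:
        return "allow"
    if score >= 0.0:
        return "allow-with-scanning"
    return "block"

score = reputation_score({"domain_age": 0.2, "spam_reports": 0.8, "valid_tls": 1.0})
print(round(score, 2), policy(score))  # -0.14 block
```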

Andreas_Antonopoulos: I think people see OpenID as a means of authentication rather than a framework. The nice thing about it is that it is a framework that can provide really loose authentication and really strong authentication depending on the back end. It does not inherently tie into a specific level or strength of authentication, so its biggest weakness is also its strength.

Noah_Schiffman: But [OpenID's] fate ultimately will be determined by the scale at which it is adopted.

Jamey_Heary: I see a synergy between OpenID and reputation as well. And adding in a reputation score that is earned by the ID holder can give you another type of authorization criteria.

Andreas_Antonopoulos: Exactly! Great synergy.

Dave_Kearns: There are still the problems of untrusted OPs [OpenID identity providers] -- and phishing. As to reputation -- there's great promise but still no tried-and-true way to gather the reputation data. Information Cards [the identification specification developed by Microsoft], to my mind, offers much more promise than OpenID.

Jamey_Heary: And there's the problem of slander with reputation. There are several reputation databases out there for URLs and spam, but not yet for individual identity. However, if a few social community sites like Second Life, Facebook and so forth picked up [on the idea], a reputation database for individuals could be created very quickly.

Andreas_Antonopoulos: I see OpenID as the type of solution that succeeds where more complex solutions failed -- that is, LDAP vs. X.500, SMTP vs. X.400, TCP/IP vs. OSI.

Jamey, it has been difficult to build reputation around identity because there was no [Uniform Resource Identifier] for it. OpenID offers that URI.

Jamey_Heary: Right, OpenID could take the place of my other idea, which is a digital certificate for everyone.

Moderator -- Julie: Let's talk about the fingerprint readers Dave mentioned early on in this discussion. Anyone else excited about those?

Andreas_Antonopoulos: I'm very excited. In fact, the first thing I thought when I saw the form factor of the fingerprint readers on laptops was 'Hey! You could put that on the bottom edge of a cell phone.'

Dave_Kearns: Or on a smartcard.

Jamey_Heary: For biometrics, I prefer voice analyzers, given that most devices already have microphones. The problem with fingerprint readers is that they are not yet ubiquitous enough.

Noah_Schiffman: I'm an expert on biometric authentication techniques; I've designed many biometric authentication systems and worked on biometrics research. One of the problems with fingerprint authentication is the prevalence of the 2-D imagery it relies on. True biometric authentication would require full 3-D analysis of all the fingerprint ridges, in terms of depth and width.

Andreas_Antonopoulos: We've adopted fingerprint authentication as a second factor on all our machines. We've found an interesting gender bias. They don't work as well for women, in general. Slimmer fingers create smaller "pad" surfaces to scan. Add to that the use of hand creams, which is arguably more common for women, and they have many more errors. Noah, does 3-D imaging solve the gender bias problem?

Noah_Schiffman: It depends on a pressed vs. swiped read, but in general, lotion, grease or anything on top of the finger is going to produce invalid results.

Moderator -- Julie: How do you swipe a fingerprint?

Noah_Schiffman: Swiping is the motion of moving the finger across a thin scanning aperture.

Andreas_Antonopoulos: A swiped read has the advantage of form factor, but suffers from this issue. So, ladies: Dry hands or successful authentication -- your choice.

Noah_Schiffman: Totally right.

Andreas_Antonopoulos: Still though, I'd like to see fingerprint readers on cell phones. Then you can make the cell phone carry a soft token.

Dave_Kearns: I'm told that cell-phone geometry changes too often to add fingerprint readers to most of them.

Moderator -- Julie: We still need to get feedback on DLP, which Noah brought up early on, and reputation services as hot security technologies. Thoughts on DLP?

Andreas_Antonopoulos: DLP is great, but more as awareness-training than true barrier. Determined users can get stuff out, but it certainly helps educate those who accidentally violate policies. It also shows IT where they need to provide better mechanisms for data transfer.

Jamey_Heary: DLP is having a hard time taking off, given its complexity. There is still not a good solution out there.

Noah_Schiffman: As internal threats and corporate data loss have surpassed the external threats posed by hackers, DLP has become very relevant and important. But, yeah, I agree DLP has been slow to take off.

Moderator -- Julie: Does DLP technology work today? What needs to improve?

Andreas_Antonopoulos: It's easy enough to do the basic pattern scanning -- that is, 999-99-9999 is an SSN. But . . . figuring out that the minutes from the board meeting about the fire sale of the company are sensitive? That's much harder.
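
[Editor's note: Andreas' point is easy to show in code. The tractable half of DLP is a pattern scan like the minimal sketch below; the hard half -- recognizing that free-form board minutes are sensitive -- has no such regex.]

```python
import re

# Basic DLP pattern scan: flag strings shaped like U.S. SSNs (999-99-9999).
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def scan_for_ssns(text):
    # Catches the structured, well-formed cases -- and nothing else.
    return SSN_PATTERN.findall(text)

print(scan_for_ssns("Employee 123-45-6789 reviewed the fire-sale minutes."))
# ['123-45-6789'] -- the minutes themselves raise no flag at all.
```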

Jamey_Heary: First, companies have to understand the type of data they have. Then they have to figure out how to classify that data. Today this is mostly a manual, very time-intensive process.

Andreas_Antonopoulos: Right! Most DLP discussions start with " . . . and then we stop it from leaking," but the real problem is finding and classifying. If you already know what is sensitive and where it is, it's easier.

Noah_Schiffman: Regardless of the security systems in place, the thing that needs to improve is the constant weakest link: the end user. Risk assessment, data classification and other forms of data valuation need to be in place before DLP can be effective.

Moderator -- Julie: DLP: It's the old structured vs. unstructured data issue -- and this won't be solvable until we change human nature.

Dave_Kearns: Correct, Julie.

Jamey_Heary: We need a DLP-classification engine.

Andreas_Antonopoulos: Which is why companies that have intellectual property in document and content management and natural-language processing could end up grabbing the DLP market.

Noah_Schiffman: DLP classification currently is extremely difficult, even for the government.

Dave_Kearns: Everything is difficult for the government . . .

Noah_Schiffman: As a DoD consultant, I've seen this many times. The number of classified documents is unknown because there is no requirement to account for most of them; however, some estimates put their numbers in the hundreds of millions.

Andreas_Antonopoulos: In a way, we are replicating the same approach we took in the early days of spam -- signatures and grep -- and it only goes so far. Where's the Bayesian DLP? Where's the context-aware DLP?
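
[Editor's note: A sketch of what a "Bayesian DLP" in Andreas' sense might look like, in the same spirit as Bayesian spam filtering; the labels, training documents and whitespace tokenizer below are all invented for illustration.]

```python
import math
from collections import Counter

def train(docs):
    # Count word frequencies per class from pre-labeled documents.
    counts = {"sensitive": Counter(), "public": Counter()}
    totals = {"sensitive": 0, "public": 0}
    for label, text in docs:
        words = text.lower().split()
        counts[label].update(words)
        totals[label] += len(words)
    return counts, totals

def classify(text, counts, totals):
    # Naive Bayes with uniform priors and add-one smoothing: pick the
    # class whose word model gives the text the higher log-probability.
    vocab = len(set(counts["sensitive"]) | set(counts["public"]))
    scores = {}
    for label in counts:
        scores[label] = sum(
            math.log((counts[label][w] + 1) / (totals[label] + vocab))
            for w in text.lower().split())
    return max(scores, key=scores.get)

docs = [("sensitive", "board minutes acquisition fire sale confidential"),
        ("public", "company picnic schedule parking lot announcement")]
counts, totals = train(docs)
print(classify("draft minutes of the acquisition discussion", counts, totals))
# 'sensitive'
```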

Moderator -- Julie: Because Bayesian has worked so well for spam. Which brings us to reputation services -- your thoughts?

Andreas_Antonopoulos: Reputation is great. It's dynamic and adaptive, and can be used in many different areas of security.

Moderator -- Julie: Is it easy to fake a good reputation, or to classify some domain as bad when it isn't?

Noah_Schiffman: Yeah, making a blacklist into a whitelist or vice versa is not difficult for a talented hacker.

Dave_Kearns: Reputation would be great if there were agreement on the meaning, agreement on aggregation, agreement on 'edit-ability,' and so forth.

Andreas_Antonopoulos: Or if not agreement, then at least common protocols for describing those attributes so that you can exchange them.

Jamey_Heary: I haven't seen these issues. Dave, can you elaborate?

Dave_Kearns: Well, Jamey, show me where reputation data is aggregated, and who gets to edit it.

Andreas_Antonopoulos: Not if the reputation is distributed and not centralized.

Jamey_Heary: Today each company builds and controls its own reputation database. We need a public reputation infrastructure, similar to what we have with PKI today.

Andreas_Antonopoulos: Yes, we need standards (XML) for describing and exchanging reputation data -- federated reputation, basically.
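
[Editor's note: A sketch of what an exchangeable reputation record could look like. The schema below is invented for illustration -- no such standard existed at the time of this chat -- but the point is that subject, scorer and score become explicit fields any party can parse.]

```python
import xml.etree.ElementTree as ET

# Hypothetical federated-reputation record; element names are assumptions.
record = ET.Element("reputation")
ET.SubElement(record, "subject").text = "https://alice.example.org/"
ET.SubElement(record, "scorer").text = "rep.provider.example.com"
ET.SubElement(record, "score", domain="email").text = "0.82"

xml_bytes = ET.tostring(record)
print(xml_bytes.decode())

# Any consuming product can parse the same record back out:
parsed = ET.fromstring(xml_bytes)
print(parsed.findtext("subject"), parsed.find("score").get("domain"))
```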

Jamey_Heary: Exactly.

Andreas_Antonopoulos: Funnily enough, it's [peer-to-peer] where you see that developing.

Jamey_Heary: I'd like to see reputation brought into firewalls, IPS and host security clients as well.

Andreas_Antonopoulos: Well, we should separate three concepts: reputation scoring, reputation storing and reputation users; and all three should be application independent.
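
[Editor's note: One way to read Andreas' separation, sketched as hypothetical Python interfaces: scoring, storage and consumption each sit behind their own contract, so a firewall or mail filter can consume scores without knowing how they were computed or where they live. All names below are invented.]

```python
from typing import Protocol

class ReputationScorer(Protocol):
    def score(self, subject: str) -> float: ...

class ReputationStore(Protocol):
    def put(self, subject: str, score: float) -> None: ...
    def get(self, subject: str) -> float: ...

class InMemoryStore:
    # Simplest possible store; a real one could just as well be distributed.
    def __init__(self):
        self._scores = {}
    def put(self, subject, score):
        self._scores[subject] = score
    def get(self, subject):
        return self._scores.get(subject, 0.0)

def firewall_allows(store, src):
    # A "reputation user" depends only on the store's contract,
    # never on how the score was produced.
    return store.get(src) >= 0.5

store = InMemoryStore()
store.put("203.0.113.7", 0.9)
print(firewall_allows(store, "203.0.113.7"))  # True
```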

Noah_Schiffman: That sounds pretty complex.

Moderator -- Julie: OK, time is short. Let's move on to Question 2. What do you think is the ultimate solution to end the patch-hack-patch cycle that is the cornerstone of today's enterprise security?

Andreas_Antonopoulos: The biggest problem with the patch-hack-patch cycle is that patching is not an issue of scale (it's hard to patch lots of computers) but one of unintended consequences (it's dangerous to patch critical computers). As a result, companies quite happily bulk-patch all their desktops but leave critical servers unpatched while they exhaustively test the patches. An enterprise has to balance the risk of a known exploit in the wild vs. the possibility of a conflict causing a critical server to crash. Since many exploits can be mitigated through other means (perimeter firewalls, application firewalls, proxies, patch proxies), the balance almost always leans against patching.

Virtualization allows you to do high-fidelity testing on a clone. High-fidelity testing means that the clone is indistinguishable in every way from the original, so if the patch works on the clone, it will work on the critical server too. Before virtualization, you could do that only by building a complete replica of a production environment, and even servers from the same batch might have different chipsets, different BIOS, different [network interface cards]. Accurate testing was difficult and costly.
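
[Editor's note: The balance Andreas describes can be made concrete with a toy expected-loss comparison. Every probability and dollar figure below is an invented assumption, purely for illustration.]

```python
# "Patch now" vs. "test first" for a critical server, as expected loss.
p_exploit_per_week = 0.002    # chance a mitigated, firewalled server is hit
loss_if_exploited = 500_000   # cost of a compromise
p_patch_breaks = 0.05         # chance an untested patch crashes the server
loss_if_crashed = 200_000     # cost of critical-server downtime
weeks_of_testing = 3          # exposure window while the patch is validated

risk_patch_now = p_patch_breaks * loss_if_crashed
risk_test_first = weeks_of_testing * p_exploit_per_week * loss_if_exploited

print(f"patch now:  expected loss ${risk_patch_now:,.0f}")   # $10,000
print(f"test first: expected loss ${risk_test_first:,.0f}")  # $3,000
# With other mitigations shrinking the exploit probability, the balance
# leans against immediate patching -- which is Andreas' point.
```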
