Security reality vs. feelings: Steinberger on Schneier

Friend and colleague Professor Ric Steinberger, CISSP, is back today with an interesting analysis of a security lecture he watched recently. What follows is Ric's work with minor edits.

* * *

Bruce Schneier recently presented an excellent brief lecture as part of the TED series. One of his core ideas is that there are two basic ways we think about security: reality and feeling. Reality refers to what we objectively know about the risks of a particular activity (for example, flying on a commercial aircraft). Feeling refers to how we feel about that activity (such as how worried we are that the plane could crash, be shot down or hijacked).

Schneier points out that as long as our feelings about security closely reflect the underlying security reality, we are able to make reasonable judgments and enact appropriate policies. He's speaking mostly about events of significant national concern (such as air travel, threats of terrorism, poisoning of municipal water supplies, and swine flu [H1N1]). A phrase he has popularized is "security theater": what happens when security policies are developed far more on the basis of the public's perception of security, i.e., its feelings, than on the actual risks. A good example of security theater is the occasional presence of armed National Guard troops at the nation's airports. The reality is that such a display does little to deter any actual terrorists, but it seems to reassure the public that their government is "concerned" and "responsive".

Consider the following four possibilities, where qualitative known risk (reality) and qualitative sense of risk (feelings) each vary from low to high:

Case   Reality   Feelings   Resulting Policies
====   =======   ========   ==================
 1     Low       Low        Appropriate
 2     High      High       Appropriate
 3     Low       High       Paranoid
 4     High      Low        Delusional

In cases 1 and 2, there's a good chance that because feelings reflect reality, the resulting security policies will be appropriate, accepted, and, to a large extent, followed:

• In case 1 (low risk), this means that the policy requirements are modest and the effects on people are tolerable.

• In case 2 (high risk), people understand the existence of serious risks, and the resulting strict policy is understood and accepted.

Things aren't so rational in the other two cases:

• In case 3, we have a kind of paranoia: The risk is objectively low, but the public's (or the organization's) assessment of risk is (wrongly) high. Example: In the summer of 1975, the movie Jaws is released. Beachgoers across the country are afraid to go in the water. The public has persuaded itself that everyone in the ocean is at serious risk, even though there were no more shark attacks than before the movie's release.

• In case 4, we have a kind of delusion: There is a very real, high level of serious risk, but it goes largely ignored because almost no one affected believes it. Example: Text messaging while driving. There is a documented, substantially greater risk of a serious accident when texting while driving, but large numbers of people continue to do so, falsely believing that the risk of their being involved in an accident is low.
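For readers who think in code, the two-by-two matrix above reduces to a simple mapping. The following Python sketch (the names Risk and policy_outcome are ours, for illustration only, not Schneier's or Steinberger's) encodes the four cases:

    from enum import Enum

    class Risk(Enum):
        LOW = "low"
        HIGH = "high"

    def policy_outcome(reality: Risk, feelings: Risk) -> str:
        # Cases 1 and 2: perception matches reality, so policy tends to be appropriate.
        if reality is feelings:
            return "appropriate"
        # Case 3: low real risk but high perceived risk yields paranoid policy.
        if feelings is Risk.HIGH:
            return "paranoid"
        # Case 4: high real risk but low perceived risk yields delusional policy.
        return "delusional"

    print(policy_outcome(Risk.LOW, Risk.HIGH))    # paranoid (the post-Jaws beachgoer)
    print(policy_outcome(Risk.HIGH, Risk.LOW))    # delusional (texting while driving)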

Problems occur whenever security reality and security feelings are not closely aligned. This can happen both at the national level and within commercial organizations. Consider a common case: The information assurance (IA) staff have concluded that certain risks are medium to high and have proposed policy changes and/or infrastructure changes to address them. Changing policy or procuring or upgrading infrastructure generally costs money and requires resources. Senior management may not be convinced this is necessary: their feeling is that the described risks are low. A disconnect has occurred.

What can IA staff do when this happens?

The first step is to recognize what may be going on: We think there's a medium-to-high risk, and they don't agree. Before moving on to what step two might be, IA needs to break out of the we/they framework and realize that everyone is on the same team, even though everyone doesn't always agree on what the next play should be. So let's assume that this conceptual block is overcome: IA updates its presentation and its language and engages management in the risk evaluation.

Step two is to recheck the risk assessment. If IA is going to promote its views as reality, there had better be some objective basis. Has a third-party assessment been performed? Has internal and/or external audit weighed in? Has the human resources group been consulted? Has the legal department been asked about the relevant regulations? Have prior security incidents been thoroughly understood? Has a more quantitative approach to risk assessment been considered? Do other companies in the same industry also assess the risk as medium to high?
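On that last point about a more quantitative approach: one measure taught in most security curricula, including CISSP preparation, is annualized loss expectancy, where ALE = SLE (single loss expectancy, the cost of one incident) x ARO (annualized rate of occurrence). A minimal sketch, with figures invented purely for illustration:

    def annualized_loss_expectancy(sle: float, aro: float) -> float:
        # ALE = SLE (dollars per incident) x ARO (expected incidents per year)
        return sle * aro

    # Hypothetical scenario: laptop theft costing $25,000 per incident,
    # expected to occur about 0.4 times per year.
    print(annualized_loss_expectancy(25_000, 0.4))    # 10000.0 dollars per year

An ALE figure like this lets IA state risk in management's language: dollars per year, directly comparable against the cost of a proposed countermeasure.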

Steps one and two are normally performed by IA staff, led by the chief information security officer (CISO) or nearest equivalent.

Step three concerns dealing with senior management's feelings by getting answers to these questions:

• Case A: Is the lack of positive response by management based on known lack of resources, primarily money? Or

• Case B: Is it based on a genuine lack of understanding of the underlying objective risks?

In case A, it may be that the best IA can hope for is a recognition by management that, when the financial condition improves, the organization will be better prepared to go forward with the security recommendations. Meanwhile, the security group should come up with the best, most affordable interim plan possible.

In case B, IA attempts to change management's perceptions (feelings) about security risks – one of the hardest tasks that a CISO faces. It requires patience, perseverance, understanding, a willingness to use management's language (e.g., assets, brand, compliance) instead of security language (e.g., denial of service, firewall, zombie), and the ability to cultivate alliances and even call in favors. There's obviously no guarantee of success, but by understanding the relationship between security reality and security feelings, IA staff can better assess the nature of the conflicts they face as they work to improve their organization’s information assurance posture.

[MK adds: in the area of social psychology involving studies of understanding (cognition), we refer to feelings as beliefs and affect. For more about social cognition, see the chapter on "Social Psychology and INFOSEC: Psycho-Social Factors in the Implementation of Information Security Policy" from the Computer Security Handbook, Fifth Edition.]

* * *

Ric Steinberger, CISSP, is an experienced network security consultant and a long-standing and highly respected instructor in Norwich University's MSIA program. He is also helping manage a company focused on iPhone applications.

