Tips for evaluating security training, Part 1

Mar 23, 2004 | 3 mins

* How to go about evaluating security training

At a recent conference, Roger Quane of the National Security Agency in the U.S. Department of Defense presented a stimulating lecture on “Evaluation Activities: Management’s Nightmare or Dream Come True.”

Speaking at the March 2004 Annual Conference of the Federal Information Systems Security Educators’ Association (FISSEA) at the University of Maryland campus, Quane pointed out that evaluation metrics for training programs can be ranked by difficulty as follows:

* Reaction – how participants like the program.

* Learning – what the participants can show they have learned.

* Application – how the participants apply their knowledge in their work.

* Business impact – the effects on how the organization runs its operations.

* ROI – monetary measures of benefit divided by the costs required to achieve them.
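The ROI level at the top of this ranking can be sketched as a simple ratio, following the article's definition (benefit divided by cost). This is a minimal illustration; the dollar figures and function name are hypothetical, not values from Quane's talk.

```python
def training_roi(benefit: float, cost: float) -> float:
    """ROI per the definition above: monetary benefit
    divided by the cost required to achieve it."""
    if cost <= 0:
        raise ValueError("cost must be positive")
    return benefit / cost

# Hypothetical example: a program yielding $150,000 in avoided
# incident costs against a $100,000 training budget.
print(training_roi(150_000, 100_000))  # 1.5
```

A ratio above 1.0 would indicate the measured benefits exceeded the program's costs; the hard part, as the article notes, is producing credible monetary measures of benefit in the first place.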

The role of the manager is critical; the manager leads the project and is responsible for its success. To avoid conflict of interest, the manager should have someone else evaluate the program.

Such evaluations are often needed to see if the training program is justified or worthwhile; unfortunately, says Quane, sometimes managers are confronted with a stark choice between honesty and what may seem like professional survival. Regardless of the apparent danger, honesty is the only policy that makes sense. If a program hasn’t worked out, it’s important to say so and take the consequences (“falling on one’s sword” in Quane’s description). In reality, such honest self-appraisal is rarely treated as grounds for dismissal.

In general, Quane recommends that formal evaluations not be applied to projects that cost less than $100,000. In addition, the projects should have organizational visibility, must be needed for organizational success, should be relatively new, and should be requirements-driven (i.e., not simply done because everyone has to do it).
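Quane's criteria for when a formal evaluation is warranted can be read as a simple conjunction of conditions. The sketch below expresses them as a decision function; the parameter names and the all-criteria-must-hold reading are assumptions for illustration.

```python
def warrants_formal_evaluation(cost: float,
                               organizationally_visible: bool,
                               needed_for_success: bool,
                               relatively_new: bool,
                               requirements_driven: bool) -> bool:
    """Apply Quane's criteria: formal evaluation only for projects
    costing $100,000 or more that also meet the other conditions."""
    return (cost >= 100_000
            and organizationally_visible
            and needed_for_success
            and relatively_new
            and requirements_driven)

# Hypothetical example: a $250,000 mandated training rollout that is
# visible and new, but done only "because everyone has to do it"
# (not requirements-driven), would not warrant formal evaluation.
print(warrants_formal_evaluation(250_000, True, True, True, False))  # False
```

The design choice here is deliberate: a project failing any single criterion falls outside the scope Quane recommends for formal evaluation, which is why the conditions are combined with `and` rather than scored or weighted.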

Not every training project should be evaluated, says Quane, but it is essential that evaluations be carried out in sequence. First you have to collect information about participants' reactions, then you can study how well they learned, and only after that should you look at behavior in the workplace. Each of these levels is more complex and more expensive to measure than the previous one. Measuring the effects of training on security is more complex still and will be the subject of a separate article.

Reminder: Robert Gezelter, author of last Tuesday's Security Newsletter, will be presenting a session entitled "Internet Dial Tones & Firewalls: One Policy Does Not Fit All," at two Central Florida IEEE Computer Society chapter meetings this week: in Tampa on Wednesday and in Orlando on Thursday.