AMiloration of security: Milo and future hacking

* Virtual reality, such as in Second Life, could help in other areas of real life, but use caution.

Every year, the Master of Science in Information Assurance (MSIA) program at Norwich University hosts a three-day Graduate Security Conference for our graduating classes. We always have a plenary session with a distinguished keynote speaker; this year we were honored to welcome well-known antimalware researcher Dr. Richard Ford, Research Professor at the Center for Information Assurance of the Florida Institute of Technology. Dr. Ford spoke about unintended consequences in security in a riveting and highly stimulating presentation which, at my request, included no PowerPoint slides.

One example Ford brought up in his lecture is Milo, a project at Microsoft to build an interactive artificial-intelligence avatar running on the experimental Project Natal platform for the Xbox 360.

In a five-minute lecture and demonstration, we see a young woman interacting with the avatar of a young boy in a virtual world. The avatar not only displays emotional responses through its face, body and voice, but is represented as recognizing its interlocutor's emotions by analyzing visual input from the system's camera and the human being's voice patterns. The avatar also represents internal emotional states through its generated movements and voice. The interface even allows a representation of data transfer between our world and the electronic world (specifically, hand motions affecting virtual water, and transfer of the content of a piece of paper into a virtual paper on the other side of the digital barrier).

The system is currently being applied not only to games but also to the kind of device-free, gesture-operated user interface (grasp an icon, pull, expand, move aside) illustrated in a demonstration of a different system at the CeBIT exposition in 2008, and familiar from science fiction movies such as "Minority Report".

In discussion after Ford's talk, I pointed out that there are fundamental and fascinating issues of security inherent in applications of such a system. Let's start by imagining some of the additional applications of artificial intelligence with this degree of interactivity:

• The avatars and child users could form strong emotional bonds that could be helpful in encouraging learning and prosocial behavior.

• It could be used for interactive movies of enormous emotional vibrancy, going beyond the power of the synthetic actress in the movie "S1M0NE."

• News organizations could define virtual news anchors and perhaps interviewers who would charm viewers into strong loyalty for their programs and networks.

• Advanced virtual therapists (ELIZA on steroids) could provide inexpensive, pervasive support for mentally ill people, including social modeling much as Second Life interactions are currently being studied as a method of improving social skills for participants.

• Intelligent agents (virtual butlers) with warm, supportive and responsive personalities could become important sources of support for disabled or elderly people.

• Virtual personalities could be intermediaries for rapid response in emergencies, augmenting the capabilities of 9-1-1 networks with instant response, infinite patience and calming personalities.
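The "ELIZA on steroids" comparison refers to Joseph Weizenbaum's 1966 chatbot, which simulated a Rogerian therapist with simple keyword-driven pattern matching and reflective responses. A minimal sketch of that technique (the rules below are illustrative inventions; the real ELIZA used a much richer keyword-ranking and transformation grammar):

```python
import re

# A few illustrative pattern/response rules in the ELIZA style.
# Each rule pairs a regex with a template that reflects the user's
# own words back as a question or prompt.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
]
DEFAULT = "Please go on."  # fallback when no rule matches


def respond(utterance: str) -> str:
    """Return a canned reflection of the user's statement."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return DEFAULT


if __name__ == "__main__":
    print(respond("I feel anxious"))   # Why do you feel anxious?
    print(respond("hello there"))      # Please go on.
```

The point of the sketch is how little machinery is involved: everything that makes a Milo-like therapist compelling (emotion recognition, voice, memory, a face) would be layered on top of, and be far more complex than, this conversational core.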

Sound wonderful?

It won't be so wonderful if the designers and engineers fail to integrate security considerations into these systems from the very start. Using the same sequence of bullet points as those presented above, here are some nightmare scenarios involving poorly secured, vulnerable Milo-like systems:

• A malfunctioning or subverted avatar could deeply pain a child who has cathected on the avatar.

• The Milo-like avatars could be used to create child pornography at a level of detail and emotional significance that might have significant and dangerous effects on pedophiles.

• Criminal hackers might break into communications interfaces between avatars and users, causing havoc with theft of confidential information, distortions, introduction of offensive materials, and denial of service.

• Freed from the constraints of even the apparently minimal moral standards of some TV personalities, news organizations with a political agenda could be completely untrammeled by issues of accuracy, fairness or completeness in pursuit of propaganda goals by dictating the performance of even more obedient virtual employees.

• Virtual psychotherapists under the control of criminals and pranksters could play havoc with the psychology of disturbed patients, including incitement to self-hurt or violence.

• Malfunctioning virtual butlers could become the source of living nightmares for helpless disabled people at the mercy of the hackers playing with their lives through code and parameter modifications.

• Virtual emergency operators could be led astray by terrorists who could tamper with the operations of the artificial intelligences, leading to dropped calls, misdirected responders and catastrophic consequences in real emergencies.

I'm not saying that the technology is bad, shouldn't be pursued or is the agent of demons. I'm saying that we had better be designing security in from the get-go. I just don't want to end up with HAL 9000 from Arthur C. Clarke's "2001: A Space Odyssey" in our living rooms.

* * *

Join me online for three courses in July and August 2009 under the auspices of Security University. We will be meeting via conference call on Saturdays and Sundays for six hours each day, and then for three hours in the evenings of Monday through Thursday. The courses are "Introduction to IA for Non-Technical Managers" (July 18-23); "Management of IA" (Aug. 1-6); and "Cyberlaw for IA Professionals" (Aug. 8-13). Each course will have the lectures and discussions recorded and available for download – and there will be a dedicated discussion group online for participants to discuss points and questions. See you online!


Copyright © 2009 IDG Communications, Inc.
