Two researchers with BAE Systems’ Adaptive Reasoning Technologies Group have taken home a $25,000 prize for developing an algorithm that can help detect who's trustworthy and who isn't.
The algorithm – known as JEDI MIND – was developed as part of a crowdsourcing challenge among nearly 40 competitors, backed by the Office of the Director of National Intelligence and its Intelligence Advanced Research Projects Activity (IARPA) group.
JEDI MIND, which stands for “Joint Estimation of Deception Intent via Multisource Integration of Neuropsychological Discriminators,” uses a combination of what IARPA called innovative statistical techniques to improve “trustworthiness” predictions by approximately 15% over the baseline analysis.
The BAE researchers, Troy Lau and Scott Kuzdeba, found that someone’s heart rate and reaction time were among the most useful signals for predicting how likely their partner was to keep a promise. The team’s combination of focused expertise and broader interdisciplinary interests helped them address the complexities of the challenge: while both have experience with computational neuroscience, Lau is a Ph.D. physicist with a background in data mining and finance, and Kuzdeba is a research engineer with experience in statistical learning, various engineering applications, and economics, according to IARPA.
Predicting one person’s trustworthiness from another’s signals is a difficult task, IARPA stated, and the Investigating Novel Statistical Techniques to Identify Neurophysiological Correlates of Trustworthiness (INSTINCT) Challenge demonstrated that fact.
The INSTINCT challenge asked researchers to develop algorithms that improve predictions of trustworthiness, using neural, physiological, and behavioral data recorded during experiments in which volunteers made high-stakes promises and chose whether or not to keep them.
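IARPA and BAE have not published the JEDI MIND internals, but the general approach the challenge asked for – combining physiological signals such as heart rate and reaction time into a single trustworthiness prediction – can be sketched with a toy example. Everything below is illustrative: the data is synthetic, the effect sizes are invented, and the combiner is a plain logistic regression rather than whatever the winning algorithm actually used.

```python
import math
import random

random.seed(0)

def make_trial():
    # Hypothetical synthetic trial: a partner either keeps (1) or breaks (0)
    # a promise. We assume, purely for illustration, that promise-breakers
    # show slightly elevated heart rate and slower reaction times -- the two
    # signals the BAE team reported as among the most predictive.
    kept = random.random() < 0.5
    heart_rate = random.gauss(70 if kept else 78, 5)      # beats per minute
    reaction_ms = random.gauss(350 if kept else 420, 60)  # milliseconds
    return (heart_rate, reaction_ms), int(kept)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(trials, epochs=200, lr=0.05):
    # Logistic regression fitted by per-sample gradient descent.
    # Features are standardized so one learning rate suits both signals.
    xs = [x for x, _ in trials]
    cols = list(zip(*xs))
    means = [sum(c) / len(c) for c in cols]
    stds = [(sum((v - m) ** 2 for v in c) / len(c)) ** 0.5
            for c, m in zip(cols, means)]
    norm = lambda x: [(v - m) / s for v, m, s in zip(x, means, stds)]
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in trials:
            z = norm(x)
            p = sigmoid(sum(wi * zi for wi, zi in zip(w, z)) + b)
            err = p - y  # gradient of the log-loss w.r.t. the logit
            w = [wi - lr * err * zi for wi, zi in zip(w, z)]
            b -= lr * err
    # Return a predictor: probability that the partner keeps the promise.
    return lambda x: sigmoid(sum(wi * zi for wi, zi in zip(w, norm(x))) + b)

trials = [make_trial() for _ in range(500)]
predict = train(trials)
accuracy = sum((predict(x) > 0.5) == y for x, y in trials) / len(trials)
print(f"training accuracy: {accuracy:.2f}")
```

Even this crude two-feature model separates the synthetic promise-keepers from the promise-breakers well above chance, which is the same shape of result the challenge measured: a multi-signal combiner beating a baseline predictor.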
The IARPA challenge specifically sought software algorithms that could detect, measure, and validate “useful” trustworthiness signals in order to more accurately assess another’s trustworthiness in a particular context, IARPA stated. Improving the accuracy of judgments about who can be trusted, and under what conditions, could have profound implications not just for the Intelligence Community but for society in general, the group stated.
There are many potential applications for analytic and algorithmic techniques such as those underlying JEDI MIND, ranging from security clearance processes to gauging the trustworthiness of intelligence agents, analysts, or potentially captured adversaries.