The National Science Foundation is furthering its search for highly interpretive technology to help all manner of government and private researchers evaluate the massive amounts of data generated in health care, computational biology, security and other fields.
In a nutshell, the NSF said it is seeking mathematical and computational algorithms and techniques that will fundamentally improve the law enforcement and intelligence communities' ability to transform large, often streaming data sets (e-mails, images, numbers and sounds) into a form that better supports visualization and analytic reasoning. To enable visual-based data exploration, the agency stated, it is necessary to discover new algorithms that represent and transform all types of digital data into mathematical formulations and computational models that subsequently enable efficient, effective visualization and analytic reasoning techniques.
Analyzing these massive and complex data sets is essential to achieve new discoveries, but extremely difficult, the NSF stated.
The potentially controversial dark side of this research is that the NSF is working hand-in-hand with the Department of Homeland Security (DHS) to develop some of this technology. Obviously, interpreting data from potential terrorist organizations and the like falls within DHS's purview, but when health care and biological data interpretation appear in the same sentence as DHS, hackles go up.
This latest round of research is part of a five-year, $3 million project known as the Foundation on Data Analysis and Visual Analytics (FODAVA) research initiative, led by the Georgia Institute of Technology. In August, DHS and NSF tapped Georgia Tech to lead the effort to establish FODAVA as a distinct research field and build a community of top-quality researchers that will collaborate through research workshops and conferences, industry engagement and technology transfer.
One example of the FODAVA programs is Jigsaw, a Georgia Tech system that helps analysts better assess, analyze and make sense of large document collections. The system provides multiple coordinated views that show connections between entities extracted from a document collection, the university said.
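Georgia Tech hasn't published Jigsaw's internals here, but the core idea of linking entities that co-occur in documents can be sketched in a few lines of Python. The documents and entity names below are invented for illustration; a real system would run NLP entity extraction over the raw text first.

```python
from collections import Counter
from itertools import combinations

# Toy document collection: each document is represented by the set of
# entities (people, places, organizations) extracted from its text.
# These names are hypothetical, not from the Jigsaw project.
docs = [
    {"Acme Corp", "J. Smith", "Atlanta"},
    {"J. Smith", "Atlanta", "Beta LLC"},
    {"Acme Corp", "Beta LLC"},
]

# Count how often each pair of entities appears in the same document.
# These pair counts are the connections a coordinated view would draw.
edges = Counter()
for entities in docs:
    for pair in combinations(sorted(entities), 2):
        edges[pair] += 1

for (a, b), n in edges.most_common():
    print(f"{a} -- {b}: {n} shared document(s)")
```

Running this prints "Atlanta -- J. Smith: 2 shared document(s)" first, since that pair co-occurs most often; a visual-analytics front end would render the same counts as a weighted link graph alongside the document list.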
Interpreting data is at the root of recently announced artificial intelligence (AI) research. The Defense Advanced Research Projects Agency (DARPA) said it wants to develop software known as a Machine Reading Program (MRP) that can capture knowledge from naturally occurring text and transform it into the formal representations used by AI reasoning systems.
For example, all of the text on the World Wide Web would become available for automating the monitoring and analysis of technological and political activities of nations; the plans, rhetoric and activities of transnational organizations; and scientific discovery within various disciplines, DARPA stated. As digitized text from library books worldwide becomes available, new avenues of cultural awareness and historical research will open up. With truly general techniques for handling the incompatibilities between natural language and the language of formal inference, a system could, in principle, be constructed that maps between natural and formal languages in any subject domain, DARPA said.
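DARPA doesn't describe how an MRP would actually map text to formal representations, but the gap it wants to close can be illustrated with a deliberately naive Python sketch: a single hand-written pattern that turns one sentence shape into a predicate triple a reasoner could consume. Everything here (the pattern, the verbs, the sentences) is a made-up toy, orders of magnitude simpler than what the program envisions.

```python
import re

# Toy "machine reading": convert simple subject-verb-object sentences
# into formal (predicate, subject, object) triples. Only sentences of
# the form "X acquired Y." or "X founded Y." are handled; real machine
# reading must cope with arbitrary naturally occurring text.
PATTERN = re.compile(
    r"^(?P<subj>[\w ]+?) (?P<verb>acquired|founded) (?P<obj>[\w ]+)\.$"
)

def read(sentence):
    """Return a (predicate, subject, object) triple, or None if no match."""
    m = PATTERN.match(sentence)
    if not m:
        return None
    return (m.group("verb"), m.group("subj"), m.group("obj"))

print(read("Acme Corp acquired Beta LLC."))
# An AI reasoning system would consume the resulting triple
# ('acquired', 'Acme Corp', 'Beta LLC') rather than the raw sentence.
```

The hard part of the research is precisely that no fixed set of patterns like this scales to open text; the sketch only shows what the target output of a machine reader looks like.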