What are the fundamental limitations inherent in machine learning systems?
That’s the central question behind a potential new DARPA program known as the Fundamental Limits of Learning (Fun LoL), which, according to the agency, would address how the quest for the ultimate learning machine can be measured and tracked in a systematic and principled way.
“It’s not easy to put the intelligence in artificial intelligence. Current machine learning techniques generally rely on huge amounts of training data, vast computational resources, and a time-consuming trial and error methodology. Even then, the process typically results in learned concepts that aren’t easily generalized to solve related problems or that can’t be leveraged to learn more complex concepts. The process of advancing machine learning could no doubt go more efficiently—but how much so? To date, very little is known about the limits of what could be achieved for a given learning problem or even how such limits might be determined,” DARPA stated.
With Fun LoL, DARPA is looking for information about mathematical frameworks, architectures, and methods that would help answer questions such as:
- How many training examples are necessary to achieve a given accuracy? (e.g., would a training set with fewer than the 30 million moves that programmers provided to this year’s winning machine have sufficed to beat a Go grand champion? How do you know?)
- What are important trade-offs and their implications? (e.g., size, performance accuracy, processing power considerations)
- How “efficient” is a given learning algorithm for a given problem?
- How close does a learning algorithm’s expected achievable performance come to the theoretical limit?
- What are the effects of noise and error in the training data?
- What are the potential gains possible due to the statistical structure of the model generating the data?
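The first question, on sample complexity, can be illustrated empirically. The sketch below is an illustrative toy (not anything from the Fun LoL solicitation): it fits a one-parameter threshold classifier to synthetic, noisily labeled data and measures how test accuracy climbs as the training set grows — exactly the kind of accuracy-versus-examples curve whose theoretical limits the program asks about.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, noise=0.1):
    """Points on [0, 1] labeled by x > 0.5, with a fraction of labels flipped."""
    x = rng.uniform(0, 1, n)
    y = (x > 0.5).astype(int)
    flip = rng.uniform(0, 1, n) < noise
    y[flip] = 1 - y[flip]
    return x, y

def fit_threshold(x, y):
    """Estimate the decision threshold as the midpoint of the class means."""
    if y.min() == y.max():       # degenerate training set: one class only
        return 0.5
    return (x[y == 0].mean() + x[y == 1].mean()) / 2

def accuracy(theta, x, y):
    return float(np.mean((x > theta).astype(int) == y))

# Fixed held-out test set; training sets of increasing size
x_test, y_test = make_data(5000)
for n in [10, 100, 1000]:
    x_tr, y_tr = make_data(n)
    theta = fit_threshold(x_tr, y_tr)
    print(f"n={n:5d}  test accuracy={accuracy(theta, x_test, y_test):.3f}")
```

With 10% label noise, no learner can exceed roughly 90% test accuracy here no matter how many examples it sees — a concrete (if trivial) instance of a fundamental limit that is knowable in advance for this problem but unknown for most real learning tasks.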
“We’ve seen advances in machine learning and AI enabling computers to beat human champions playing games like Jeopardy, chess, and most recently Go, the ancient Chinese strategy game,” said Reza Ghanadan, DARPA program manager, in a statement. “What’s lacking, however, is a fundamental theoretical framework for understanding the relationships among data, tasks, resources, and measures of performance—elements that would allow us to more efficiently teach tasks to machines and allow them to generalize their existing knowledge to new situations. With Fun LoL we’re addressing how the quest for the ultimate learning machine can be measured and tracked in a systematic and principled way.”
As machine learning stands now, even a small change in a task often requires programmers to create an entirely new training process. “If you slightly tweak a few rules of the game Go, for example, the machine won’t be able to generalize from what it already knows. Programmers would need to start from scratch and reload a data set on the order of tens of millions of possible moves to account for the updated rules.”
There have been a number of recent research efforts to bolster machine learning. Google, for example, earlier this year announced the private beta of a new Cloud Machine Learning service that lets businesses create custom machine learning models. To do so, users work with data they have in Google's other cloud services. Cloud Machine Learning handles data ingestion and training, then uses the resulting machine learning model to make predictions.
Last year researchers with the Intelligence Advanced Research Projects Agency (IARPA) said their five-year program called Machine Intelligence from Cortical Networks (MICrONS) would offer participants a “unique opportunity to pose biological questions with the greatest potential to advance theories of neural computation and obtain answers through carefully planned experimentation and data analysis.”
“Over the course of the program, participants will use their improving understanding of the representations, transformations, and learning rules employed by the brain to create ever more capable neurally-derived machine learning algorithms. Ultimate computational goals for MICrONS include the ability to perform complex information processing tasks such as one-shot learning, unsupervised clustering, and scene parsing. Ultimately, as performers incorporate these insights into successive versions of the machine learning algorithms, they will devise solutions that can perform complex information processing tasks aiming towards human-like proficiency,” IARPA stated.
IARPA said that “despite significant progress in machine learning over the past few years, today’s state of the art algorithms are brittle and do not generalize well. In contrast, the brain is able to robustly separate and categorize signals in the presence of significant noise and non-linear transformations, and can extrapolate from single examples to entire classes of stimuli.”
IARPA said that the rate of effective knowledge transfer between neuroscience and machine learning has been slow because of differing scientific priorities, funding sources, knowledge repositories, and lexicons. As a result, very few of the ideas about neural computing that have emerged over the past few decades have been incorporated into modern machine learning algorithms.