'Ex Machina,' here we come: A new algorithm helps computers learn the way we do

In a 'visual Turing test,' human judges couldn't tell them apart

[Image: Machine learning algorithms. Credit: Danqing Wang]

Machine learning is all about getting computers to "understand" new concepts, but it's still a pretty inefficient process, often requiring hundreds of examples for training. That may soon change, however, thanks to new research published on Friday.

Aiming to shorten that process and make it more like the way humans acquire and apply new knowledge, a team of researchers has developed what they call a Bayesian Program Learning (BPL) framework and used it to teach computers to identify and reproduce handwritten characters after seeing just a single example.

Whereas standard pattern-recognition algorithms represent concepts as configurations of pixels or collections of features, the BPL approach learns by "explaining" the data provided to the algorithm, in this case the sample character. Concepts are represented as probabilistic computer programs, and the algorithm essentially programs itself by constructing code to produce the letter it sees. It can also capture variations in the way different people draw a given letter.
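The actual model in the paper is far richer, but the core idea of "a concept as a probabilistic program" can be sketched in a few lines. In this illustration (all names and structure are invented for exposition, not taken from the authors' code), a character concept is a small program built from strokes; running it repeatedly yields different plausible renderings of the same letter, capturing the person-to-person variation the article mentions:

```python
import random

def sample_character(strokes, jitter=0.05):
    """Run the 'concept program' once: re-draw the character with
    per-point Gaussian noise standing in for motor variability.

    strokes: list of strokes, each a list of (x, y) control points
    jitter:  standard deviation of the noise added to each point
    """
    rendering = []
    for stroke in strokes:
        noisy = [(x + random.gauss(0, jitter),
                  y + random.gauss(0, jitter)) for x, y in stroke]
        rendering.append(noisy)
    return rendering

# A toy "L"-shaped concept expressed as two strokes.
concept_L = [[(0.0, 1.0), (0.0, 0.0)],   # vertical stroke
             [(0.0, 0.0), (0.5, 0.0)]]   # horizontal stroke

# Each call produces a slightly different "handwritten" L.
variant_a = sample_character(concept_L)
variant_b = sample_character(concept_L)
```

The key design point is that the concept is the generative program itself, not a bag of pixels: classification in BPL amounts to asking which program best explains the observed image.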

The model also “learns to learn” by using knowledge from previous concepts to speed learning on new ones, so it can use knowledge of the Latin alphabet to learn letters in the Greek alphabet more quickly, for example.
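One rough way to picture that transfer (again a hedged sketch with invented primitive names, not the paper's method): stroke parts seen while learning one alphabet become a prior over parts, so parses of a new alphabet's letters that reuse familiar strokes are favored over parses built from unfamiliar ones:

```python
from collections import Counter

def learn_prior(parsed_characters):
    """Estimate how often each stroke primitive appears in
    characters already learned, normalized to probabilities."""
    counts = Counter(p for char in parsed_characters for p in char)
    total = sum(counts.values())
    return {prim: n / total for prim, n in counts.items()}

def score_parse(parse, prior, smoothing=0.01):
    """Score a candidate parse of a new character under the prior;
    unseen primitives get a small smoothing probability."""
    score = 1.0
    for prim in parse:
        score *= prior.get(prim, smoothing)
    return score

# Characters already learned (say, Latin), as primitive sequences.
latin = [["vline", "hline"],        # "L"
         ["vline", "arc", "arc"],   # "B"
         ["arc"]]                   # "C"

prior = learn_prior(latin)

# Parsing a new (say, Greek) letter: a parse that reuses familiar
# primitives scores far higher than one made of unseen parts.
familiar_score = score_parse(["vline", "arc"], prior)
novel_score = score_parse(["zigzag", "spiral"], prior)
```

Because good parses of new characters are found faster when they reuse known parts, prior alphabets effectively shrink the search space for new ones.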

Most compelling of all, the algorithm allowed computers to pass a sort of "visual Turing test." Specifically, the researchers asked both humans and computers to reproduce a series of handwritten characters after being shown just a single example of each; in some cases, subjects were asked to create entirely new characters in the style of those originally shown. Bottom line: human judges couldn't tell the results apart.

The researchers have applied their model to more than 1,600 types of handwritten characters in 50 writing systems, including Sanskrit, Tibetan, Gujarati and Glagolitic. They even tried it on invented characters such as those from the television series "Futurama."

A paper describing the research was published Friday in the journal Science. Its authors were Brenden Lake, a Moore-Sloan Data Science Fellow at New York University; Ruslan Salakhutdinov, an assistant professor of Computer Science at the University of Toronto; and Joshua Tenenbaum, a professor at MIT in the Department of Brain and Cognitive Sciences and the Center for Brains, Minds and Machines.

“It has been very difficult to build machines that require as little data as humans when learning a new concept,” said Salakhutdinov. “Replicating these abilities is an exciting area of research connecting machine learning, statistics, computer vision, and cognitive science.”
