Carnegie Mellon University has been running a computer cluster since July that scans the web for images in order to make sense of them. The project, dubbed Never Ending Image Learner, could pave the way for computers that better understand the visual world in ways that humans often take for granted.
You can check out NEIL in action here, as I did to see what sense it had made of Batman pictures (it learned that the Joker can kind of look like Batman).
The project, funded by Google and the Office of Naval Research, runs on two clusters of computers comprising 200 cores that are building what researchers hope will be the world's largest visual knowledge base. They're building a database that makes connections between images to better understand them (such as that cars are typically found on roads and that pink doesn't necessarily refer to the singer of that name). NEIL has plowed through some 3 million images, identifying thousands of objects and scenes and, from those, relationships between them.
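The article doesn't describe NEIL's internals, but the sort of relational knowledge it accumulates can be pictured as a store of subject-relation-object facts. Here's a toy sketch in Python; the class and method names are hypothetical illustrations, not NEIL's actual data model or API.

```python
# Toy triple store illustrating the kind of visual relationships
# the article describes (e.g. "cars are typically found on roads").
# This is an illustrative sketch, not NEIL's implementation.
from collections import defaultdict


class VisualKnowledgeBase:
    def __init__(self):
        # Maps a relation name to the set of (subject, object) pairs
        # learned for that relation.
        self.triples = defaultdict(set)

    def add(self, subject, relation, obj):
        """Record one learned fact as a (subject, relation, object) triple."""
        self.triples[relation].add((subject, obj))

    def query(self, relation):
        """Return all (subject, object) pairs known for a relation, sorted."""
        return sorted(self.triples[relation])


kb = VisualKnowledgeBase()
# Relationships of the sort mentioned in the article:
kb.add("car", "found_on", "road")
kb.add("Pink", "can_refer_to", "singer")
kb.add("pink", "can_refer_to", "color")

print(kb.query("can_refer_to"))
```

The point of such a store is that ambiguous labels ("pink") can map to multiple visual concepts at once, and context relations ("found_on") help disambiguate new images.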
The cluster isn't doing all the work on its own. As researcher Abhinav Shrivastava says, humans might not always know what to teach computers, but they "are good at telling computers when they are wrong."
It's not surprising that Google is putting funds behind this project given that image search is a focus for the company, which recently added search by image to its Chrome web browser.