Analogies help people understand things: a hard drive is like a closet, and defragmenting a hard drive is like cleaning out that closet.
Analogies are popular, and they work for humans. But could they also work for computers as machines take on new roles that involve learning? Scientists at Northwestern University think so. Future computers, they believe, will learn much as humans do, spontaneously using analogies to solve problems, including moral dilemmas.
And that approach, the scientists say, will be more effective than deep learning, the currently dominant method of teaching computers.
Deep learning has traditionally been seen as the way forward: machines scrutinize vast amounts of data until they master it. But humans, in many ways, don’t learn like that. They “often learn successfully from far fewer examples,” Northwestern says in a press release, and computers should be able to do the same.
“His new cell phone is very similar to his old phone” is an example Northwestern uses. In other words, if you know what the old one looks like, you can pretty easily guess what the new one looks like: a black slab of plastic, in many cases. Communicated to a computer, that same technique could be used to teach the machine, too.
“Relational ability is the key to higher-order cognition,” says Dedre Gentner, a professor in Northwestern’s Weinberg College of Arts and Sciences, in the release.
The phone analogy is a simple one, but more complex analogies, such as “electricity flows like water,” should work, too.
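What makes an analogy like this work is that it aligns relations, not objects: water doesn’t resemble current, but “flows through” and “is pushed by” hold in both domains. A toy sketch of that idea (the relation names and data here are invented for illustration, not taken from Northwestern’s system):

```python
# Toy illustration of aligning "electricity flows like water" by shared
# relation names. The stories are lists of (relation, arg1, arg2) tuples;
# all names are hypothetical, chosen only to show the idea.
water = [("flows", "water", "pipe"), ("pushes", "pressure", "water")]
electric = [("flows", "current", "wire"), ("pushes", "voltage", "current")]

# Wherever the two domains share a relation name, pair up the arguments.
mapping = {}
for rel_w, *args_w in water:
    for rel_e, *args_e in electric:
        if rel_w == rel_e:
            mapping.update(zip(args_w, args_e))

print(mapping)  # {'water': 'current', 'pipe': 'wire', 'pressure': 'voltage'}
```

The correspondences fall out of the shared relational structure alone: nothing about the surface features of water or wires is compared.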
Teaching computers via a structure-mapping engine
The learning model is called a structure-mapping engine (SME).
“Although we share this ability with a few other species, humans greatly exceed other species in ability to represent and reason with relations,” Gentner says. That ability, she argues, should be taught to computers, too.
The key is getting the machine to reason from precedents, says Ken Forbus, professor of electrical engineering and computer science in Northwestern’s McCormick School of Engineering. “Given a new situation, the machine will try to retrieve one of its prior stories, looking for analogous sacred values, and decide accordingly,” he says.

“In moral decision-making, for example, a handful of stories suffices to enable an SME-based system to learn to make decisions as people do in psychological experiments,” the release adds.
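As a loose illustration of this retrieve-and-reuse idea, here is a minimal sketch, assuming stories are stored as relation tuples alongside a past decision. This is not Northwestern’s SME; the scoring and data are invented to show the shape of the technique:

```python
# Toy analogy-based retrieval, loosely inspired by the retrieve-a-prior-story
# idea. Not the real SME: the stories, scoring, and decisions are hypothetical.

def relational_overlap(story_a, story_b):
    """Score two stories by how many relation *names* they share,
    ignoring the specific entities involved."""
    rels_a = {rel for rel, *_ in story_a}
    rels_b = {rel for rel, *_ in story_b}
    return len(rels_a & rels_b)

def decide_by_analogy(new_situation, precedents):
    """Retrieve the most relationally similar prior story and
    reuse the decision recorded with it."""
    best = max(precedents,
               key=lambda p: relational_overlap(new_situation, p["story"]))
    return best["decision"]

# Prior "stories": relational facts plus the decision that was made.
precedents = [
    {"story": [("owns", "alice", "phone"), ("lost", "alice", "phone")],
     "decision": "return the phone"},
    {"story": [("promised", "bob", "carol"), ("broke", "bob", "promise")],
     "decision": "apologize"},
]

new_situation = [("owns", "dana", "wallet"), ("lost", "dana", "wallet")]
print(decide_by_analogy(new_situation, precedents))  # "return the phone"
```

The new situation involves a wallet, not a phone, but it shares the "owns"/"lost" relational structure with the first story, so that story's decision is reused, which is the point of matching on relations rather than surface features.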
For moral decision-making, those stories could be drawn from multiple cultures, each with different beliefs.
Physics can be taught this way too, the researchers say. And tutoring software stands to benefit, by comparing a student’s work with a teacher’s solution.
And when the time comes for computers to write instead of me, for that day will surely come, I will try to teach my computer some things through analogy: “Hey computer, your vocabulary is as bad as like whatever,” I plan to say, testing its intellect. I wonder if it’ll get the joke.
This article is published as part of the IDG Contributor Network.