Phones will capture video continuously to help us remember

In the future, our phones might do our remembering for us. That will involve the video camera running all day and night—if the battery can be made to last.


In the future, personal assistant-like smartphones will capture images of what users see, all day and every day, researchers say. And if devices can see what their owners see, they’ll remember it and help organize their owners' lives. Add artificial intelligence (AI) to the camera, and the days of forgetting things are over.

But there’s a problem: battery life. It’s one of the reasons devices aren’t attempting this organizational feat now, say the scientists from Rice University.


Keeping a power-hungry video camera running in real time 24 hours a day isn’t realistic, as anyone who has tried simply to keep a phone powered on that long will attest.

However, the university group’s project, called RedEye, will give “wearable computers continuous vision,” they say in their press release. Software optimization is the key to getting on-board cameras to last, says team member Robert LiKamWa, formerly of Rice.

“Existing technology would need to be about 100 times more energy-efficient for continuous vision to become commercially viable,” the release continues. The team members say they can perform the task through more efficient conversion of analog to digital—the everyday objects being captured are analog, and the resulting images within the device are digital.

“We can recognize objects, like cats, dogs, keys, phones, computers, faces, etc., without actually looking at the image itself,” LiKamWa says. “We’re just looking at the analog output from the vision sensor. We have an understanding of what’s there without having an actual image.”

And that’s how they get their energy efficiency: the system captures an image only if it’s relevant, ignoring objects that aren’t. Rules determine what needs to be captured.
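As a rough illustration of that gating idea, the logic amounts to running a cheap relevance check on the sensor's raw output and only paying for a full capture when something of interest shows up. The sketch below is hypothetical and not RedEye's actual code; the sensor is simulated, and every function name is invented for the example.

```python
import random

# Hypothetical sketch of the "capture only if relevant" idea described above.
# The sensor here is simulated; in the real system the cheap check would run
# on the vision sensor's analog output, before full analog-to-digital conversion.

RELEVANT_CLASSES = {"cat", "dog", "face", "keys", "phone", "computer"}

def coarse_label(frame):
    """Stand-in for a lightweight classifier that guesses what's in view
    without producing a full digital image."""
    return frame["likely_object"]

def capture_full_image(frame):
    """Stand-in for the expensive step: full digitization and storage."""
    return {"image": f"<pixels of {frame['likely_object']}>", "tag": frame["likely_object"]}

def process_stream(frames):
    memories = []
    for frame in frames:
        label = coarse_label(frame)                   # cheap, early check
        if label not in RELEVANT_CLASSES:
            continue                                  # e.g. a blank wall: skip the capture
        memories.append(capture_full_image(frame))    # only now pay for a full capture
    return memories

# Simulated day: mostly uninteresting scenes, a few worth remembering.
day = [{"likely_object": random.choice(["wall", "floor", "cat", "keys"])} for _ in range(10)]
print(process_stream(day))
```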

How it could work

Roughly speaking, this is how it might work one day: a dog or cat needs feeding at certain times, but a wall in the home that the owner also passes by does not. Both are in view of the camera during the course of the day, but even though the wall might require future action (it may need painting someday), it isn’t relevant to organizing one’s daily life. Therefore, it’s ignored.

The cat, by contrast, is worthy of the phone's attention and gets in the shot: it’s interpreted algorithmically, before full processing, as something that matters. The sensor’s workload is thus reduced.
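One way to picture those relevance rules (again a hypothetical sketch, not the project's implementation) is as a simple table mapping recognized object classes to whether they warrant a capture and what the assistant might later remind you about:

```python
# Hypothetical rule table for the scenario above: which recognized objects
# deserve a capture, and what the assistant might do with them.
# Invented for illustration; not part of RedEye.

CAPTURE_RULES = {
    "cat":  {"capture": True,  "reminder": "feed the cat at its usual times"},
    "dog":  {"capture": True,  "reminder": "feed and walk the dog"},
    "keys": {"capture": True,  "reminder": "remember where the keys were left"},
    "wall": {"capture": False, "reminder": None},  # may need painting someday, but not today's concern
}

def decide(label):
    """Return (should_capture, reminder) for a recognized object class."""
    rule = CAPTURE_RULES.get(label, {"capture": False, "reminder": None})
    return rule["capture"], rule["reminder"]

print(decide("cat"))   # (True, 'feed the cat at its usual times')
print(decide("wall"))  # (False, None)
```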

Rice’s Efficient Computing Group presented the RedEye paper this week at the International Symposium on Computer Architecture (ISCA 2016) in Seoul, South Korea.

“The concept is to allow our computers to assist us by showing them what we see throughout the day,” says group leader Lin Zhong, professor of electrical and computer engineering at Rice, in the release. “It would be like having a personal assistant who can remember someone you met, where you met them, what they told you, and other specific information like prices, dates and times.”

The neural technology is part of the “pervasive computing” or “ambient intelligence” genre of machine learning and AI. “It centers on technology that can recognize and even anticipate what someone needs and provide it right away,” the press release says.

“Vision and sound will be the initial sensory inputs,” Zhong says. “Smell, taste and touch may come later.”

This article is published as part of the IDG Contributor Network.
