How edge compute is making cameras 'conscious'

Edge computing and on-chip processing get past challenges around massive data generation and poor image quality and bring exciting new capabilities

Cameras used to be mostly about lenses and aperture, but today’s security and sports cams are building in technology that makes them “conscious,” using edge computing and on-chip processing to get past challenges around massive data generation and poor image quality and to bring exciting new capabilities, such as facial and object recognition, into the camera itself.

Take the example of Silk Labs, a company using intelligent real-time video to alert people when a package has been delivered or when a stranger is at the front door. Or consider Knit, a camera that can see how well your baby is sleeping, inform parents about sleep trends and monitor breathing for peace of mind during the night—all without any devices attached to the child. And more will come.

This is the potential of edge computing—the ability to make the electronic devices around us tremendously capable using powerful processing and connectivity technologies.

Think about the “edge” as the counterpart to the “cloud.” If the cloud is the collection of data centers and servers that consolidate processes and information for the internet, the edge is made up of all those devices you have with you, in your hands, on your wrist, in your home and car, and more.

In a not-so-distant future, billions of edge devices, including smart cameras, will analyze data in real time, taking advantage of advanced capabilities such as machine learning to understand us better—and give context to all that data around us. We envision a world in which our devices know us well enough to discover what matters to us and to seamlessly give us the precise amount of information we need, when we want it.

In most cases we expect these innovations to be spearheaded by smartphones, the most popular edge device, and then quickly spread into other gadgets, such as smartwatches, home appliances, drones, vehicles and connected cameras.

Conscious cameras send less data to the cloud, offer faster response times

Basic cameras respond to motion and send a feed to the cloud or a hard drive for a pre-determined length of time. These files can become massive quite quickly, with a big cost in data transmission and storage. And without intelligence, there is no way to tell if the triggering event was a squirrel running across a wire in front of the camera or a couple of felons with crowbars entering your facility. For those watching the videos, it’s often too much, and they stop paying attention.

The conscious camera does analytics on the camera itself, taking advantage of processing power to make “decisions” right there at the edge. Object detection and facial recognition are just some of the capabilities now becoming available in cameras. Rather than sending you every image of everything that moves in front of your camera, you can "teach" it to recognize your family members or pets, for instance, and have it trigger only when it sees someone who doesn’t fit that profile. Audio recognition listens for strange sounds (breaking glass, voices) and correlates them with the footage as well.
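A rough sketch of that on-camera decision loop might look like the following Python snippet. The detection and upload helpers here are hypothetical stand-ins rather than any real camera SDK, and the similarity threshold is an assumption; the point is simply that the match against enrolled faces happens on the device, so only unrecognized visitors generate any network traffic.

```python
import numpy as np

MATCH_THRESHOLD = 0.6  # assumed cosine-similarity cutoff for a "known" face

def cosine_similarity(a, b):
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def detect_and_embed_faces(frame):
    """Hypothetical stand-in for an on-chip face detector plus embedding model."""
    return [np.asarray(frame, dtype=float)]  # stub: treat the whole frame as one face

def send_alert_with_clip(frame):
    """Hypothetical stand-in for the only network call the camera makes."""
    print("Unknown person detected -- sending clip and alert to the owner")

def handle_motion_event(frame, known_faces):
    """Run recognition locally; upload only if no enrolled face matches."""
    for face_vec in detect_and_embed_faces(frame):
        if not any(cosine_similarity(face_vec, k) >= MATCH_THRESHOLD
                   for k in known_faces):
            send_alert_with_clip(frame)
            return
    # Every detected face matched a family member or pet: nothing leaves the camera.

# Toy usage with made-up three-number "embeddings":
family = [np.array([0.9, 0.1, 0.2]), np.array([0.2, 0.8, 0.1])]
handle_motion_event([0.1, 0.2, 0.9], family)  # low similarity, so the alert fires
```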

This ability to classify objects on the device saves time for whoever monitors the feed, and because far less data streams through the “pipes,” it also cuts storage and transmission costs.
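To put rough numbers on that point, here is a back-of-the-envelope comparison; the bitrate, clip length and daily event count are illustrative assumptions, not measurements from any particular camera.

```python
# Illustrative only: assumed bitrate, clip length and event count.
CONTINUOUS_MBPS = 4.0    # assumed 1080p stream bitrate, in megabits per second
CLIP_SECONDS = 20        # assumed length of one uploaded event clip
EVENTS_PER_DAY = 30      # assumed triggers remaining after on-camera filtering

def to_gigabytes(megabits):
    return megabits / 8 / 1000  # megabits -> megabytes -> gigabytes

stream_everything = to_gigabytes(CONTINUOUS_MBPS * 24 * 3600)
event_clips_only = to_gigabytes(CONTINUOUS_MBPS * CLIP_SECONDS * EVENTS_PER_DAY)

print(f"Stream everything to the cloud: {stream_everything:.1f} GB/day")  # ~43 GB
print(f"Send event clips only:          {event_clips_only:.2f} GB/day")   # ~0.3 GB
```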

Security built into the hardware, rather than bolted on in software, helps ensure the camera feeds information only to you or your intended devices.

Other edge compute technologies help achieve better image quality, even in challenging conditions. Imagine you’re taking a picture of your child, who is standing inside against a sliding glass door while the sun is shining brightly outside. Most cameras force you to choose between exposing for the person and washing out the background, or exposing for the bright background and losing the person in shadow. In a security situation, it's obviously a problem if you can’t recognize the person.

Technologies such as staggered HDR expose both the background and the foreground and stitch them together in video, ensuring both the camera and you can see the subject clearly. In low-light situations, video noise reduction cleans up the dancing pixels and grain you often see. 4K ultra-high definition is also becoming commonplace in smart cameras, delivering crisp, detailed video feeds. And while ultra-high resolutions quickly lead to increased storage and transmission challenges, advanced hardware-based HEVC encoding helps cut video file sizes roughly in half.
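As a simplified illustration of the staggered-HDR idea, the sketch below blends a short (highlight-preserving) and a long (shadow-preserving) exposure of the same scene using an assumed per-pixel weighting. Real camera pipelines do this in hardware, per frame, with far more sophisticated alignment and tone mapping; this is only meant to show why two exposures recover detail that one cannot.

```python
import numpy as np

def merge_exposures(short_exp, long_exp):
    """Blend a short (dark, highlight-safe) and a long (bright, shadow-safe)
    exposure of the same scene, both scaled to the range [0, 1]."""
    short_exp = np.asarray(short_exp, dtype=float)
    long_exp = np.asarray(long_exp, dtype=float)
    # Assumed weighting: trust the long exposure except where it nears clipping.
    w_long = 1.0 - np.clip((long_exp - 0.7) / 0.3, 0.0, 1.0)
    return np.clip(w_long * long_exp + (1.0 - w_long) * short_exp, 0.0, 1.0)

# Toy scene: one pixel in the bright window, one on the person in shadow.
short = [0.55, 0.05]   # window detail readable, person nearly black
long_ = [0.98, 0.40]   # person readable, window almost blown out
print(merge_exposures(short, long_))  # keeps usable detail from both exposures
```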

The future: object and facial recognition

As machine learning and deep learning get better, uses of object and facial recognition will continue to expand. Cameras connected over 4G LTE will open up further possibilities, including body cameras for police, fire and the military, where real-time feeds can save lives and provide valuable information. And as virtual reality and augmented reality become more popular, 360-degree cameras are spreading rapidly, effectively combining the selfie with the action camera. People want not only to shoot this video but also to easily stream it live to social networks such as Facebook and Periscope.

It’s an exciting time for camera technology, and as cameras get smarter, smaller and better, the opportunities only get bigger.

In fact, it’s an exciting time for all kinds of edge devices. The past 10 years have been largely about the cloud, the hub of the internet. That’s not going to go away. But the next 10 years will also be about bringing connectivity, computing and content capabilities closer to you and your devices—it will be about the transformative power of edge compute.
