Advanced technologies that are ready to make AR a consumer reality

A keynote by augmented reality pioneer Tobias Höllerer and a discussion with futurist Robert Scoble predict AR is ready to explode

Credit: Alvin de Castro Chan via Flickr (CC-BY)

Augmented reality (AR) is on the verge of entering the mainstream. Apple is reportedly preparing to introduce an AR product, not because it invented AR, but because the technology, long under investigation by academic researchers and companies like Google, is finally ready.

Commercial prototypes such as Microsoft’s HoloLens and Google’s Tango have already demonstrated the potential of AR publicly and created enthusiasm for it.

But the consumer electronics products we use every day are rarely invented and then materialize right away on retailers’ shelves. Often, well-understood technologies like AR, virtual reality (VR) and artificial intelligence (AI) have been proven in theory and built as prototypes in researchers’ labs, but they await practical applications and cheap hardware.

Credit: Columbia University

Early AR prototypes were carried in a backpack

University of California, Santa Barbara professor Tobias Höllerer has been researching AR and building prototypes for a long time, beginning as a Columbia graduate student in the late 1990s. Höllerer’s early work resembled today’s commercial AR prototypes like the HoloLens and the Meta 2, only with lower resolution and hardware that had to be carried in a backpack. His prototype did not project holograms of industrial parts, surgical scenes or dinosaurs into reality, but it labeled reality in the Manhattan neighborhood around Columbia with information like the names of buildings and professors’ offices.

Social AR is ready now

At last week’s IEEE VR conference in Los Angeles, Höllerer spoke about the social influence that could take AR over the commercial goal line. He proposed that one person’s data, the AR breadcrumbs that contextually label reality, should help the next person. He used the example of Facebook posts and comments labeling reality, creating tighter connections when applied to AR. Navigational apps such as Waze and Google Maps help to conceptualize his idea, except that instead of an annotated two-dimensional map, the observer’s three-dimensional view includes crowdsourced information.

Höllerer evoked a vision of more meaningful social media annotations, public posts, and posts by friends augmenting the scene overlaid on reality. 

Credit: Columbia University
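To make Höllerer’s breadcrumb idea concrete, here is a minimal Python sketch of crowdsourced, geo-anchored annotations: one person leaves a label anchored to coordinates, and the next person’s device queries for labels nearby. The class, the sample coordinates and the 100-meter radius are illustrative assumptions, not details of Höllerer’s system.

```python
# A minimal sketch of crowdsourced "AR breadcrumbs": labels anchored to
# coordinates that one user leaves and the next user's device can query.
# The names, sample coordinates and 100-meter radius are assumptions.
import math
from dataclasses import dataclass

@dataclass
class Annotation:
    lat: float    # latitude of the anchored label, in degrees
    lon: float    # longitude of the anchored label, in degrees
    text: str     # what the label says, e.g. a building or office name
    author: str   # who left the breadcrumb

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearby(annotations, lat, lon, radius_m=100.0):
    """Return the breadcrumbs other people left within radius_m of the viewer."""
    return [a for a in annotations if haversine_m(lat, lon, a.lat, a.lon) <= radius_m]

# One person labels reality; the next person standing nearby sees the label.
crumbs = [Annotation(40.8075, -73.9626, "Low Library", "alice")]
print(nearby(crumbs, 40.8078, -73.9630))
```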

Scoble confirms AR needs social data

Futurist Robert Scoble’s AR career is much shorter than Höllerer’s, but he has made up for it with his frenetic passion for meeting with and writing about early-stage companies in the field, and with his recent book on the subject, The Fourth Transformation, co-authored with Shel Israel. Scoble started the discussion about AR glasses with the statement, “I expect that a decade from now we'll mostly be using Android on our faces … because Google has better AI and even more engineers and, especially, much more data about us than Apple has.” He went on to say Google’s “Tango shows how AR can be done on phones, and soon, glasses.”

I responded to Scoble: “Tango is just a prototype to spin the software and hardware to make them cheap and power efficient and get them in every mobile phone to get the social effect of everyone annotating reality for the next person. Think about the software and hardware development time series of the improvement in GPS.”

Though Apple was at the center of the media’s stage last week because of a Bloomberg report about its AR program, the company has not announced or confirmed a product, and other than Tim Cook’s expressions of interest in AR, it has said little. Google, meanwhile, has been preparing for AR for more than four years with its Project Tango tablet and is much better positioned to monetize AR than Apple.

Credit: Steven Max Patterson

3D cameras are evolving fast and, like GPS, will become a standard smartphone feature

Four years ago, I first used a Google Tango prototype at the MIT Media Lab. The battery life was terrible, and it ran almost hot enough to burn. But it did something amazing: It captured a human-like perception of three-dimensional space in a form a computer could understand.

Six months later, Google announced the Tango development kit, a 7-in. tablet with improved, but not great, battery life and fewer bugs. It captured the interest of developers and academic and industry researchers.

Last summer, Lenovo introduced the Phab 2 Pro, with a new lower-cost, lower-power Tango-platform camera in a phablet form factor. It has improved battery life, even more stable software and a smooth, sleek exterior without the ugly bulges that housed the special Tango cameras and sensors in previous versions.

Home goods retailer Wayfair’s app explains how AR works in 35 seconds.

At CES, Asus introduced the ZenFone AR, which probably has a new spin of the Tango-platform cameras and sensors with more accuracy, lower cost and better battery performance, like the components used to build the Phab 2 Pro. It is hard to be certain about the Tango camera components until the ZenFone AR ships.

Google’s Tango camera, software and sensor assembly is well positioned to win design-ins in smartphones because the design prototypes have been iterated three or four times. But if it is not this camera, it will be another. The important point, though, is that AR features will become invisible, expected and anticlimactic, like GPS is now.

My first GPS smartphone was the Galaxy Nexus. Its accuracy was impressive, but even plugged into my car’s power, the battery drained faster than it recharged, and when I arrived at my destination, I would shut GPS down to conserve the battery. Today, no one turns off their GPS, and apps like the camera or Facebook can simply use location. Smartphone photos include EXIF metadata by default that records, among other details, longitude and latitude.
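As a concrete example of that default location metadata, here is a minimal Python sketch that reads the GPS tags out of a photo’s EXIF data. It assumes the Pillow imaging library is installed and that the JPEG actually carries GPS tags; the file name is hypothetical.

```python
# A minimal sketch of reading the default GPS EXIF tags from a smartphone
# photo, assuming the Pillow library is installed and the photo has GPS data.
from PIL import Image
from PIL.ExifTags import GPSTAGS

def _to_decimal(dms, ref):
    """Convert EXIF degrees/minutes/seconds to signed decimal degrees."""
    degrees, minutes, seconds = (float(x) for x in dms)
    value = degrees + minutes / 60.0 + seconds / 3600.0
    return -value if ref in ("S", "W") else value

def photo_location(path):
    """Return (latitude, longitude) from a photo's EXIF data, or None if absent."""
    exif = Image.open(path)._getexif() or {}   # Pillow's legacy EXIF helper
    raw = exif.get(34853)                      # 34853 is the GPSInfo tag
    if not raw:
        return None
    gps = {GPSTAGS.get(k, k): v for k, v in raw.items()}
    lat = _to_decimal(gps["GPSLatitude"], gps["GPSLatitudeRef"])
    lon = _to_decimal(gps["GPSLongitude"], gps["GPSLongitudeRef"])
    return lat, lon

print(photo_location("vacation.jpg"))  # hypothetical file; prints e.g. (37.77, -122.42)
```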

The data exists now to socially annotate reality using AI

Trillions of photos stored on Facebook, Instagram, Google Photos and Google Maps have been processed by machine learning systems that tag the photos with details even when the poster has not tagged them. Reddit, Quora and thousands of other sites have similar or complementary information. At the Google TensorFlow summit, Jeff Dean of Google Brain told me Google consumed one-third of the internet to train its language translation machine learning models. There is nothing stopping Facebook and Google from doing the same to socially annotate reality.

Imagine that instead of the two-dimensional Google Maps search for cultural events in San Francisco shown below, the user simply points a smartphone camera at the scene ahead and, looking through the screen, sees three-dimensional reality overlaid with annotations like those in Höllerer’s prototype, but with the personalized relevance of social information: public and friends’ Facebook posts, Quora and TripAdvisor entries, Google recommendations, and so on.

Credit: Google
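Here is a rough Python sketch of that overlay step: given the viewer’s position and compass heading, it keeps only the annotations whose bearing falls inside the camera’s horizontal field of view, which is where an app would draw them on screen. The function names, the 60-degree field of view and the sample coordinates are assumptions for illustration.

```python
# A rough sketch of the overlay step: keep only the annotations whose compass
# bearing from the viewer falls inside the camera's horizontal field of view.
# The 60-degree FOV and the sample coordinates are assumptions.
import math
from collections import namedtuple

Spot = namedtuple("Spot", "lat lon text")

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial compass bearing from point 1 to point 2, in degrees [0, 360)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    y = math.sin(dl) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return (math.degrees(math.atan2(y, x)) + 360.0) % 360.0

def in_view(viewer_lat, viewer_lon, heading, spots, fov_deg=60.0):
    """Return (angular offset, label) for each spot inside the camera's view cone."""
    visible = []
    for s in spots:
        offset = (bearing_deg(viewer_lat, viewer_lon, s.lat, s.lon) - heading + 540.0) % 360.0 - 180.0
        if abs(offset) <= fov_deg / 2.0:
            visible.append((offset, s.text))  # offset hints where to draw the label
    return visible

# A viewer downtown faces west (heading ~265 degrees) toward an event in Golden Gate Park.
spots = [Spot(37.7694, -122.4862, "Concert in Golden Gate Park")]
print(in_view(37.7749, -122.4194, 265.0, spots))
```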

Though the ultimate AR form factor will be a light, unobtrusive headset, AR will first reach consumers on smartphones. That’s because the cost of adding a three-dimensional camera assembly to a smartphone is negligible, and there are millions of mobile developers who can start building apps that use a human-like perception of three-dimensional space, apps that one monolithic company like Apple would find hard to imagine.
