
Why Google is set up perfectly to build an AI messaging app

Sitting on an ocean of data will only help in the pursuit of an artificial intelligence messaging app.

Image: Google artificial intelligence. Credit: Flickr/Robert Scoble

The unconfirmed Wall Street Journal report about Google’s project to create chatbots rings true. Chatbots, the WSJ explained, are Google’s attempt to apply its deep-learning technology to messaging apps, adding the capability to respond to texted questions. Though Google did not confirm that this work is underway, it makes sense that Google would follow this path.

A more human CHI

Computer-human interface, that is – not chi as in the vital force of Taoism. Chatbots are a conversational interface between humans and computers, like the Google voice search implemented in Chrome (and other browsers) and on many Android devices.

The WSJ story said that this technology would be applied to messaging so users could text questions and get answers. It implied an app-like ecosystem of chatbots created by many different developers for many specialized uses, with an umbrella app that would dispatch a user’s question to the chatbot with the domain knowledge best suited to answer it. The story also said that Google’s motivation was to recapture messaging app market share lost to Facebook Messenger, WhatsApp, and WeChat.

From my vantage point, it would be a straightforward task for Google to integrate existing technology into its current or future messaging apps. Google has one advantage that none of the other companies has, though – enormous training data sets from its search business. A quick sketch of the state of artificial intelligence (AI) and machine learning today will explain what all these companies are doing and why Google has an advantage.
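Before getting to that sketch, here is a purely hypothetical illustration of the kind of routing the WSJ described: an umbrella app holding a registry of specialized chatbots and a naive dispatcher that picks whichever bot best matches the question. The domains, bots, and keyword matching below are my own illustrative assumptions, not anything Google has announced.

```python
from typing import Callable, Dict

# Each chatbot is just a function from a question to an answer.
ChatBot = Callable[[str], str]

# Hypothetical specialized chatbots registered with the umbrella app.
BOTS: Dict[str, ChatBot] = {
    "weather": lambda q: "Expect light rain this afternoon.",       # stand-in answer
    "restaurants": lambda q: "Try the taqueria two blocks north.",  # stand-in answer
}

# Crude domain keywords standing in for a real learned classifier.
KEYWORDS = {
    "weather": {"rain", "snow", "forecast", "temperature"},
    "restaurants": {"eat", "dinner", "restaurant", "lunch"},
}

def dispatch(question: str) -> str:
    """Send the question to the chatbot whose domain keywords overlap it most."""
    tokens = set(question.lower().split())
    best = max(KEYWORDS, key=lambda d: len(KEYWORDS[d] & tokens))
    if not KEYWORDS[best] & tokens:
        return "Sorry, no chatbot here covers that topic yet."
    return BOTS[best](question)

if __name__ == "__main__":
    print(dispatch("Will it rain tonight?"))
    print(dispatch("Where should we eat dinner?"))
```

A production dispatcher would replace the keyword sets with a learned classifier, but the division of labor – many narrow bots behind one conversational front end – is the same.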

Google Research, Google X, Google Brain, and Google’s recent acquisition DeepMind are among the industry’s few large-scale commercial investments in AI and deep learning. The AI and deep-learning community is still small, and with a few exceptions, such as Google, Microsoft, IBM, and more recently Facebook, it is almost entirely academic. A glance at the industry’s most relevant conference, the Neural Information Processing Systems (NIPS) conference, confirms how academic the field remains.

Breakthroughs in GPU programming and data sets

But AI and deep learning have escaped academia and captured commercial interest. Yann LeCun, Facebook’s AI Lab chief, explained how two recent breakthroughs have made more conversational interfaces between humans and computers feasible when he spoke at the MIT Technology Review’s EmTech conference (the talk is available as a video stream). Until about two years ago, such systems required field-programmable gate arrays (FPGAs), specialized processor hardware designed to accelerate these types of applications. AI developers then learned to program massively parallel graphics processing units (GPUs), replacing custom processors with commodity hardware and speeding up these applications. With the constraints on processing speed lifted, the other big breakthrough, the availability of large training data sets, made it possible to teach systems to learn to think and make decisions within narrow domains of knowledge without programming explicit rules.
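A minimal sketch can show both breakthroughs at once: the same training loop runs on a commodity GPU if one is present, and the model picks up its behavior from example data rather than hand-written rules. PyTorch, a descendant of the Torch project mentioned below, is assumed here purely for illustration, and the data is synthetic.

```python
import torch
import torch.nn as nn

# Breakthrough 1: commodity GPU hardware, used automatically if available.
device = "cuda" if torch.cuda.is_available() else "cpu"

# Breakthrough 2: a training data set. Synthetic here: label is 1 when the
# inputs sum to a positive number. No rule for this is ever written into the model.
x = torch.randn(1024, 8)
y = (x.sum(dim=1) > 0).float().unsqueeze(1)

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.BCEWithLogitsLoss()

x, y = x.to(device), y.to(device)
for step in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)   # learn from errors on examples, not explicit rules
    loss.backward()
    optimizer.step()

print(f"final training loss: {loss.item():.4f}")
```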

A level playing field of open source competitors

Most AI practitioners are within three degrees of one another, and almost everyone has worked with or studied under Yann LeCun at Facebook, Google’s AI chief Geoffrey Hinton, Yoshua Bengio of the University of Montreal, or Andrew Ng, now chief scientist at Baidu Research and formerly of Google. And they are all building systems using similar open source libraries and algorithms, like the Torch project, that give these apps the ability to learn.

Like an infant born with a brain and the innate ability to learn, these GPU-based systems have been programmed by a community of developers who share the same education and training and use common algorithms and libraries. The quality of the hardware and software implementations may vary, but the more significant competitive advantage at this stage is training data sets.

Google’s ocean of data is an advantage

The chatbots respond with texted answers to texted questions. Imagine the chatbot as the infant mentioned earlier, and imagine, one step further, that the infant saw every Google search query, response, and user selection of responses over the last 18 years. With a little more polish in deciding on the best answer, that infant, or chatbot, would be very good at answering people’s texted questions.
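A toy example makes the data advantage concrete: answer a new question by finding the most similar question in a log of past (query, chosen answer) pairs. The tiny log below is a made-up stand-in for the years of search traffic described above, and the word-overlap similarity is a crude substitute for a learned model.

```python
# Hypothetical log of past queries and the answers users ultimately chose.
QUERY_LOG = [
    ("how tall is the eiffel tower", "The Eiffel Tower is about 330 meters tall."),
    ("when was the eiffel tower built", "Construction finished in 1889."),
    ("how do i reset a router", "Unplug it for 30 seconds, then plug it back in."),
]

def similarity(a: str, b: str) -> float:
    """Jaccard overlap of word sets: a crude stand-in for a trained model."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def answer(question: str) -> str:
    # Return the answer attached to the most similar past query.
    past_query, past_answer = max(QUERY_LOG, key=lambda qa: similarity(question, qa[0]))
    return past_answer

print(answer("how tall is the eiffel tower in meters"))
```

The more query-and-answer history a company has, the better this kind of lookup-plus-learning works, which is exactly the edge a search business provides.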

Of course, Google is probably working on a response to competitors’ messaging chatbots by adding similar capabilities to its own messaging apps. But that’s just one trivial case. Google will use its significant AI assets to make every human-computer interaction more intelligent and more conversational, drawing on its huge sea of training data.
