The announcement of the intelligent assistant was unexpected but not really a surprise. No leaks spoiled it; Google's open approach to innovation did. Through participation in open forums and open source projects, published papers, and the release of application programming interfaces (APIs), Google publicly signals its direction.
Google Home was built on the Google Assistant software platform, which was also announced this week. Think of Google Home as a hardware competitor to Amazon Echo, and of Google Assistant as a software platform like Google Now but much more context aware and intelligent.
Google envisions that Google Home, which will ship later this year, will become a home control center. It answers multilevel contextual questions from the web, and for users who opt in, it can access calendar and itinerary information to respond to questions and announce reminders. It can also be instructed to stream music directly from a favorite cloud music service or to cast a video to a television. It interfaces with the Nest thermostat and will interface with other standards-compliant smart home devices.
Mobile phone digital assistants suffer from power-saving features that interfere with the dedicated natural language processing needed to consistently interpret a user's commands, and from intermittent mobile internet connections. Google Home, a wall-outlet-powered device, has neither constraint, making it a better platform for introducing the general public to Google's digital assistant. Google Home will also serve as a reference design for consumer OEMs that could add Google Assistant features to their products.
Google Assistant is built on three of Google's technical advantages: search, natural language translation and machine learning. Google's knowledge graph includes over a billion people, places and things, and more than 21 billion semantic relationships between them, which can be used to respond to spoken questions. Google's natural language voice recognition is already very accurate, and the far-field voice recognition and other techniques demonstrated during the introduction represent a large incremental improvement. Applying Google's machine learning expertise will add context and, if the user opts in, personalization that will improve answers.
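The knowledge graph, at least, is already open to outside developers through the Knowledge Graph Search API. As a rough illustration, the sketch below queries it over REST for entities matching a phrase; the class name and the placeholder key are hypothetical, and a real key is issued through the Google API Console.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

public class KnowledgeGraphQuery {
    // Placeholder for illustration; a real key comes from the Google API Console.
    private static final String API_KEY = "YOUR_API_KEY";

    public static void main(String[] args) throws Exception {
        // Search the public knowledge graph for entities matching a free-text query.
        String query = URLEncoder.encode("Golden Gate Bridge", "UTF-8");
        URL url = new URL("https://kgsearch.googleapis.com/v1/entities:search"
                + "?query=" + query + "&limit=3&key=" + API_KEY);

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");

        // The response is JSON-LD: matched entities with names, types
        // and short descriptions drawn from the knowledge graph.
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}
```

Each match comes back with the entity's name, types and a short description, which is the raw material a developer would shape into an assistant-style answer.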
APIs available
Earlier versions of all of Google's technical advantages are available for developers to use today to build intelligent digital assistant capabilities into apps. Developers can add intelligent assistant features now and extend them when Google releases more robust versions of the APIs discussed below.
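The speech piece, for instance, has been available to Android developers for years through the platform's SpeechRecognizer, which on most devices is backed by Google's recognition service. The sketch below is a minimal, illustrative Activity (the class name and log tags are hypothetical) that captures one utterance and logs the best transcription:

```java
import java.util.ArrayList;

import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;
import android.speech.RecognitionListener;
import android.speech.RecognizerIntent;
import android.speech.SpeechRecognizer;
import android.util.Log;

public class DictationActivity extends Activity {
    private SpeechRecognizer recognizer;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // Requires the RECORD_AUDIO permission in the manifest.
        recognizer = SpeechRecognizer.createSpeechRecognizer(this);
        recognizer.setRecognitionListener(new RecognitionListener() {
            @Override public void onResults(Bundle results) {
                // Ranked transcription hypotheses, best first.
                ArrayList<String> texts =
                        results.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
                Log.d("Dictation", "Heard: " + texts.get(0));
            }
            @Override public void onError(int error) {
                Log.w("Dictation", "Recognition error: " + error);
            }
            // No-op callbacks required by the interface.
            @Override public void onReadyForSpeech(Bundle params) {}
            @Override public void onBeginningOfSpeech() {}
            @Override public void onRmsChanged(float rmsdB) {}
            @Override public void onBufferReceived(byte[] buffer) {}
            @Override public void onEndOfSpeech() {}
            @Override public void onPartialResults(Bundle partialResults) {}
            @Override public void onEvent(int eventType, Bundle params) {}
        });

        // Free-form dictation is the language model closest to assistant-style queries.
        Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        recognizer.startListening(intent);
    }

    @Override
    protected void onDestroy() {
        recognizer.destroy();
        super.onDestroy();
    }
}
```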
Google also announced its Awareness API. This API determines context using sensor-derived data, such as location (e.g., home, work or a restaurant) and activity (e.g., walking or driving). These basic signals can be combined to infer the user's situation in more specific detail. Fine-grained context from the Awareness API will increase the accuracy of the answers from machine learning systems.
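A minimal sketch of one such snapshot request follows, assuming a Google Play services release that includes the Awareness API and an API key declared in the app manifest; the helper class and method names are illustrative:

```java
import android.content.Context;
import android.os.Bundle;
import android.util.Log;

import com.google.android.gms.awareness.Awareness;
import com.google.android.gms.awareness.snapshot.DetectedActivityResult;
import com.google.android.gms.common.api.GoogleApiClient;
import com.google.android.gms.common.api.ResultCallback;
import com.google.android.gms.location.ActivityRecognitionResult;
import com.google.android.gms.location.DetectedActivity;

public class ContextSnapshot {
    // Requires the com.google.android.gms.permission.ACTIVITY_RECOGNITION permission.
    public static void logCurrentActivity(Context context) {
        // Connect to Google Play services with the Awareness API enabled.
        final GoogleApiClient client = new GoogleApiClient.Builder(context)
                .addApi(Awareness.API)
                .build();

        client.registerConnectionCallbacks(new GoogleApiClient.ConnectionCallbacks() {
            @Override
            public void onConnected(Bundle bundle) {
                // Snapshot call: ask for the user's most probable current
                // activity (walking, driving, still, etc.) from fused sensor data.
                Awareness.SnapshotApi.getDetectedActivity(client)
                        .setResultCallback(new ResultCallback<DetectedActivityResult>() {
                            @Override
                            public void onResult(DetectedActivityResult result) {
                                if (!result.getStatus().isSuccess()) {
                                    Log.w("Awareness", "Could not get activity snapshot");
                                    return;
                                }
                                ActivityRecognitionResult ar =
                                        result.getActivityRecognitionResult();
                                DetectedActivity probable = ar.getMostProbableActivity();
                                Log.d("Awareness", "Activity type " + probable.getType()
                                        + ", confidence " + probable.getConfidence());
                            }
                        });
            }

            @Override
            public void onConnectionSuspended(int cause) {}
        });
        client.connect();
    }
}
```

The same snapshot interface exposes calls for location, nearby places, weather and headphone state, which is how the basic signals above get combined into richer context.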
It is a safe bet that the improved APIs for the knowledge graph, natural language recognition, speech and machine learning that Google is using to develop its digital assistant will become available as new releases of existing APIs.
Google Assistant will likely be monetized through search, AdWords, Google Play and perhaps payment transactions. All of this, including the APIs and business models, needs further development but not a lot of research; it isn't dependent on a breakthrough by Google. The improved versions of the APIs will need time, though, to go through the stages from preview to beta to a final 1.0 release. A best guess is that Google will introduce a preview of the APIs before Google Home ships.