Google announced TensorFlow yesterday, releasing its machine learning research and successful internal scaling work as an open source project under the Apache 2.0 license. TensorFlow will accelerate the adoption of machine learning by the thousands of creative product development teams that lack Google's large-scale machine learning research resources.
A good example of machine learning improving through the interaction of humans and systems is Tesla's Autopilot beta. As drivers interact with and correct Autopilot, they have reported that the guidance system improves itself.
Google has invested in advanced machine learning research to improve its products, applying the company's top artificial intelligence and deep learning talent to the Google Brain project. Launched by Andrew Ng and now led by John Giannandrea, the project works in conjunction with top academic labs such as Stanford and Carnegie Mellon.
Mobile device users have accepted and come to expect accurate speech recognition, language translation, human-like interpretation of photos and videos, and anticipated search results. These are all products of Google's machine learning and neural network research, which made headlines when a network learned to identify cats in untagged videos. At first the experience may seem creepy, but eventually people simply accept systems that anticipate needs and present options framed as "recommendations."
The principle is simple: machines programmed the right way can learn from data (the more data the better) and make decisions at unprecedented speeds. For example, human senses feel pressed to their limits when driving at 70 miles per hour, but at those speeds Tesla's Autopilot can sense, compute, and decide in a fraction of the time. And when a well-engineered machine learning system encounters a human interaction, human intelligence is transferred and the system improves.
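The learn-from-data principle can be illustrated with a toy sketch. This is a hypothetical one-parameter example in plain Python, not code from Google or Tesla: a simple gradient-descent learner, given (x, y) observations, converges toward the relationship hidden in the data.

```python
# Toy illustration of "machines learn from data": fit y ≈ w * x by
# gradient descent. A hypothetical sketch, not any product's real code.

def fit(data, steps=200, lr=0.01):
    """Learn a weight w from (x, y) pairs by minimizing squared error."""
    w = 0.0
    for _ in range(steps):
        # Gradient of the mean squared error with respect to w.
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

# Training data drawn from the true relationship y = 2x.
data = [(x, 2.0 * x) for x in range(1, 6)]
print(round(fit(data), 2))  # converges toward 2.0
```

More data points and more iterations tighten the estimate, which is the "the more data the better" half of the principle in miniature.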
In 2011, Google created DistBelief for its machine learning and artificial intelligence researchers to use in building increasingly large neural networks, trained across thousands of cores, that learned from large, complex datasets to carry out difficult tasks such as recognizing images and interpreting poorly articulated speech. DistBelief demonstrated that machine intelligence could operate at Google's scale of billions of users.
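Training across thousands of cores is typically achieved through data parallelism: the dataset is sharded across workers, each worker computes gradients on its shard, and a central parameter store combines them. The sketch below is a simplified, synchronous, single-process illustration of that pattern, with hypothetical function names; it is not DistBelief's actual design or code.

```python
# Hedged sketch of data-parallel training: workers compute gradients on
# their own data shards; a central "parameter server" applies the
# averaged update. Simplified and synchronous, not DistBelief itself.

def worker_gradient(w, shard):
    """Gradient of mean squared error for y ≈ w * x on one data shard."""
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def train(shards, steps=300, lr=0.01):
    w = 0.0  # the shared parameter held by the parameter server
    for _ in range(steps):
        # Each worker computes a gradient on its shard (conceptually in
        # parallel; here sequentially for simplicity).
        grads = [worker_gradient(w, s) for s in shards]
        # The server applies the averaged update.
        w -= lr * sum(grads) / len(grads)
    return w

# Data drawn from y = 3x, split across two workers.
shards = [[(1, 3.0), (2, 6.0)], [(3, 9.0), (4, 12.0)]]
print(round(train(shards), 2))  # approaches 3.0
```

In a real distributed system the workers run on separate machines and communicate with the parameter store over the network, which is where most of the engineering difficulty lies.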
Creating a system like DistBelief for use within the confines of Google was an internal success, but it couldn't be released to the independent machine learning or general developer communities. DistBelief was narrowly targeted at neural networks, hard to configure, and tightly coupled to Google's internal infrastructure. What was missing was the engagement of the machine learning community: developers learning from one another by sharing code and experimenting dynamically, recursively improving machine learning development through their interactions, much as machines learn from interaction with humans.
Google's second-generation machine learning system, TensorFlow, was designed specifically to correct DistBelief's shortcomings. Google built TensorFlow for more general applications, making it more flexible, more portable, and within reach of more developers. Built for production machine learning applications, it is intended to be fast and scalable; in some benchmarks, TensorFlow was twice as fast as DistBelief.
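Much of TensorFlow's flexibility and portability comes from its core abstraction: a computation is expressed as a dataflow graph of operations over tensors, which the runtime can then map to CPUs, GPUs, servers, or mobile devices. The sketch below illustrates the graph idea in plain Python with hypothetical class and operation names; it is not the TensorFlow API.

```python
# Minimal sketch of a dataflow graph, the abstraction at TensorFlow's
# core. Class and op names here are hypothetical, not TensorFlow's API.

class Node:
    """A node in a dataflow graph: an operation applied to input nodes."""
    def __init__(self, op, inputs=(), value=None):
        self.op, self.inputs, self.value = op, inputs, value

    def eval(self):
        # Evaluate recursively; a real runtime would instead schedule
        # the graph's operations across the available devices.
        if self.op == "const":
            return self.value
        args = [n.eval() for n in self.inputs]
        if self.op == "add":
            return args[0] + args[1]
        if self.op == "mul":
            return args[0] * args[1]
        raise ValueError("unknown op: " + self.op)

def const(v):
    return Node("const", value=v)

# Build the graph once, then evaluate it: (3 + 4) * 2
graph = Node("mul", (Node("add", (const(3), const(4))), const(2)))
print(graph.eval())  # 14
```

Separating the graph's definition from its execution is what lets the same model description run unchanged on very different hardware, which is the portability the article describes.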
Deep learning, machine learning, and artificial intelligence are core competencies of Google's, areas where the company leads Apple and Microsoft. Google's strategy is to maintain this lead by putting its technology out in the open and improving it through large-scale adoption and code contributions from the community at large.