All you need to know about machine learning in 12 minutes, 45 seconds

Yann LeCun, artificial intelligence pioneer and head of Facebook’s AI research group, explains machine learning in six short videos

Facebook wants to grow the community of companies that understand and use artificial intelligence (AI) to accelerate progress in the field. Tech leaders Facebook, Google, Microsoft and IBM believe AI is the next platform, following mobile. Last October, Google CEO Sundar Pichai described the AI platform shift, paraphrasing Facebook's mobile-first tagline to describe an AI-first world.

Facebook publishes its AI and machine learning research, speaks at conferences and licenses its software under open source licenses to accelerate development and demystify AI. Today, in a blog post, Facebook released six short videos, narrated by Yann LeCun, head of Facebook's AI research group and a machine learning pioneer, to introduce developers, data scientists and other interested readers to the most important AI topics.

The videos linked below explain the most-talked-about topics in machine learning today and are intended to encourage technically interested people at all levels to learn more.

Introduction to AI (2 minutes, 40 seconds)

An explanation of how machines can be intelligent.

Machine learning (4 minutes, 17 seconds)

Machine learning uses models, running on neural networks built with libraries such as TensorFlow, Torch or Caffe, that are trained with data rather than programmed to recognize and interpret application-specific inputs such as text, images and video. A neural network is a computing system made up of many simple, highly interconnected processing elements that process information based on their dynamic state response to external inputs.
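As a concrete illustration, a single processing element can be sketched in a few lines of Python; the input values, weights, bias and choice of tanh as the nonlinearity below are all arbitrary:

```python
import numpy as np

# One "processing element" (neuron): weight the inputs, sum them,
# and pass the result through a nonlinearity. A network wires many
# of these together. All values here are illustrative.
inputs = np.array([0.5, -1.0, 0.25])
weights = np.array([0.8, 0.1, -0.4])
bias = 0.2

# Weighted sum of inputs plus bias, squashed by tanh.
activation = np.tanh(weights @ inputs + bias)
```

Training a network amounts to adjusting the weights and biases of many such elements until the outputs are useful.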

Feeding data into the model's generic algorithms teaches the model to operate on data without explicit programming. To train a model to translate Russian to English, large sample data sets are fed into the model until it translates Russian sentences into English with a very high probability of accuracy. Likewise with recognizing the contents of an image: large numbers of images are fed into the model until the probability that it recognizes images accurately is very high.
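The train-until-accurate loop can be sketched with a minimal linear model in NumPy. The toy data set, learning rate and iteration count below are hypothetical stand-ins for the large sample data sets the videos describe:

```python
import numpy as np

# Hypothetical labeled data: points in the unit square, labeled 1 when
# x0 + x1 > 1, with a small gap between the classes so the rule is
# cleanly learnable.
rng = np.random.default_rng(0)
pts = rng.uniform(0, 1, size=(1000, 2))
pts = pts[np.abs(pts.sum(axis=1) - 1.0) > 0.1][:200]
labels = (pts.sum(axis=1) > 1.0).astype(float)

w = np.zeros(2)   # model parameters: a linear decision boundary
b = 0.0
lr = 0.5          # learning rate

# Feed the data through the model repeatedly; each pass nudges the
# parameters so the predictions match the labels a little better.
for _ in range(2000):
    p = 1 / (1 + np.exp(-(pts @ w + b)))  # predicted probabilities
    err = p - labels                      # prediction errors
    w -= lr * (pts.T @ err) / len(labels)
    b -= lr * err.mean()

accuracy = ((p > 0.5) == (labels == 1)).mean()  # fraction predicted correctly
```

Nothing in the loop encodes the rule "x0 + x1 > 1"; the model recovers it from the examples alone.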

That this works is confirmed by experience. Facebook users with multilingual friends can read foreign-language posts by clicking translate. Google Photos users can let Google organize their photos by content after giving the app just a little information about who is in the photos.

Gradient descent (2 minutes, 30 seconds)

A machine learning model learns from its mistakes. During training, when the neural network incorrectly predicts or misinterprets an input, it is because some of the network's processing elements hold incorrect values, so the algorithm produces the wrong answer; for example, an image of a cat is recognized as a turnip. All of the incorrect processing element states are collected and reduced to an error function, which is used to correct the algorithm.

The processing element errors can be simplified to three data points on a graph, where the correct processing element state is a line drawn to minimize the difference between the line and the data points. Real models have many more data points, and the correct state is far more complex than a line, but conceptually the process is the same. By repeating it, correcting for more errors each time, the model's errors are reduced and the accuracy of its predictions increases.
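The line-fitting picture maps directly onto a few lines of NumPy: start with a wrong guess for the line, measure the error against each data point, and repeatedly step the parameters in the direction that shrinks the error. The three points and the learning rate below are illustrative:

```python
import numpy as np

# Three data points that happen to lie on the line y = 2x (illustrative).
x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])

w, b = 0.0, 0.0   # the line to fit: y_hat = w * x + b, starting wrong
lr = 0.05         # learning rate: how big a step to take downhill

for _ in range(2000):
    error = (w * x + b) - y          # difference between line and points
    grad_w = 2 * np.mean(error * x)  # gradient of the mean squared error
    grad_b = 2 * np.mean(error)
    w -= lr * grad_w                 # step against the gradient
    b -= lr * grad_b

# After training, w is close to 2 and b close to 0.
```

Each step moves the line "downhill" on the error surface, which is exactly the descent along a gradient that gives the technique its name.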

Deep learning (1 minute, 3 seconds)

Deep learning adds more layers of processing elements to the neural network, and the states of the multiple layers are represented as vectors instead of individual data points. Deep learning can also change how models learn.

Simpler models require labeled data sets to train. The training method is called supervised learning, and it uses data that has been categorized. For example, an image of a cat is labeled cat, an image of a dog labeled dog, etc.

Deep learning can also be unsupervised, meaning the model learns without human categorization and labeling.
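A classic unsupervised sketch is k-means clustering: the data below carries no labels at all, yet the algorithm discovers the two groups on its own. The blob positions, cluster count and initialization are hypothetical choices for illustration:

```python
import numpy as np

# Two unlabeled blobs of points (positions are hypothetical).
rng = np.random.default_rng(2)
data = np.vstack([
    rng.normal(0.0, 0.5, size=(50, 2)),
    rng.normal(5.0, 0.5, size=(50, 2)),
])

# Start with one point from each region as the initial centers.
centers = data[[0, 50]].copy()

for _ in range(20):
    # Assign every point to its nearest center...
    dists = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
    assign = dists.argmin(axis=1)
    # ...then move each center to the mean of its assigned points.
    centers = np.array([data[assign == k].mean(axis=0) for k in range(2)])

# centers ends up near (0, 0) and (5, 5): the groups were found
# without any human categorization.
```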

Back propagation (1 minute, 42 seconds)

Back propagation is the application of gradient descent to correct the errors of a multi-layer neural network.
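A minimal sketch for a two-layer network, assuming arbitrary layer sizes and a squared-error loss: the output error is pushed backward through each layer by the chain rule, and one resulting gradient is checked against a finite-difference estimate:

```python
import numpy as np

# A tiny two-layer network: input -> hidden (tanh) -> output,
# with a squared-error loss. Sizes and values are illustrative.
rng = np.random.default_rng(1)
W1 = rng.standard_normal((3, 2))  # hidden-layer weights
W2 = rng.standard_normal((1, 3))  # output-layer weights
x = np.array([0.5, -0.2])
target = np.array([1.0])

def loss(W1, W2):
    h = np.tanh(W1 @ x)
    return 0.5 * np.sum((W2 @ h - target) ** 2)

# Forward pass, keeping intermediates for the backward pass.
h = np.tanh(W1 @ x)
out = W2 @ h
delta_out = out - target                     # error at the output layer

# Backward pass: propagate the error back through each layer (chain rule).
grad_W2 = np.outer(delta_out, h)
delta_h = (W2.T @ delta_out) * (1 - h ** 2)  # tanh'(z) = 1 - tanh(z)**2
grad_W1 = np.outer(delta_h, x)

# Sanity check: one backprop gradient against a numerical estimate.
eps = 1e-6
W1_bumped = W1.copy()
W1_bumped[0, 0] += eps
numeric = (loss(W1_bumped, W2) - loss(W1, W2)) / eps
```

The gradients from the backward pass are then used for the descent steps described in the previous section.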

Convolutional neural nets (1 minute, 50 seconds)

Convolutional neural nets, or convnets, were inspired by the visual cortex in animals. Convnets can be tiled across an input to recognize part of an image or part of a sentence, and to detect an element even after it moves to a different location in the image, or a word to a different position in the sentence.
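The tiling idea can be sketched as a plain 2D convolution: the same small filter is applied at every position, so it finds a feature wherever it sits, even after the feature moves. The image and filter values below are illustrative:

```python
import numpy as np

# Slide one small filter over every position of an image: the same
# weights are reused everywhere, so a feature is detected wherever
# it appears. Image and filter values are illustrative.
def conv2d(image, kernel):
    kh, kw = kernel.shape
    out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Dot product of the filter with one image patch.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.zeros((6, 6))
image[2, 1:5] = 1.0                  # a short horizontal line
kernel = np.array([[-1, -1, -1],
                   [ 2,  2,  2],
                   [-1, -1, -1]])    # responds strongly to horizontal lines

response = conv2d(image, kernel)

# Move the line down two rows: the same filter finds it at the new spot.
shifted = np.roll(image, 2, axis=0)
response_shifted = conv2d(shifted, kernel)
```

The peak response is identical for both images, just relocated, which is why convnets handle a change of location so naturally.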

These videos are indexed on Facebook’s engineering website. Another good source of introductory neural network and machine learning information is Adam Geitgey’s five-part blog series, Machine Learning Is Fun!

Copyright © 2016 IDG Communications, Inc.
