A two-day artificial intelligence (AI) conference could easily overlook the opposing point of view, especially one held in San Francisco, the epicenter of technology innovation and the center of over-hyped technology.

But by adding Gary Marcus to the speaker roster of the MIT Technology Review's EmTech Digital conference, the organizers delivered a balanced view of AI, including engaging criticism of where AI works, where it does not, and why Marcus says the current direction of R&D in the field will not lead to artificial general intelligence (AGI), a theoretical machine intelligence that equals human intelligence.

Marcus, a neural science professor at New York University and a leading figure in AI, had special credibility as a critic because he had just sold his two-year-old AI startup, Geometric Intelligence, to Uber.

3 things AI can do:

Speech recognition
Image recognition, when the number of objects in the image is limited
Natural language understanding in narrowly bounded domains

6 things AI cannot do:

Conversational interfaces (ask Siri something off-script, and it breaks down)
Automated scientific discovery
Automated medical diagnosis
Automated scene comprehension for blind people
Domestic robots
Safe and reliable driverless cars

A couple of Marcus's points are arguable. Automated medical diagnosis, for example, has already used image understanding to read radiological scans and to diagnose diabetic retinopathy from retinal images with accuracy equal to, and in some cases better than, that of human clinicians. But generally, he is correct. He chose these six areas to make a point of comparison: to solve these problems, machines will have to learn more the way a child learns language than the way machines are trained today.

Marcus used the example of his nearly 3-year-old daughter Chloe's common sense to explain what a child can do that artificial intelligence cannot.
He described a conversation in which he told Chloe that he would put her artwork into mommy's armoire. Chloe's deduction (mama cannot see the artwork now, but she will see it when she opens the armoire) illustrates the limits of machine intelligence. Marcus asserted that Chloe could infer what he meant from just a few words, without having been trained on 10 million similar situations the way most AI systems are trained today.

AI based on machine learning

With the exception of a few examples of reinforcement learning, such as the Libratus poker bot and Google's AlphaGo, most AI today is based on machine learning that predicts a result using neural networks.

Image recognition is a good example. Millions of images are shown to a neural network, which sorts them into categories to train the model. Then a set of correctly categorized images is used to mathematically correct the network's errors via gradient descent, a form of iterative, averaged error reduction, improving the model's precision at predicting the objects in an image.

For dramatic effect, Marcus reworded a quote from AI luminary Andrew Ng categorizing the capability of machine intelligence to imitate human intelligence.

Marcus qualified Ng's statement that anything a person can do in one second can be automated with neural networks and machine learning today. That is possible only when there is an enormous corpus of training data with which to build models that understand images, translate one language into another, or recognize the meaning of natural language. For a model to work with high precision, it must also be applied to a fairly narrow domain that does not change so much as to require frequent retraining, and the application must tolerate errors, albeit sometimes at very low rates.
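The gradient-descent training loop described above (show examples, measure the averaged error, nudge the model's parameters downhill) can be sketched in a few lines. This is a minimal toy, assuming a one-parameter linear model and synthetic data rather than a real image network:

```python
# Toy gradient descent: fit y = w * x to synthetic data.
# A real image classifier runs the same loop with millions of
# parameters and labeled images instead of one weight and ten points.

data = [(x, 3.0 * x) for x in range(1, 11)]  # "correct" answer is w = 3

w = 0.0     # model parameter, starts uninformed
lr = 0.001  # learning rate: how far to step downhill each iteration

for epoch in range(200):
    # Gradient of mean squared error, averaged over the data set
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # step against the gradient to reduce the error

print(round(w, 3))  # converges toward 3.0
```

Each pass shrinks the gap between the model's predictions and the correct labels; after a few hundred iterations the weight settles near the value that minimizes the averaged error, which is all "training" means here.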
These systems do not really understand, Marcus asserted; they predict the most likely meaning.

The state of the art in machine learning prediction is about 98 percent accuracy. Marcus said this level of precision is fine for systems such as Amazon's machine learning recommendation system, which might recommend another of his books to someone who bought one of them. If the recommendation does not lead to a satisfying read 2 percent of the time, the consequential cost is just $20. He questioned the consequences, though, of a self-driving car's pedestrian detector that is 98 percent accurate, or an eldercare robot that drops geriatric patients only 2 percent of the time.

The talk was not all criticism. Marcus said that one day intelligent machines will be able to perform complex tasks such as reading the research papers about a disease like bladder cancer and proposing a novel treatment, but not yet, and perhaps not in his lifetime.

Marcus recommends a worldwide, government-funded AI research program, modeled on CERN, to focus on the long-term research needed to create an AGI. CERN operates a particle accelerator that cost $4.6 billion to build and is shared among researchers studying particle physics.

Fundamental scientific questions that might take a decade or two of research to answer must be resolved before an AGI can be created. Blue-sky researchers engaged in fundamental AI research are limited in budget and computational resources, and private-sector research is too focused on technologies that achieve shorter-term financial goals, which Marcus says will not lead to machines with common sense and AGI.