AI heading back to the trough

Expectations for artificial intelligence (AI) are becoming too inflated. AI will indeed change everything, but not any time soon.

AI-powered devices such as Google Home will change things, but not anytime soon
Credit: Derek Walter

I like Gartner’s concept of the technology hype cycle. It assumes that expectations of new technologies quickly ramp to an inflated peak, drop into a trough of disillusionment, then gradually ascend a slope of enlightenment until they plateau. Of course, not all technologies complete the cycle or transition through the stages at the same pace.

Artificial intelligence (AI) has arguably been in the trough for 60 years. I am thinking of Kubrick’s HAL and Roddenberry’s “computer” that naturally interact with humans. That’s a long trough, and despite popular opinion, the end is nowhere in sight.


There’s so much excitement and specialized research taking place that AI has fragmented into several camps such as heuristic programming for game-playing AI, natural language processing for conversational AI, and machine learning for statistical problems. The hype is building again, and just about every major tech company and countless startups are racing toward another inflated peak and subsequent trough.

What changed with AI?

Expectations are so high because of breakthroughs in three broad categories: compute, data and algorithms. The compute innovations refer to general cloud services and specific improvements in processing arrays and graphics processing units (GPUs).

The availability of huge data sets has also been important for machine learning. Large labeled and annotated data sets have enabled progress in computer vision, natural language and speech recognition. There are numerous public data sets available, plus many of the larger firms are also using their own private data.

The third ingredient is advanced algorithms that, combined with compute power and data, provide responses or predictions. For example, algorithms are used to recommend movies to watch, stocks to trade and updates to include on a timeline. The concept is as old as computing itself, but suddenly vastly improved.

Or is it? While a computer beat a human chess champion 21 years ago, it wasn’t until two months ago that a different computer beat a human champion at Go. There was an impressive milestone on Jeopardy in 2011 and more recently a breakthrough regarding Ms. Pac-Man.

AI will definitely change the world, but don’t hold your breath, at least not for general-purpose AI. Specialized AI, such as self-driving cars, is progressing quickly. The general-purpose stuff is almost useless.

For AI devices, answering a question is easy; understanding the question is hard

I have yet to find a general AI solution that is helpful. For example, Google Assistant often suggests the best time for me to leave for the airport. It’s invariably wrong. It bases its recommendation largely on my current location and traffic conditions. My personal algorithm for determining the best time to leave for the airport involves relatively big data. I consider variables such as how I intend to get there (car, bus or shuttle). If by car, I factor in where I intend to park. Then there’s gate and concourse information; whether I have PRE on my boarding pass; and whether I intend to eat at the airport before departure.
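The variables above amount to a simple rule-based calculation. Here is a minimal sketch of that personal algorithm in Python — purely illustrative, with invented function names and time estimates, not anything Google actually computes:

```python
from datetime import datetime, timedelta

def personal_departure_time(flight_time: datetime, mode: str = "bus",
                            parking: str = "economy", has_pre: bool = False,
                            eating_at_airport: bool = False) -> datetime:
    """Invented heuristic: work backward from the flight time,
    adding a buffer for each personal variable."""
    buffer = timedelta(minutes=30)                        # walk to gate, boarding
    buffer += timedelta(minutes=10 if has_pre else 40)    # security line (PRE helps)
    if eating_at_airport:
        buffer += timedelta(minutes=45)                   # sit-down meal before departure
    if mode == "car":
        buffer += timedelta(minutes=25)                   # drive to airport
        buffer += timedelta(minutes=20 if parking == "economy" else 5)  # lot shuttle
    elif mode == "bus":
        buffer += timedelta(minutes=55)                   # wait at stop + ride
    return flight_time - buffer

# Example: 9:00 a.m. flight, taking the bus, with PRE on the boarding pass
leave = personal_departure_time(datetime(2017, 7, 14, 9, 0),
                                mode="bus", has_pre=True)
print(leave.strftime("%H:%M"))  # → 07:25
```

The point is not the arithmetic — any of these rules is trivial — but that the relevant variables live in my head, not in any data the assistant can see.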

Usually, I take the bus to the airport and query Google about the bus schedule. The famed Google Assistant can’t recognize that pattern. Telling me when to leave to catch the airport bus would be far more helpful.

But having the data to answer the question isn’t Google’s problem. The difficulty lies in understanding the question. Emmanuel Mogenet, head of Google Research Europe, recently highlighted the limitations of natural language processing with a similar example. Google Assistant can’t answer “will it be dark when I get home?” Let me put that in context. Google can’t answer this question even when it knows where the user is, where the user lives, and when the sun sets at that location.

This is not a question that has an answer Google can look up. It needs to pull all this information together, and doing that requires truly understanding the relationship between the question and the data. That’s a hard puzzle to solve. Now consider that Google Assistant is six times more likely to correctly answer a question than Amazon’s Alexa.
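Once the relationship between the question and the data is understood, the computation itself is trivial — which is exactly Mogenet’s point. A hypothetical sketch, with invented values standing in for data the assistant already has (current location, home address, local sunset time):

```python
from datetime import datetime, timedelta

def dark_when_home(now: datetime, commute: timedelta, sunset: datetime) -> bool:
    """The hard part is mapping the question to these three inputs;
    the answer is a single comparison."""
    return now + commute > sunset

# Invented example values for illustration only
now = datetime(2017, 7, 14, 20, 30)       # current time at current location
commute = timedelta(minutes=45)           # estimated trip home
sunset = datetime(2017, 7, 14, 21, 0)     # sunset at home location

print(dark_when_home(now, commute, sunset))  # → True (arrive 21:15, after sunset)
```

Three lines of logic — yet no assistant can reliably get from the spoken question to this function call.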

Alexa now boasts more than 15,000 “skills.” These skills are largely simple web queries. The AI part is using speech instead of a keyboard.

The search for intelligence continues

AI has a ways to go, but that’s not even the whole problem. As with my airport example above, AI works best when it has access to contextual data. That often means exposing personal and confidential data to the service, which is a practice riddled with concerns and liabilities. It’s not as if security breaches are rare.

There’s also the little issue that AI is very hard to test. Developing self-driving cars requires driving cars millions of miles. That just doesn’t scale, so we keep discovering gaps with each new application. Even self-driving car behavior can be surprising. Volvo recently found that its self-driving cars cannot recognize kangaroos. Oops.

I think it’s important to reset expectations about AI. It’s fantastic that some people find Siri, Google Assistant and Alexa helpful sometimes. We should briefly celebrate the tremendous progress in kitchen timer technology—and then get back to work.
