Are we in an artificial intelligence winter?

Intelligence Advanced Research Projects Activity wants to move artificial intelligence development along more quickly


Can the development of artificial intelligence technology be kicked up a notch? Scientists at the Intelligence Advanced Research Projects Activity (IARPA) certainly hope so and recently issued a Request for Information (RFI) about how AI advances could be made more quickly and consistently.

“Artificial intelligence, defined here as computer simulation of cognitive processes such as perception, recognition, reasoning, and control, has captured the public’s imagination for over 60 years. However, artificial intelligence research has proceeded in fits and starts over much of that time, as the field repeats a boom/bust cycle characterized by promising bursts of progress followed by inflated expectations and finally disillusionment, leading to what has become known as an ‘AI winter’ – a long period of diminished research and funding activity,” IARPA wrote. IARPA is the high-risk, high-reward research arm of the Office of the Director of National Intelligence.


IARPA noted, “conventional wisdom has been that new algorithms were the limiting factor in making steady progress towards artificial intelligence. However, recent advances in machine learning, a sub-field of artificial intelligence, have established that historical algorithms in conjunction with high-performance computers can be used to achieve nearly human-level performance on diverse tasks such as image and speech recognition, language translation, and video game play. In each of these instances, and in many others, rapid progress was facilitated by the availability of massive amounts of training data well-suited to the problem under study.”
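IARPA's point about training data can be illustrated with a toy experiment (this sketch is my own illustration, not drawn from the RFI): holding the algorithm fixed, a simple nearest-neighbor classifier on a made-up two-dimensional task gets markedly more accurate as the amount of labeled training data grows.

```python
import random

random.seed(0)

def make_example():
    """Synthetic task: the label is 1 if the point lies in the upper half-plane."""
    x, y = random.uniform(-1, 1), random.uniform(-1, 1)
    return (x, y), int(y > 0)

def nn_classify(train, point):
    """1-nearest-neighbor: predict the label of the closest training point."""
    px, py = point
    nearest = min(train, key=lambda ex: (ex[0][0] - px) ** 2 + (ex[0][1] - py) ** 2)
    return nearest[1]

test_set = [make_example() for _ in range(500)]

accs = {}
for n in (10, 100, 1000):
    train = [make_example() for _ in range(n)]
    correct = sum(nn_classify(train, p) == label for p, label in test_set)
    accs[n] = correct / len(test_set)
    print(f"{n:5d} training examples -> accuracy {accs[n]:.2f}")
```

With the same unchanged algorithm, accuracy climbs as the training set grows from 10 to 1,000 examples, which is the dynamic IARPA describes: data availability, not algorithmic novelty, was the limiting factor.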


It is on these training resources that IARPA is focusing this RFI, asking the artificial intelligence research community which training resources, if created, would be most likely to drive AI progress.

IARPA asks these questions:

  • Which problem domain(s) has the greatest potential to benefit from the availability of new training resources and why?
  • What new training resources are needed to achieve significant progress in this domain? How should these resources be structured? How do the proposed resources compare with currently available resources?
  • What kind of effort is needed to create and/or curate these training resources? What technical, logistical, and/or legal challenges would be associated with such an effort? How much would such an effort cost, and how long would it take? How much effort and money would be required to store, maintain, distribute, and/or utilize the proposed training resources?
  • Who would be the major stakeholders in the proposed training resources? How would these stakeholders use the proposed resources?
  • Annual challenges (e.g. ImageNet Large Scale Visual Recognition Challenge) employing a standard set of data for training and/or evaluation have helped to catalyze progress in many machine learning problem domains. Should a challenge be created in the proposed problem domain, and if so, how should it be designed, implemented, and judged?

The RFI is just one of IARPA’s recent forays into advancing artificial intelligence and machine learning technologies. In January the agency announced a program whose chief goal is to reverse-engineer human brain algorithms. The five-year program, called Machine Intelligence from Cortical Networks (MICrONS), would offer participants a “unique opportunity to pose biological questions with the greatest potential to advance theories of neural computation and obtain answers through carefully planned experimentation and data analysis.”

IARPA said that “despite significant progress in machine learning over the past few years, today’s state of the art algorithms are brittle and do not generalize well. In contrast, the brain is able to robustly separate and categorize signals in the presence of significant noise and non-linear transformations, and can extrapolate from single examples to entire classes of stimuli.”
