Will future developments in the realm of artificial intelligence be like the wild west or something more controlled? The real answer is probably somewhere in the middle, but the government, at least, would like to see more measured research and development.
The White House today issued a report on future directions for AI called Preparing for the Future of Artificial Intelligence. The report comes to several conclusions – some obvious and some perhaps less so. For example, it accepts that AI technologies will continue to grow in sophistication and ubiquity, thanks to AI R&D investments by government and industry.
The report also advocates for AI standards, strong cybersecurity, and close attention to AI's potential impact on jobs.
“This plan assumes that the impact of AI on society will continue to increase, including on employment, education, public safety, and national security, as well as the impact on U.S. economic growth. Third, it assumes that industry investment in AI will continue to grow, as recent commercial successes have increased the perceived returns on investment in R&D,” the report states.
Continuing, “this plan assumes that some important areas of research are unlikely to receive sufficient investment by industry, as they are subject to the typical under-investment problem surrounding public goods. Lastly, this plan assumes that the demand for AI expertise will continue to grow within industry, academia, and government, leading to public and private workforce pressures.”
The report's key recommendations and observations include:
- Develop effective methods for human-AI collaboration: Rather than replace humans, most AI systems will collaborate with humans to achieve optimal performance. Research is needed to create effective interactions between humans and AI systems.
- Understand and address the ethical, legal, and societal implications of AI: We expect AI technologies to behave according to the formal and informal norms to which we hold our fellow humans. Research is needed to understand the ethical, legal, and social implications of AI, and to develop methods for designing AI systems that align with ethical, legal, and societal goals.
- Ensure the safety and security of AI systems: Before AI systems are in widespread use, assurance is needed that the systems will operate safely and securely, in a controlled, well-defined, and well-understood manner. Further progress in research is needed to address the challenge of creating AI systems that are reliable, dependable, and trustworthy.
- Develop shared public datasets and environments for AI training and testing: The depth, quality, and accuracy of training datasets and resources significantly affect AI performance. Researchers need to develop high-quality datasets and environments, and to enable responsible access to high-quality testing and training resources.
- AI embedded in critical systems must be robust in order to handle accidents, but must also be secure against a wide range of intentional cyber-attacks. Security engineering involves understanding the vulnerabilities of a system and the actions of actors who may be interested in attacking it.
- Some cybersecurity risks, however, are specific to AI systems. For example, one key research area is “adversarial machine learning,” which explores the degree to which AI systems can be compromised by “contaminating” training data, by modifying algorithms, or by making subtle changes to an object that prevent it from being correctly identified (e.g., prosthetics that spoof facial recognition systems). The implementation of AI in cybersecurity systems that require a high degree of autonomy is also an area for further study.
- The development of standards must be hastened to keep pace with the rapidly evolving capabilities and expanding domains of AI applications. Standards provide requirements, specifications, guidelines, or characteristics that can be used consistently to ensure that AI technologies meet critical objectives for functionality and interoperability, and that they perform reliably and safely. Adoption of standards brings credibility to technology advancements and facilitates an expanded interoperable marketplace.
- One example of an AI-relevant standard that has been developed is P1872-2015 (Standard Ontologies for Robotics and Automation), developed by the Institute of Electrical and Electronics Engineers (IEEE). This standard provides a systematic way of representing knowledge and a common set of terms and definitions. These allow for unambiguous knowledge transfer among humans, robots, and other artificial systems, as well as provide a foundational basis for the application of AI technologies to robotics.
- While improved hardware can lead to more capable AI systems, AI systems can also improve the performance of hardware. This reciprocity will lead to further advances in hardware performance, since physical limits on computing require novel approaches to hardware designs. AI-based methods could be especially important for improving the operation of high performance computing (HPC) systems. Such systems consume vast quantities of energy. AI is being used to predict HPC performance and resource usage, and to make online optimization decisions that increase efficiency; more advanced AI techniques could further enhance system performance.
- AI can also be used to create self-reconfigurable HPC systems that can handle system faults when they occur, without human intervention. Improved AI algorithms can increase the performance of multi-core systems by reducing data movement between processors and memory—the primary impediment to exascale computing systems that operate 10 times faster than today’s supercomputers. In practice, the configuration of executions in HPC systems is never the same, and different applications are executed concurrently, with the state of each software code evolving independently in time. AI algorithms need to be designed to operate online and at scale for HPC systems.
- AI technologies can maximize efficient use of bandwidth and automation of information storage and retrieval. AI can improve filtering, searching, language translation, and summarization of digital communications, positively affecting commerce and the way we live our lives.
- AI systems can assist scientists and engineers in reading publications and patents, refining theories to be more consistent with prior observations, generating testable hypotheses, performing experiments using robotic systems and simulations, and engineering new devices and software.
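The adversarial machine learning risk flagged above can be made concrete with a toy sketch. The example below shows how a small, deliberately chosen perturbation can flip the decision of a fixed linear classifier; the weights, inputs, and step size are invented for illustration and are not drawn from the report.

```python
import numpy as np

# Hypothetical linear classifier: predicts 1 when w.x + b > 0.
w = np.array([2.0, -1.0, 0.5])   # classifier weights (made up)
b = 0.1                          # bias term

def predict(x):
    """Return 1 if the linear score is positive, else 0."""
    return int(w @ x + b > 0)

x = np.array([1.0, 0.5, 1.0])    # a clean input, classified as 1

# Gradient-sign-style perturbation: nudge every feature against the
# sign of its weight, the direction that lowers the score fastest.
eps = 0.8
x_adv = x - eps * np.sign(w)

# Each feature moved by at most eps, yet the decision flips:
# predict(x) == 1 while predict(x_adv) == 0.
```

The same idea scales up to the report's example of subtle physical changes that cause an object to be misidentified: an attacker who knows (or can probe) the model's decision boundary can make structured changes far smaller than random noise would require.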
Additional work in AI standards development is needed across all subdomains of AI. Standards are needed to address:
- Software engineering: to manage system complexity, sustainment, security, and to monitor and control emergent behaviors;
- Performance: to ensure accuracy, reliability, robustness, accessibility, and scalability;
- Metrics: to quantify factors impacting performance and compliance to standards;
- Safety: to evaluate risk management and hazard analysis of systems, human computer interactions, control systems, and regulatory compliance;
- Usability: to ensure that interfaces and controls are effective, efficient, and intuitive;
- Interoperability: to define interchangeable components, data, and transaction models via standard and compatible interfaces;
- Security: to address the confidentiality, integrity, and availability of information, as well as cybersecurity;
- Privacy: to protect information while it is being processed, in transit, or in storage;
- Traceability: to provide a record of events (their implementation, testing, and completion), and for the curation of data; and
- Domains: to define domain-specific standard lexicons and corresponding frameworks.