The National Artificial Intelligence Research and Development Strategic Plan
Identifies gaps in the private sector’s Artificial Intelligence research and development, coordinates Federal investment, and ensures
that the US takes full advantage of AI technology.
Last updated March 10, 2017, for the October 12, 2016 report.
WHAT IT DOES
The National Science and Technology Council (NSTC) and the Subcommittee on Networking and Information Technology Research
and Development (NITRD) issued the report, The National Artificial Intelligence Research and Development Strategic Plan to
establish a set of objectives for federally-funded AI research conducted within and outside the government, including academic
research. The goal of this research is to produce new AI knowledge and technologies that benefit society while minimizing negative
impacts. This report fulfills these goals by outlining seven strategies and two recommendations for federal AI research and
development (R&D) in the context of fourteen different industry applications.
This report prioritizes strategies for future federal investment in research areas where private investment is unlikely. Benefits are
described in terms of increased economic prosperity, improved quality of life, and strengthened national security as applied to
particular industry applications expected to benefit from advances in AI. The report states that increased economic prosperity can
be realized through AI developments in applications that include manufacturing, logistics, finance, transportation, agriculture,
marketing, communication, and science and technology; improved educational opportunities and quality of life will come from AI
contributions to education, medicine, law, and personal services; and enhanced national and homeland security will be achieved
through AI advances applied to security and law enforcement, and to safety and prediction.
The R&D strategies included in the report are organized into Basic R&D areas of AI (identified in strategies 1 and 2) and
Cross-Cutting Foundations of AI (strategies 3 through 7). Cross-cutting foundations are areas of research whose discoveries are
applicable throughout the field of AI, while basic R&D areas of AI are more narrowly focused and are intended to build on the cross-cutting strategies.
The strategies proposed for federal AI R&D that apply to basic R&D include the following:
Strategy 1: Make Long-Term Investments in AI Research
While the report notes that an important component of long-term research is incremental research with predictable outcomes, it
argues that long-term sustained investments in high-risk research can lead to high-reward payoffs. Areas with potential long-term
payoffs include:
Development of more-advanced machine learning algorithms that can identify the useful information hidden in big data;
Enhancements in how AI systems detect, classify, identify, and recognize objects;
Improved understanding of the theoretical capabilities and limitations for AI and the extent to which human-like solutions are
possible with AI algorithms;
Research on “general-purpose” AI that exhibits the flexibility and versatility of human intelligence in a broad range of cognitive
domains including learning, language, perception, reasoning, creativity, and planning;
Scalable AI systems that collaborate effectively with each other and with humans to achieve results not possible with a single
system;
Fostering research on how AI can communicate and operate in a more humanlike fashion;
Robots that are more capable, reliable, and easier to use;
Advanced hardware for improved and faster AI operations; and
AI software that advances hardware performance.
Strategy 2: Develop Effective Methods for Human-AI Collaboration
As AI is a supplement to human activity, the Report indicates that best practices for AI-human interaction must be designed to avoid
excessive complexity and to address the recognized effects of using automated systems, such as undertrust (not fully using the
automation) or overtrust (over-relying on the automation, leading to complacency). These aims can be achieved specifically by:
Seeking new algorithms for AI that enable intuitive interaction with users and seamless machine-human collaborations;
Developing techniques for AI that can improve the thinking and functioning of humans;
Developing techniques for how AI effectively presents information to users in real-time in formats that are easy to interpret; and
Developing more effective language processing systems to allow AI machines to interpret written or verbal commands regardless
of the clarity of the commands.
The strategies for Cross-Cutting Foundations of AI R&D include:
Strategy 3: Understand and Address the Ethical, Legal, and Societal Implications of AI
The Report points out that research needs to account for the ethical, legal, and social implications of AI, as well as developing
methods for AI that align with ethical, legal, and social principles. These aims can be achieved specifically by:
Improving fairness, transparency, and accountability in AI design to avoid bias;
Building ethical AI functions that reflect an appropriate value system, developed through examples that indicate preferred
behavior when presented with difficult moral issues or with conflicting values; and
Designing computer architectures that incorporate ethical reasoning.
Strategy 4: Ensure the Safety and Security of AI Systems
The Report states that further research is needed to create AI that is reliable, transparent, and secure. This aim can be achieved
specifically by:
Improving how systems that include AI will explain their reasoning and decisions to users;
Building trust with users by creating accurate, reliable systems with informative, user-friendly interfaces;
Improving methods for AI systems’ verification (establishing that a system meets formal specifications) and validation
(establishing that a system meets the users’ operational needs);
Increasing security against cyber-attacks on or by AI systems; and
Building long-term AI safety by maintaining alignment with human values.
Strategy 5: Develop Shared Public Datasets and Environments for AI Training and Testing
The Report points out that additional research is needed to develop high-quality datasets and environments for a wide variety of AI
applications, and to enable responsible access to good datasets and testing/training resources. According to the Report, these aims
can be achieved specifically by:
Developing and making available a wide variety of datasets to meet the needs of AI interests and applications;
Making training and testing resources responsive to commercial and public interests; and
Developing and distributing software libraries and toolkits.
Strategy 6: Measure and Evaluate AI Technologies through Standards and Benchmarks
The Report states that establishment and adoption of standards, benchmarks, and testing methods are essential for guiding and
promoting R&D of AI technologies. These aims can be achieved specifically by:
Developing requirements, specifications, guidelines, or characteristics that can be used consistently to ensure that AI
technologies meet critical objectives for functionality and interoperability, and that the technologies perform reliably and safely;
Establishing AI technology quantitative benchmarks to objectively measure AI accuracy, complexity, operator trust and
competency, risk, uncertainty, transparency, unintended bias, performance, and economic impact;
Increasing the availability of AI testbeds across all aspects of AI, including providing limited access to sensitive information for
improving AI systems designed to protect confidential data; and
Engaging the AI community (e.g., governments, industry, and academia) in developing standards and benchmarks.
Strategy 7: Better Understand the National AI R&D Workforce Needs
The Report’s list of strategies concludes by noting that AI experts are in short supply, with demand for these experts expected to
continue escalating. Data is needed to characterize the current state of the AI R&D workforce, including the needs of academia,
government, and industry.
***
NITRD clarifies that the priorities in this report supplement, rather than replace, pre-existing federal research agendas.
Furthermore, while the report does not explicitly address the appropriate scope or application of AI technologies, NITRD recognizes
the necessity of addressing these issues and identifies relevant reports to that effect.
The report concludes by offering two recommendations to the Federal government for strengthening and promoting the success of
this strategic plan:
Develop an AI R&D implementation framework to identify science and technology opportunities and support effective
coordination of AI R&D investments; and
Study the national landscape for creating and sustaining a healthy AI R&D workforce.
RELEVANT SCIENCE
There is currently no universally agreed-upon definition of AI. As quoted in Stanford University’s 100-year study of AI, Nils J. Nilsson
defines AI research as “activity devoted to making machines intelligent, and intelligence is that quality that enables an entity to
function appropriately and with foresight in its environment.”
Here, intelligence is understood as a measure of a machine’s ability to successfully achieve an intended goal. Like humans,
machines exhibit varying levels of intelligence subject to the machine’s design and training. However, there are different
perspectives on how to define and categorize AI. In 2009, a foundational textbook classified AI into four categories:
Ones that think like humans;
Ones that think rationally;
Ones that act like humans; and
Ones that act rationally.
Most of the progress seen in AI has been considered "narrow," having addressed specific problem domains like playing games,
driving cars, or recognizing faces in images. In recent years, AI applications have surpassed human abilities in some narrow tasks,
and rapid progress is expected to continue, opening up new opportunities in critical areas such as health, education, energy, and the
environment. This is in contrast to “general” AI, which would replicate intelligent behavior equal to or surpassing human abilities
across the full range of cognitive tasks. Experts involved with the NSTC Committee on Technology believe that it will take decades
before society advances to artificial "general" intelligence.
According to Stanford University’s 100-year study of AI, by 2010, advances in three key areas of technology intersected to increase
the promise of AI in the US economy:
Big data: large quantities of structured and unstructured data amassed from e-commerce, business, science, government, and
social media on a daily basis;
Increasingly powerful computers: greater storage and parallel processing of big data; and
Machine learning: using increased access to big data as raw material, increasingly powerful computers can be taught to
automatically improve their performance on tasks by observing relevant data via statistical modeling.
Key AI applications include the following:
Machine learning is the basis for many of the recent advances in AI. Machine learning is a method of data analysis that attempts
to find structure (or a pattern) within a data set without human intervention. Machine learning systems search through data to
look for patterns and adjust program actions accordingly, a process defined as training the system. To perform this process, an
algorithm is given a training set (or teaching set) of data, from which it builds a model that answers a question. For example, for a
driverless car, a programmer could provide a teaching set of images tagged either “pedestrian” or “not pedestrian.” The
programmer could then show the computer a series of new photos, which it could categorize as pedestrians or non-pedestrians.
Machine learning would then continue to independently add to the teaching set: every identified image, right or wrong, expands
the teaching set, and the program effectively gets “smarter” and better at completing its task over time.
Machine learning algorithms are often categorized as supervised or unsupervised. In supervised learning, the system is
presented with example inputs along with desired outputs, and the system tries to derive a general rule that maps inputs to
outputs. In unsupervised learning, no desired outputs are given, and the system is left to find patterns independently.
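The report contains no code, but the distinction above can be sketched in a few lines of Python (all data, names, and thresholds here are invented for illustration). The supervised part derives a simple classification rule from labeled examples; the unsupervised part is a bare-bones two-cluster routine that finds the same structure with no labels at all.

```python
import numpy as np

# Toy 1-D feature: "pedestrian" images happen to score high, "not pedestrian" low.
X_train = np.array([0.9, 0.8, 0.85, 0.1, 0.2, 0.15])
y_train = np.array([1, 1, 1, 0, 0, 0])  # labels: 1 = pedestrian, 0 = not

# Supervised learning: use the labeled examples to derive a general rule,
# here simply the midpoint between the two class means.
threshold = (X_train[y_train == 1].mean() + X_train[y_train == 0].mean()) / 2

def classify(x):
    """Apply the learned rule to a new, unlabeled input."""
    return int(x > threshold)

# Unsupervised learning: no labels given; find two clusters independently
# (a minimal k-means with k = 2).
def two_means(xs, iters=10):
    centers = np.array([xs.min(), xs.max()], dtype=float)
    for _ in range(iters):
        # Assign each point to its nearest center, then recompute the centers.
        assign = np.abs(xs[:, None] - centers[None, :]).argmin(axis=1)
        for k in range(2):
            if np.any(assign == k):
                centers[k] = xs[assign == k].mean()
    return centers

centers = two_means(X_train)
```

Both routines recover the same two groups from the data; the difference is only whether the desired outputs were supplied up front.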
Deep learning is a subfield of machine learning. Unlike traditional machine learning algorithms, which typically learn shallow
models, deep learning utilizes multiple units (or neurons) stacked in a hierarchy of increasing complexity and abstraction, inspired
by the structure of the human brain. Deep learning systems consist of multiple layers, and each layer consists of multiple units.
Each unit combines a set of input values to produce an output value, which in turn is passed to units downstream. Deep learning
enables the recognition of extremely complex, precise patterns in data.
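As a rough sketch of the layered structure described above (the weights are random and the layer sizes are arbitrary; this is an illustration, not an excerpt from the report), each layer below combines its inputs with a weighted sum plus a nonlinearity, and its outputs feed the units of the next layer downstream:

```python
import numpy as np

def layer(x, W, b):
    # Each unit computes a weighted combination of its inputs plus a bias,
    # then applies a nonlinearity (ReLU); the outputs feed units downstream.
    return np.maximum(0.0, W @ x + b)

rng = np.random.default_rng(0)
x = rng.normal(size=4)                          # raw input features

W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)   # layer 1: 8 units over 4 inputs
W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)   # layer 2: 3 units over 8 inputs

h = layer(x, W1, b1)   # lower layer: simpler patterns in the raw inputs
y = layer(h, W2, b2)   # higher layer: more abstract combinations of those patterns
```

Stacking more such layers is what makes the hierarchy "deep"; training consists of adjusting the weights so the final outputs match desired targets.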
Advances in AI will bring the possibility of autonomy in a variety of systems. Autonomy is the ability of a system to operate and
adapt to changing circumstances without human control. It also includes systems that can diagnose and repair faults in their own
operation, such as identifying and fixing security vulnerabilities.
Important areas of AI research:
AI researcher John McCarthy of Stanford University describes AI research and development as comprising both theory and
experimentation. AI theory includes contemplating the ways in which one defines the field of research itself as well as how to
integrate AI with human notions of rationality, morality, and ethics. AI experimentation involves attempting to mimic human and
animal physiology and psychology in machines as well as problem solving for actions outside the scope of biological organisms.
Experimental research in artificial intelligence includes several key areas that mimic human behaviors, including reasoning,
knowledge representation, planning, natural language processing, perception, and generalized intelligence.
Reasoning includes performing sophisticated mental tasks that people can do (e.g., play chess, solve math problems).
Knowledge representation is information about real-world objects the AI can use to solve various problems. Knowledge in this
context is usable information about a domain, and the representation is the form of the knowledge used by the AI.
Planning and navigation includes processes related to how a robot moves from one place to another. This includes identifying
safe and efficient paths, dealing with relevant objects (e.g., doors), and manipulating physical objects.
Natural language processing includes interpreting and delivering audible speech to and from users.
Perception research includes improving the capability of computer systems to use sensors to detect and perceive data in a
manner that replicates humans’ use of senses to acquire and synthesize information from the world around them.
Ultimately, success in the discrete AI research domains could be combined to achieve generalized intelligence, or a fully
autonomous “thinking” robot with advanced abilities such as emotional intelligence, creativity, intuition, and morality.
RELEVANT EXPERTS
Vincent Conitzer, Ph.D. is Kimberly J. Jenkins University Professor of New Technologies, Professor of Computer Science, Professor of
Economics, and Professor of Philosophy at Duke University.
“Artificial intelligence researchers have made rapid progress in recent years. The resulting capabilities allow us to
make the world a better place, but they have also led to a broad variety of concerns. How should autonomous
vehicles be designed and regulated? Will AI cause massive technological unemployment? Will weapons systems
become increasingly autonomous, and should autonomous weapons be banned? Is there perhaps even a chance that
AI will end up broadly superseding human capabilities, making us obsolete at best and extinct at worst?”
Relevant publications:
Conitzer, Vincent. 2016. "Artificial Intelligence: Where’s the Philosophical Scrutiny?" Prospect, May 4. Accessed February 25,
2017. http://www.prospectmagazine.co.uk/science-and-technology/artificial-inte....
Walter Sinnott-Armstrong, Ph.D. is Chauncey Stillman Professor of Practical Ethics in the Department of Philosophy and the Kenan
Institute for Ethics at Duke University, as well as core faculty in the Duke Institute for Brain Sciences, the Duke Center for Cognitive
Neuroscience, and the Duke Center for Interdisciplinary Decision Sciences.
“New developments in artificial intelligence are raising many profound ethical issues throughout society, not only in
autonomous cars and weapons but also in kidney exchanges and criminal justice. One central question is whether
morality can be built into an artificial intelligence system itself.”
Relevant publications:
Conitzer, Vincent, Walter Sinnott-Armstrong, Jana Saich Borg, Yuan Deng, and Max Krammer. 2017. "Moral Decision Making
Frameworks for Artificial Intelligence." Paper presented at the Thirty-First AAAI Conference on Artificial Intelligence, San Francisco,
California, February 4 – 9.
BACKGROUND
The creation of this report was initially requested in June 2016 by NSTC’s Subcommittee on Machine Learning and Artificial
Intelligence, which was founded a month earlier, in May 2016. This subcommittee is charged with following developments in AI within
the Federal Government, private industry, and abroad, and with assisting further federal development in this area. This new
subcommittee is the first to focus on coordinating inter-agency investment for the future of AI.
This report was produced following the White House's June 22, 2016 Public Request for Information regarding AI, as well as a
series of workshops held in 2016 to address the applications of AI. The workshops included:
AI, Law, and Policy (May 24, 2016);
AI for Social Good (June 7, 2016);
Future of AI: Emerging Topics and Societal Benefit at the Global Entrepreneurship Summit (June 23, 2016);
AI Technology, Safety, and Control (June 28, 2016); and
Social and Economic Impacts of AI (July 7, 2016).
ENDORSEMENTS & OPPOSITION
Endorsements:
Following the publication of this report, National Science Foundation (NSF) Director France Córdova issued the following statement
in a press release:
“The National Science Foundation funds a significant amount of fundamental research in artificial intelligence at U.S. academic
institutions.… NSF's investments in this growing area align with and support the National Artificial Intelligence Research and
Development Strategic Plan, and they will help to ensure that our nation's scientists and engineers remain at the forefront of
advances in AI."
Opposition:
At present, there has not been any publicly reported opposition to this report. However, author and computer scientist Jaron Lanier
has articulated arguments against the advancement of AI technology:
AI threatens to fundamentally alter the human way of life and should not be developed without adequate planning and foresight;
To say that algorithms can mimic consciousness is to devalue what it means to be human;
Cheap AI labor may replace human labor and deepen the economic inequality between business owners and human laborers;
and
A self-aware and self-evolving machine could challenge humans for critical resources.
STATUS
The Report was released on October 12, 2016 during the Obama administration and is currently hosted by the Obama White House
Archives.
OTHER RELATED GOVERNMENTAL ACTIONS
This report was issued concurrently with another report created by the NSTC’s Committee on Technology, Preparing for the Future of
Artificial Intelligence (SciPol brief available), which surveys the current state of AI, its existing and potential applications, and the
questions that progress in AI raises for society and public policy.
Two months later, the Executive Office of the President published a subsequent report, Artificial Intelligence, Automation, and the
Economy (SciPol brief available), which details the economic impacts of artificial intelligence.
As indicated in the Report, the applications of AI have far reaching implications for several federal initiatives and strategic plans
including:
Federal Big Data Research and Development Strategic Plan;
Federal Cybersecurity Research and Development Strategic Plan;
National Privacy Research Strategy;
National Nanotechnology Initiative Strategic Plan;
National Strategic Computing Initiative;
Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative; and
National Robotics Initiative.
PRIMARY AUTHOR
Scott "Esko" Brummel, MA Candidate
EDITOR(S)
Michael Clamann, PhD, CHFP
CITATION
Duke SciPol, “The National Artificial Intelligence Research and Development Strategic Plan” available at
http://scipol.duke.edu/content/national-artificial-intelligence-research-and-development-strategic-plan (03/10/2017).