What is Artificial Intelligence (AI)?
Artificial Intelligence (AI) refers to computer systems that can think and act like humans, or that can think and act rationally.* AI is rapidly transforming our world with innovations like autonomous vehicles driving our city streets, personal digital assistants in our homes and pockets, and direct brain-computer interfaces that can help a paralyzed person feel again when using a brain-controlled robotic arm.
In recent years, the field of AI has experienced a remarkable surge in capabilities. Factors contributing to this include:
- Improved machine learning (ML) techniques,
- Availability of massive amounts of training data,
- Unprecedented computing power, and
- Mobile connectivity.
AI-enabled systems are beginning to revolutionize fields such as commerce, healthcare, transportation, and cybersecurity. AI has the potential to impact nearly all aspects of our society, including our economy, yet its development and use come with serious technical and ethical challenges and risks. AI must be developed in a trustworthy manner to ensure reliability and safety.
Cultivating Trust in AI Technologies
NIST cultivates trust in technology by developing and deploying standards, tests, and metrics that make technology more secure, usable, interoperable, and reliable, and by strengthening measurement science. This work is critically relevant to building public trust in rapidly evolving AI technologies.
In contrast with deterministic rule-based systems, where reliability and safety may be built in and proven by design, AI systems typically make decisions based on data-driven models created by machine learning. The inherent uncertainties of these models need to be characterized and assessed through standardized approaches to ensure the technology is safe and reliable. Evaluation protocols must be developed, and new metrics are needed to provide quantitative support for a broad spectrum of standards, including data, performance, interoperability, usability, security, and privacy.
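One standard way to characterize the uncertainty of a data-driven model's measured performance is to resample its test results. The sketch below is a minimal, hypothetical illustration (not a NIST protocol): it bootstraps a confidence interval for a classifier's accuracy from per-example correctness indicators.

```python
import random

def bootstrap_accuracy_ci(correct, n_resamples=10_000, alpha=0.05, seed=0):
    """Estimate a (1 - alpha) confidence interval for accuracy by
    resampling per-example correctness indicators with replacement."""
    rng = random.Random(seed)
    n = len(correct)
    resampled = []
    for _ in range(n_resamples):
        sample = [correct[rng.randrange(n)] for _ in range(n)]
        resampled.append(sum(sample) / n)
    resampled.sort()
    lo = resampled[int((alpha / 2) * n_resamples)]
    hi = resampled[int((1 - alpha / 2) * n_resamples) - 1]
    return sum(correct) / n, (lo, hi)

# Hypothetical test outcomes: 1 = correct prediction, 0 = incorrect.
outcomes = [1] * 88 + [0] * 12
point, (lo, hi) = bootstrap_accuracy_ci(outcomes)
```

Reporting the interval alongside the point estimate makes clear how much the measured accuracy could shift on a different test sample, which is exactly the kind of quantitative support that performance standards require.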
NIST is advancing foundational research in measuring and assessing AI technologies, ensuring that systems are reliable, unbiased, explainable, and scalable.
NIST research in AI is addressing two fundamental questions:
- How should AI be tested?
- How does one determine when the results of testing are good enough?
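The second question can be made concrete with a simple decision rule. As an illustrative sketch (an assumption for this example, not a method the text prescribes), one can require that a conservative lower confidence bound on the measured success rate, rather than the raw point estimate, clear a stated threshold:

```python
import math

def wilson_lower_bound(successes, n, z=1.96):
    """Lower bound of the Wilson score interval for a binomial
    proportion (z = 1.96 gives roughly 95% confidence)."""
    if n == 0:
        return 0.0
    p = successes / n
    denom = 1 + z * z / n
    center = p + z * z / (2 * n)
    margin = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return (center - margin) / denom

def good_enough(successes, n, threshold):
    """Accept only if the conservative lower bound clears the threshold."""
    return wilson_lower_bound(successes, n) >= threshold
```

Under this rule, 920 correct results out of 1,000 trials would pass a 0.90 requirement, while 46 out of 50 would not, even though both have the same 0.92 point estimate: with fewer trials, the evidence is simply not strong enough to declare the results good enough.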
NIST's approach relies on two key activities:
- Develop AI data standards and best practices, providing controlled access to data repositories so AI developers can train systems to tackle complex, previously unsolved problems in an expanding number of domains.
- Collaborate with stakeholders to develop AI evaluation methodologies and standard testing protocols that define and execute community-based challenge problems. The challenge problems will focus on carefully selected tasks, foster competition among machine learning algorithm developers, and assess the performance and interoperability of AI-driven solutions.
The outcome of this approach will be trustworthy AI technologies tailored to high-impact domain tasks that are ready for deployment.
* Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach (3rd Edition) (Essex, England: Pearson, 2009).