Artificial intelligence


NIST aims to cultivate trust in the design, development, use and governance of Artificial Intelligence (AI) technologies and systems in ways that enhance safety and security and improve quality of life. NIST focuses on improving measurement science, technology, standards and related tools — including evaluation and data.

With AI and Machine Learning (ML) changing how society addresses challenges and opportunities, the trustworthiness of AI technologies is critical. Trustworthy AI systems are those demonstrated to be valid and reliable; safe, secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair with harmful bias managed. The agency’s AI goals and activities are driven by its statutory mandates, Presidential Executive Orders and policies, and the needs expressed by U.S. industry, the global research community, other federal agencies, and civil society.

On October 30, 2023, President Biden signed an Executive Order (EO) to build U.S. capacity to evaluate and mitigate the risks of AI systems to ensure safety, security and trust, while promoting an innovative, competitive AI ecosystem that supports workers and protects consumers. Learn more about NIST's responsibilities in the EO and the creation of the U.S. Artificial Intelligence Safety Institute, including the new consortium that is being established.

NIST’s AI goals include:

  1. Conduct fundamental research to advance trustworthy AI technologies.
  2. Apply AI research and innovation across the NIST Laboratory Programs.
  3. Establish benchmarks, data and metrics to evaluate AI technologies.
  4. Lead and participate in development of technical AI standards.
  5. Contribute technical expertise to discussions and development of AI policies.

NIST’s AI efforts fall into several categories:

NIST’s AI portfolio includes fundamental research to advance the development of AI technologies — including software, hardware, architectures and the ways humans interact with AI technology and AI-generated information.

AI approaches are increasingly an essential component in new research. NIST scientists and engineers use various machine learning and AI tools to gain a deeper understanding of and insight into their research. At the same time, NIST laboratory experiences with AI are leading to a better understanding of AI’s capabilities and limitations.

With a long history of working with the community to advance tools, standards and test beds, NIST increasingly is focusing on the sociotechnical evaluation of AI.  

NIST leads and participates in the development of technical standards, including international standards, that promote innovation and public trust in systems that use AI. A broad spectrum of standards for AI data, performance and governance is a priority for the use and creation of trustworthy and responsible AI.

A fact sheet describes NIST's AI programs.

Stay in Touch

Sign up for our newsletter to stay up to date with the latest research, trends, and news for Artificial Intelligence.

The Research

Projects & Programs


JARVIS-ML is a repository of machine learning (ML) model parameters, descriptors, and ML-related input and target data. JARVIS-ML is a part of the NIST-JARVIS infrastructure.

Additional Resources Links


Minimizing Harms and Maximizing the Potential of Generative AI

As generative AI tools like ChatGPT become more commonly used, we must think carefully about the impact on people and society.

NIST Publishes Automated Vehicles Workshop Report

NIST Industrial Wireless Team Leads Special Session on Performance Assurance of Industrial Wireless Systems

NIST Makes Significant Contributions to International Workshop on Advancing Healthcare Through Innovative Technologies

Bias in AI
NIST contributes to the research, standards, and data required to realize the full promise of artificial intelligence (AI) as an enabler of American innovation across industry and economic sectors. Working with the AI community, NIST seeks to identify the technical requirements needed to cultivate trust that AI systems are accurate and reliable, safe and secure, explainable, and free from bias. A key but still insufficiently defined building block of trustworthiness is bias in AI-based products and systems. That bias can be purposeful or inadvertent. By hosting discussions and conducting research, NIST is helping to move us closer to agreement on understanding and measuring bias in AI systems.
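The community has not yet agreed on a single way to measure bias, and NIST does not prescribe one. Purely as an illustration of what "measuring bias" can mean in practice, the sketch below computes demographic parity difference, one commonly used statistical fairness measure: the gap between groups in a classifier's positive-prediction rates. The function name and data are invented for this example and are not drawn from any NIST publication.

```python
# Illustrative sketch only: demographic parity difference, one common
# statistical measure of bias in a binary classifier's outputs.
# (Hypothetical example; not a NIST-endorsed metric or implementation.)

def demographic_parity_difference(predictions, groups, positive=1):
    """Largest gap in positive-prediction rates between any two groups."""
    rates = {}
    for g in set(groups):
        group_preds = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(1 for p in group_preds if p == positive) / len(group_preds)
    return max(rates.values()) - min(rates.values())

# Example: group "A" receives positive outcomes at a 3/4 rate,
# group "B" at a 1/4 rate, so the difference is 0.5.
preds = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # 0.5
```

A value of 0 would indicate equal positive-prediction rates across groups; larger values indicate larger disparities. Metrics like this capture only one narrow, statistical facet of bias, which is part of why NIST emphasizes broader sociotechnical evaluation.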
Psychology of Interpretable and Explainable AI
The purpose of this pre-recorded webinar is to promote and more broadly share the release of NIST IR-8367, "Psychological Foundations of Explainability and Interpretability in Artificial Intelligence." It is an interview between the paper's author, Dr. David Broniatowski, and Natasha Bansgopaul of the NIST ITL team, who asks key questions to highlight important insights from the paper, which was published in April 2021.


AI Metrology Colloquia Series

Thu, Jul 18 2024, 12:00 - 1:00pm EDT
As a follow-on to the National Academies of Science, Engineering, and Medicine workshop on Assessing and Improving AI