AI Research - Foundational

As a non-regulatory research organization, NIST cultivates trust in AI by applying rigorous methods and decades of measurement experience to develop tools, technical guidance, and best-practice guides that accurately measure and characterize the capabilities and limitations of AI technologies and of the underlying data that catalyzes them. NIST's AI research efforts fall into two categories: Foundational and Applied Research.

Foundational Research on AI Systems

NIST foundational research in AI aims to build trust in AI systems by improving their accuracy, reliability, security, robustness, and explainability, and by characterizing the theoretical capabilities and limitations of AI. Major NIST AI research initiatives planned in this area include:

  • Safety and Security
    The National Cybersecurity Center of Excellence (NCCoE) is working on a project on securing AI.
    • In October 2019, the NCCoE published draft NISTIR 8269, A Taxonomy and Terminology of Adversarial Machine Learning, as a step toward securing applications of AI against Adversarial Machine Learning (AML). The draft features a taxonomy of AML concepts and terminology; by establishing a common language for the rapidly developing AML landscape, it can inform future standards and best practices for assessing and managing ML security. The public comment period is now closed. (A minimal sketch of an adversarial perturbation appears after this list.)
  • Explainability
    A multidisciplinary team of computer scientists, cognitive scientists, mathematicians, and AI and machine learning specialists, with diverse backgrounds and research specialties, is exploring and defining the core tenets of explainable AI. The team aims to develop measurement methods and best practices that support the implementation of those tenets, and ultimately a metrologist's guide to AI systems that addresses the entangled terminology and taxonomy running through the many layers of the AI field. AI must be explainable if society is to understand, trust, and adopt new AI technologies and the decisions or guidance they produce. (One simple explanation technique is sketched after this list.)
  • Free from Bias
    A crucial principle, for both humans and machines, is to avoid bias and thereby prevent discrimination. As NIST works toward AI systems that can be trusted, it is critical that these systems be developed and trained on unbiased data and built on algorithms that can be explained. The purpose of this project is to understand, examine, and mitigate bias in AI systems. (A minimal bias metric is sketched at the end of this list.)
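
To make the adversarial machine learning threat concrete, the following minimal sketch (a hypothetical illustration in plain NumPy, not an example drawn from NISTIR 8269) applies the well-known fast-gradient-sign perturbation to a toy linear scorer. A small, deliberately chosen change to the input measurably shifts the model's output:

    import numpy as np

    def fgsm_perturb(x, grad, epsilon):
        # Fast Gradient Sign Method: nudge every input feature by
        # epsilon in the direction that increases the model's score.
        return x + epsilon * np.sign(grad)

    # Toy linear scorer so the gradient is transparent: for score = w . x,
    # the gradient of the score with respect to the input x is simply w.
    rng = np.random.default_rng(0)
    w = rng.normal(size=4)              # hypothetical trained weights
    x = rng.normal(size=4)              # a clean input
    x_adv = fgsm_perturb(x, grad=w, epsilon=0.25)

    print("clean score:    ", x @ w)
    print("perturbed score:", x_adv @ w)  # raised by epsilon * sum(|w|)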
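Explainability techniques can likewise be illustrated with a small example. The sketch below (again a hypothetical illustration, not a NIST-endorsed method) computes permutation importance, a model-agnostic explanation that scores a feature by how much accuracy is lost when that feature's values are shuffled:

    import numpy as np

    def permutation_importance(model, X, y, feature, n_repeats=10, seed=0):
        # Accuracy drop when one feature's column is shuffled; a large
        # drop means the model genuinely relies on that feature.
        rng = np.random.default_rng(seed)
        baseline = (model(X) == y).mean()
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, feature])  # shuffles the column in place
            drops.append(baseline - (model(X_perm) == y).mean())
        return float(np.mean(drops))

    # Toy model: predicts from feature 0 only; feature 1 is noise.
    model = lambda X: (X[:, 0] > 0).astype(int)
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] > 0).astype(int)
    print(permutation_importance(model, X, y, feature=0))  # large drop
    print(permutation_importance(model, X, y, feature=1))  # near zero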
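Finally, bias must be measured before it can be mitigated. One narrow, hypothetical illustration is the demographic-parity gap, the difference in positive-prediction rates between two groups:

    import numpy as np

    def demographic_parity_gap(y_pred, group):
        # Absolute difference in positive-prediction rates between two
        # groups; 0 means outcomes are statistically group-blind.
        return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

    # Hypothetical predictions for eight individuals in two groups.
    y_pred = np.array([1, 1, 0, 1, 0, 0, 0, 1])
    group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
    print(demographic_parity_gap(y_pred, group))  # 0.75 - 0.25 = 0.5

A gap of zero satisfies only this one criterion; it says nothing about other notions of fairness, which is one reason measuring and mitigating bias remains an open research problem.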

See also: AI Research - Applied

Created November 9, 2018, Updated March 31, 2020