AI Foundational Research - Explainability

A multidisciplinary team of computer scientists, cognitive scientists, mathematicians, and AI and machine learning specialists, with diverse backgrounds and research specialties, is exploring and defining the core tenets of explainable AI (XAI). The team aims to develop measurement methods and best practices that support the implementation of those tenets. Ultimately, the team plans to develop a metrologist’s guide to AI systems that addresses the complex entanglement of terminology and taxonomy across the many layers of the AI field. AI must be explainable to society to enable understanding of, trust in, and adoption of new AI technologies, the decisions they produce, and the guidance they provide.

Updates and Contact Information

Stay tuned for further announcements and related activity by checking this page or by subscribing to Artificial Intelligence updates through GovDelivery at https://service.govdelivery.com/accounts/USNIST/subscriber/new.

Please direct questions to explainable-AI@nist.gov.  

Workshop: Explainable AI

NIST held a virtual workshop on Explainable Artificial Intelligence (AI) on January 26-28, 2021. Explainable AI is a key element of trustworthy AI, and there is significant interest in explainable AI from stakeholders and communities across this multidisciplinary field. As part of NIST’s efforts to provide foundational tools, guidance, and best practices for AI-related research, we released a draft whitepaper, Four Principles of Explainable Artificial Intelligence, for public comment. Informed by the comments received, the workshop delved further into developing an understanding of explainable AI.

Public comment for Four Principles of Explainable Artificial Intelligence

Thank you for your interest in the first draft of Four Principles of Explainable Artificial Intelligence (NISTIR 8312-draft).

The paper presents four principles that capture the fundamental properties of explainable Artificial Intelligence (AI) systems. These principles are heavily influenced by an AI system’s interaction with the human receiving the information. The requirements of the given application, the task, and the consumer of the explanation will influence the type of explanation deemed appropriate. Our four principles are intended to capture a broad set of motivations, applications, and perspectives.  

We appreciate all those who provided comments. Your feedback is important in shaping this work.

The comment period for this document is now closed. 
