
Four Principles of Explainable Artificial Intelligence (Draft)

Published

Author(s)

P. J. Phillips, Amanda C. Hahn, Peter C. Fontana, David A. Broniatowski, Mark A. Przybocki

Abstract

We introduce four principles for explainable artificial intelligence (AI) that comprise the fundamental properties of explainable AI systems. They were developed to encompass the multidisciplinary nature of explainable AI, including the fields of computer science, engineering, and psychology. Because one-size-fits-all explanations do not exist, different users will require different types of explanations. We present five categories of explanation and summarize theories of explainable AI. We give an overview of algorithms in the field that cover the major classes of explainable algorithms. As a baseline comparison, we assess how well explanations provided by people follow our four principles. This assessment provides insights into the challenges of designing explainable AI systems.
Citation
NIST Interagency/Internal Report (NISTIR) 8312-draft
Report Number
8312-draft

Keywords

Artificial Intelligence (AI), explainable AI, trustworthy AI

Citation

Phillips, P., Hahn, A., Fontana, P., Broniatowski, D. and Przybocki, M. (2020), Four Principles of Explainable Artificial Intelligence (Draft), NIST Interagency/Internal Report (NISTIR), National Institute of Standards and Technology, Gaithersburg, MD, [online], https://doi.org/10.6028/NIST.IR.8312-draft (Accessed April 18, 2024)
Created August 17, 2020, Updated March 1, 2021