A multidisciplinary team of computer scientists, cognitive scientists, mathematicians, and AI and machine learning specialists with diverse backgrounds and research specialties is exploring and defining the core tenets of explainable AI (XAI). The team aims to develop measurement methods and best practices that support the implementation of those tenets. Ultimately, the team plans to develop a metrologist’s guide to AI systems that addresses the complex entanglement of terminology and taxonomy across the myriad layers of the AI field. AI must be explainable to society to enable understanding of, trust in, and adoption of new AI technologies, the decisions they produce, and the guidance they provide.
Thank you for your interest in the first draft of Four Principles of Explainable Artificial Intelligence (NISTIR 8312-draft). The paper presents four principles that capture the fundamental properties of explainable Artificial Intelligence (AI) systems. These principles are heavily influenced by an AI system’s interaction with the human who receives the information. The type of explanation deemed appropriate depends on the requirements of the given application, the task, and the consumer of the explanation. Our four principles are intended to capture a broad set of motivations, applications, and perspectives.
To characterize the diverse methods, applications, and perspectives of explainable AI, we sought comments on our white paper. The comment period for this draft ran from August 17, 2020, through October 15, 2020, and is now closed.
Stay tuned for further announcements and related activity by checking this page or by subscribing to Artificial Intelligence updates through GovDelivery at https://service.govdelivery.com/accounts/USNIST/subscriber/new.
We appreciate all those who provided comments. Your feedback is essential in shaping this work.
Please direct questions to explainable-AI@nist.gov.