
Proposal for Identifying and Managing Bias in Artificial Intelligence (SP 1270)


In June 2021, NIST requested comments on a draft report, "A Proposal for Identifying and Managing Bias in Artificial Intelligence" (Special Publication 1270).

"The proliferation of modeling and predictive approaches based on data-driven and machine learning techniques has helped to expose various social biases baked into real-world systems. These biases and other related inaccuracies in automated systems can lead to harmful outcomes that chip away at public trust in technology. The paper proposes an approach for identifying and managing AI bias that is tied to three stages of the AI lifecycle: 1) pre-design, 2) design and development, and 3) deployment (and post-deployment factors). This approach is intended to enable AI designers and deployers to better relate specific lifecycle processes to the types of AI bias, and to facilitate more effective management of it. This proposal is part of NIST's broader work in developing a risk management framework for Trustworthy and Responsible AI."

NIST thanks all of the individuals and organizations who took the time to provide insights and feedback in this challenging area. Those comments helped shape the final version of the publication, Towards a Standard for Identifying and Managing Bias in Artificial Intelligence (NIST Special Publication 1270), which was issued in March 2022 and has been the subject of multiple NIST workshops. This feedback is also contributing to the development of the NIST AI Risk Management Framework.


Created June 22, 2021, Updated April 5, 2022