We recently sought feedback on the first draft of "A Proposal for Identifying and Managing Bias in Artificial Intelligence" (Special Publication 1270).
The proliferation of modeling and predictive approaches based on data-driven and machine learning techniques has helped to expose various social biases baked into real-world systems. These biases and other related inaccuracies in automated systems can lead to harmful outcomes that chip away at public trust in technology. The draft proposes an approach for identifying and managing AI bias that is tied to three stages of the AI lifecycle: (1) pre-design, (2) design and development, and (3) deployment (including post-deployment factors). This approach is intended to help AI designers and deployers relate specific lifecycle processes to the types of AI bias and to facilitate more effective management of that bias. The proposal is part of NIST's broader work in developing a risk management framework for Trustworthy and Responsible AI.
The public comment period for this draft document has closed. We thank the individuals and organizations that took the time to provide insights and feedback in this challenging area. We will review and synthesize this input as we develop the next version of SP 1270. Starting in 2022, through a series of public workshops, we plan to collaboratively develop detailed technical guidance for the key areas identified in the document and in the submitted comments. Please watch for announcements about these activities.