We are seeking your feedback on the recently released first draft of "A Proposal for Identifying and Managing Bias in Artificial Intelligence" (Special Publication 1270).
The proliferation of modeling and predictive approaches based on data-driven and machine learning techniques has helped to expose various social biases baked into real-world systems. These biases and other related inaccuracies in automated systems can lead to harmful outcomes that chip away at public trust in technology. The paper proposes an approach for identifying and managing AI bias that is tied to three stages of the AI lifecycle: 1) pre-design, 2) design and development, and 3) deployment (and post-deployment factors). This approach is intended to enable AI designers and deployers to better relate specific lifecycle processes to the types of AI bias, and to manage that bias more effectively. The proposal is part of NIST's broader work in developing a risk management framework for Trustworthy and Responsible AI.
Organizations are encouraged to review the draft and provide feedback before the public comment period closes. The comment period has been extended to September 10, 2021. The NIST authors will review and adjudicate all comments received in anticipation of follow-on documents and public events in this topic area.
All interested parties are encouraged to submit comments about this draft report. You may submit comments by sending them to this email address (ai-bias [at] list.nist.gov). Using this template is preferred but not required; feel free to provide your feedback in whatever document format you prefer. Anonymous comments are accepted, although including your name and contact information will enable the authors to contact you for clarification, if necessary. Please note that all comments received are subject to release under the Freedom of Information Act. Please do not submit confidential business information or otherwise sensitive or protected information.