A crucial principle, for both humans and machines, is to avoid bias and thereby prevent discrimination. It is critical that AI systems be trained on unbiased data and built with algorithms whose decisions can be explained. The purpose of this project is to understand, examine, and mitigate bias in AI systems.

Past Activities
Bias in AI Workshop: NIST hosted a virtual workshop on August 18, 2020, to develop a shared understanding of bias in AI — what it is and how to measure it.
NIST A.I. Reference Library Bibliography: Bias in Artificial Intelligence
NIST contributes to the research, standards, and data required to realize the full promise of artificial intelligence (AI) as an enabler of American innovation across industry and economic sectors. Working with the AI community, NIST seeks to identify the technical requirements needed to cultivate trust that AI systems are accurate and reliable, safe and secure, explainable, and free from bias. A key, but still insufficiently defined, element of trustworthiness is bias in AI-based products and systems; that bias can be purposeful or inadvertent. By hosting discussions and conducting research, NIST is helping to move the community closer to agreement on how to understand and measure bias in AI systems.
Contact: ai-bias [at] list.nist.gov