A crucial principle, for both humans and machines, is to avoid bias and therefore prevent discrimination. As NIST works toward AI systems that can be trusted, it is critical to train these systems on unbiased data and to build algorithms whose decisions can be explained. The purpose of this project is to understand, examine, and mitigate bias in AI systems.
There are two upcoming opportunities to participate in the project:
To update and expand its expertise in AI research, and to critically evaluate recent academic articles and other prominent reports on bias in AI, ITL has started a new virtual journal reading club on the topic. The club is particularly interested in how bias in AI is defined and how it affects the AI development lifecycle. We are looking for volunteers to read, review, summarize, and reflect on 2-4 articles in a group discussion. The group is open to anyone with an interest in bias in AI or in AI research generally. The first virtual reading session is planned for June 10, 2020.
To become a member of the virtual journal reading club or for more information, please RSVP by sending an email to firstname.lastname@example.org.