NIST works with the AI community to identify the building blocks needed to cultivate trust and to ensure that AI systems are accurate, reliable, safe, secure, resilient, robust, explainable, and interpretable, and that they mitigate bias while also taking privacy and fairness into account. To foster collaboration and develop a shared understanding of what constitutes trustworthy AI, NIST has been organizing a series of workshops that bring together government, industry, academic, and other stakeholders from the U.S. and around the world. The workshops focus on advancing the development of AI standards, guidelines, and related tools.
Kicking Off NIST AI Risk Management Framework: Workshop #1, the first workshop in the AI Risk Management Framework series, was held October 19-21, 2021.
A workshop on AI Measurement and Evaluation was held June 15-17, 2021.
A National Academies of Sciences, Engineering, and Medicine (NASEM) workshop on Assessing and Improving AI Trustworthiness: Current Contexts, Potential Paths was held March 3-4, 2021.
A workshop on Explainable AI was held January 26-28, 2021. A workshop report is due out shortly.