NIST works with the AI community to identify the building blocks needed to cultivate trust and to ensure that AI systems are accurate, reliable, safe, secure and resilient, robust, and explainable and interpretable, and that they mitigate bias while taking privacy and fairness into account. To foster collaboration and develop a shared understanding of what constitutes trustworthy AI, NIST has been organizing a series of workshops that bring together government, industry, academia, and other stakeholders from the US and around the world. The workshops focus on advancing the development of AI standards, guidelines, and related tools.
Launching Publication of the AI Risk Management Framework (AI RMF) 1.0 – January 26, 2023
Building the NIST AI Risk Management Framework: Workshop #3 – October 18-19, 2022
Artificial Intelligence and the Economy Conference – April 27, 2022
Kicking Off NIST AI Risk Management Framework: Workshop #1 – October 19-21, 2021
AI Measurement and Evaluation Workshop – June 15-17, 2021
National Academies of Sciences, Engineering, and Medicine (NASEM) Workshop on Assessing and Improving AI Trustworthiness: Current Contexts, Potential Paths – March 3-4, 2021
Explainable AI Workshop – January 26-28, 2021. A workshop report is due out shortly.
Bias in AI Workshop – August 18, 2020. A draft report summarizing the discussions has been published, and a recording of the event is available on the event page.
Exploring AI Trustworthiness (kickoff AI workshop) – August 6, 2020. A report on the discussions is due out shortly, and a recording of the workshop is available on the event page.