NIST works with the AI community to identify the building blocks needed to cultivate trust and to ensure that AI systems are accurate, reliable, safe, secure, resilient, robust, explainable, and interpretable – and that they mitigate bias while also accounting for privacy and fairness. To foster collaboration and develop a shared understanding of what constitutes trustworthy AI, NIST has been organizing a series of workshops that bring together government, industry, academia, and other stakeholders from the US and around the world. The workshops focus on advancing the development of AI standards, guidelines, and related tools.
NIST relies on workshops to gather information, feedback, and perspectives from public and private stakeholders. Multiple related AI workshops have been held, and additional ones are being scheduled.