NIST contributes to the research, standards, and data required to realize the promise of AI as an enabler of American innovation across the private and public sectors. Speeding the innovation and adoption of trustworthy AI systems requires a greater ability to understand, communicate about, and trust in those systems. To achieve this, stakeholders need to develop the building blocks of trustworthy AI systems, along with the associated measurements, methods, standards, and tools to implement those building blocks when developing, using, and overseeing AI systems, and then ensure that they are understood and used. In collaboration with the AI community of developers and users, NIST is focusing on developing the needed measurements, standards, and related solutions for trustworthy AI.
NIST has been working with the AI community to identify the technical requirements needed to cultivate trust that AI systems are accurate and reliable, safe and secure, explainable, and free from bias. To foster collaboration and develop a shared understanding of what constitutes trustworthy AI, NIST is organizing a series of workshops bringing together government, industry, academia, and other stakeholders. Consistent with our mission, the workshops focus on advancing the development of AI standards and related tools.
Confirmed workshops include: