Let me start by thanking our speakers today for the great discussions and for their support throughout the process of developing this framework – and what an extraordinary journey the past 18 months have been! We are extremely grateful to everyone who helped us along the way. Your contributions were essential to completing the AI RMF 1.0.
Our esteemed speakers reminded us that with the advent of AI and machine learning, today’s machines are engineered for complex decision-making that historically only people could handle. We no longer just demand equity and fairness from each other; we demand them from our machines.
We no longer focus only on whether technology works, but also on how it works. Where, how, and by whom is the technology used? Who is left out, and why? Who is adversely impacted, and why does that impact happen? How can we cultivate trust in these technologies and manage their risks?
Aware of the global nature of technologies and the evolving policy landscape across borders, we worked with the whole AI community, in the US and abroad, to provide scalable, research-based methods to manage AI risks and advance trustworthy approaches to AI that serve all people in responsible, equitable, and beneficial ways.
Today, we recognize an important milestone – through a collaborative, consensus-driven, open, and transparent approach, with tremendous contributions and engagement from industry, academia, civil society, and our colleagues across government and around the globe, we completed the AI RMF 1.0.
It is a voluntary framework for managing AI risk in a flexible, structured, and measurable way – flexible to allow for innovation, and measurable because if you cannot measure it, you cannot improve it.
The AI RMF adopts a rights-preserving approach to AI. It outlines a process to address traditional measures of accuracy, robustness, and reliability. Importantly, it also acknowledges that a system’s sociotechnical characteristics – characteristics such as privacy, interpretability, safety, and bias, which are inextricably tied to human and social behavior – are equally important when evaluating the system’s overall risk. These characteristics involve human judgment and cannot be reduced to a single threshold or metric.
This means thinking differently about measurement and standards.
This means introducing sociotechnical characteristics into standards and measures.
It means recognizing that AI technologies are not just about data, algorithms, and computation. They are also about how they are used and how they affect people in the real world.
That's what the AI RMF – and our AI work at NIST more broadly – is about: cultivating trust in the design, development, deployment and use of AI technologies and systems.
To make that vision a reality, we look forward to seeing how organizations implement the AI RMF.
We will also continue to work with the community to conduct research and develop technically sound standards, interoperable evaluations and benchmarks, and usable practice guides.
Making that vision a reality is going to take all of us working together.
Let’s forge ahead and focus on operationalizing the AI RMF.
Let’s build the test beds, benchmarks, and standards needed to advance trustworthy and responsible AI.
I look forward to staying connected with all of you taking part in this event, whether in person or virtually, and to continuing this important work.
Thank you.