Exploring AI Trustworthiness: Workshop Series Kickoff Webinar

Good afternoon — or, depending on where you are connecting from, good morning or good evening!

Artificial intelligence trustworthiness is a topic very much on our minds and in the news these days. And I’m so pleased that you have all gathered here, some of the top minds considering and working on these matters in the United States and around the world.

And we all know artificial intelligence is here. Today’s “AI summer” is bright and warm, thanks to powerful modern AI techniques. But those techniques can also be brittle, hard to understand and prone to bias. It’s now time to move forward with concrete plans to address the issue of trustworthiness in AI systems, and that's why we're gathered today and throughout this series.

Moreover, we need to do so quickly. Establishing the appropriate criteria for trust in AI is critical to the next phase of technical advances in many fields ranging from transportation to energy to manufacturing.  

We all can name the many challenges that stand in the way of AI progress. And, as in many areas of science and technology development, that’s actually the relatively easy part. What we want to do here today, with the launch of this initiative and in the workshops that will follow, is to tackle the difficult part. 

We want to find consensus on concrete ways to demonstrate that an AI system works accurately, safely and reliably as intended. In fact, I hope we’ll remember August 2020 as critical to our quest for trustworthy AI — the starting point for assigning generally agreed upon attributes of trustworthiness to AI components and systems.

As one who remembers the original, this may actually be our AI Woodstock moment — but experienced virtually!

And just like those legions of young free spirits at Woodstock back in the '60s, in August 1969 to be exact, our goals are mostly aspirational at this point. We all know where we’d like AI trustworthiness to end up, and we have many principles to get there, but we do not yet have a clear roadmap to success.

As our name clearly conveys, the National Institute of Standards and Technology is all about standards and technology. And when it comes to AI, we, and the much larger community of AI innovators, providers and users, need both standards and technology. 

Standards level the playing field so that innovative technologies can compete fairly with established ones on a trusted basis. And we know from our work on smart grids, IoT, robotics and many other technology areas that it’s critical for an emerging field to be able to agree on common terms. We need a common language, a common taxonomy, a set of terms that describe AI trustworthiness and, ultimately, how to test for it and how to measure it.

Principles for Trustworthy AI

To start, we need to work from a basic, widely agreed upon set of principles. Fortunately, there’s been great progress in developing such principles in the United States and with our colleagues from around the world.

Several of our speakers on the panel which follows will tell you about this in some detail. 

Dr. Lynne Parker, Deputy Chief Technology Officer at the White House, will share the latest thinking about principles for AI trustworthiness, how to go from principles to practice, and the importance of understanding technical requirements in policymaking.

Of course, even when we do arrive at agreement on those principles, we will have solved only part of the puzzle. Our definition of trustworthiness can’t be, “We’ll know it when we see it.” To truly advance the field, we must be very clear and specific in describing what trustworthy AI is.

To paraphrase an icon of science: Lord Kelvin is frequently quoted, not quite correctly, as saying, “To measure is to know.” In other words, if you can’t measure something, you don’t really understand it and, consequently, you can’t improve it.

For our topic today, if we can’t describe the building blocks, the prerequisites for ensuring AI trustworthiness, then those important principles become practically useless.

It’s long been a principle of organizational leadership that what you measure is what you focus on to deliver meaningful results. And so today we are on the path to defining these building blocks in terms that are ultimately measurable.

Building Blocks for Trustworthy AI

So how do we do that? In short, by getting help from all of you! 

NIST’s effort in helping to identify, shape and develop those foundational building blocks will be informed by a series of workshops that will follow this kickoff session. 

NIST has a long tradition of cultivating trust in technology by involving the private and public sectors right from the start. To get things rolling, I’d like to offer an initial way to approach this tough task of identifying some of the essential building blocks.

First, accuracy. AI systems must be accurate, effective and fit for the intended purpose.

Second, safety. AI systems must avoid unintended and harmful behavior that may result from poor design or use.

Next, reliability. AI systems must reliably do what they are intended, designed and expected to do.

Resilience is the next building block. AI systems must be resilient to vulnerabilities, adversarial attacks and other malicious manipulations.

That’s followed by perhaps the most talked-about characteristic of trustworthy AI: avoiding bias. AI systems must be designed and used in ways that ensure errors do not disproportionately affect certain groups of people.

Explainability is another building block. AI systems and their outcomes must be sufficiently interpretable and understandable by mere mortals, the people who use them. 

And finally, privacy. AI systems must protect individuals’ privacy and data.

Measurements, Standards, Evaluations … and Related Tools

Moreover, for each and every one of these building blocks that we agree upon, we need to have in place associated measurements, standards, evaluations and relevant tools. 

AI systems must be regularly monitored to ensure that they perform as they should. And the whole discipline needs regular benchmarks and evaluations to examine its limitations and its capabilities. These are absolutely essential elements — and must be in hand when innovators and organizations develop, use and test AI systems.

NIST is working hard on those measurements, standards, evaluations and related tools. We are doing that as a collaborative enterprise, working with companies, universities, other government agencies and nonprofit organizations. 

We’re also reaching out to the broader public, inviting all stakeholders to join us under the tent for AI's future, so to speak. Collaboration is going to be absolutely essential to our shared success in advancing trustworthy AI in thoughtful, inclusive and practical ways in the United States and around the world.

At NIST, we are doing that — in our laboratories, in this special series of workshops, in standards development settings, and in formal and informal dialogues here in the U.S., and across the globe.

I'd like to share with you a few examples of recent NIST AI trustworthiness efforts:

First, there are several longtime research programs at NIST and a number of other federal agencies. Topics include search and summarization, language understanding, language translation, biometric pattern recognition, video analytics, new materials discovery and robotics. In all these fields, we are both directly improving AI technologies and generating data for AI benchmarking and for the standards-setting process.

Highly visible is NIST's work on facial recognition, including the impact of mask wearing on the effectiveness of facial recognition algorithms and systems.

In the field of data science, NIST has created a series of shared datasets and environments spanning robotics, materials discovery and optimization, wireless spectrum analysis and semantic annotation, to name a few. As you know, the MNIST datasets have been used extensively in AI research over their 20-plus-year existence. MNIST is now actually part of the tool kit supporting the development of quantum computing and quantum hybrid processors.

The NIST National Cybersecurity Center of Excellence (NCCoE) recently published "A Taxonomy and Terminology of Adversarial Machine Learning." It's out for public comment as a step toward securing applications of artificial intelligence. 

NIST and its NCCoE are now working to launch a testbed to evaluate AI vulnerabilities.

To develop a shared and measurable understanding of bias in AI that can guide and speed the innovation and adoption of trustworthy AI systems, NIST is hosting a workshop on August 18 to tackle this important and challenging issue.

NIST also supports the development and adoption of international technical standards for AI. 

In August 2019, NIST released a plan for prioritizing U.S. federal agency engagement in the development of standards for artificial intelligence, per the president's February 2019 Executive Order on Maintaining American Leadership in Artificial Intelligence (EO 13859).

NIST’s work is truly being carried out in the spirit of inclusiveness and with a sense of urgency. 

I do, however, want to offer a word of caution. While we can and will make some very important advances on these measurements, standards, evaluations and tools, we need to build on a solid foundation of well-defined building blocks that are accepted by the AI community. 

For example, we know that developing standards and deploying them too soon can actually hinder AI innovation. Developing elaborate and complex evaluation and conformity assessment programs too soon, and implementing them too fast, may result in wasted resources and ill-placed user confidence. We need to keep this in mind, and we also need to beware of seemingly simple solutions offered in the name of expediency. We have to take the long view.

I look forward to hearing from today’s panel about principles and, especially, to getting their views on how to go from principles to practice. We also want to hear from everyone joining us online, both in real time and through the follow-up documents and workshops to come. We expect to post documents for public comment and look forward to your input to ensure they meet the community's needs.

Working together to develop this critical foundation, these building blocks for trustworthy AI, we will ensure a bright future not only for artificial intelligence and its technical and economic uses, but also for the countless innovations that will depend on it.

Again, welcome to virtual Woodstock. Thank you for being a part of it.
