
Taking Measure

Just a Standard Blog

Minimizing Harms and Maximizing the Potential of Generative AI

In front of a laptop computer, a hand holds a cell phone that has a conversation with generative AI on the phone screen.
ChatGPT and other similar generative AI products draw on huge amounts of text from the internet to generate text (or other content) in response to your queries. Just as you probably tell your children not to believe everything they see on the internet, the same rule applies to generative AI.
Credit: Iryna Imago/ShutterStock

When social media platforms were first created, some companies had lofty goals of bringing people together. 

To some extent, they succeeded. Social media has allowed people to connect. But it has also led to hate speech, violence, bullying, self-esteem issues in teenagers and other harms. 

Decades into the social media era, it’s clear that new technologies come with both upsides and downsides. 

Now, with the rapid growth of tools such as ChatGPT, Bing Chat, Bard and others, we have a chance to be more intentional at the outset. These tools are called generative artificial intelligence (AI) because they respond to user input or requests, such as questions, by using deep learning algorithms to predict and generate content.

If you’ve used one of these tools to help you plan a trip or choose a dinner recipe, you may have found that moment just as exciting as discovering a new social media platform. 

But maybe you also became concerned about the societal implications of this technology, such as job insecurity or misinformation. We have to be mindful and realistic about both the tremendous potential and frightening possibilities of this technology.

That’s why at NIST, we’re working with the technology community to put safeguards in place around all types of AI — not just the programs that generate text and images — so we can be more thoughtful about the impact AI will have on our society.     

AI Can Change Our World for the Better, if We Approach It Thoughtfully 

It may seem like generative AI came out of nowhere. But at NIST, we’ve been studying and thinking about AI for years. I’ve been studying machine learning for more than 20 years, with a specific focus on trustworthy AI — techniques to make sure that AI does not contribute to negative impacts on people and society. 

Elham Tabassi
Credit: NIST/Brandon Hayes

More than a year ago, NIST started working with the AI community on a voluntary AI Risk Management Framework to help technology companies think through the ramifications of the products they are creating or launching. Our goal is to help society benefit from AI technologies while protecting people from their harms.

We worked with tech companies, consumers, advocacy groups, legal scholars, sociologists and a host of other experts to think about the potential negative consequences of AI and how we can address them now — while the technology is in its relative infancy. 

Thinking about how to test an AI system not only for whether it works, but also for the effect it might have on individuals, communities and society — known as a socio-technical approach — is new for NIST and the research community.

We published the framework in January, and now we are working on benchmarks and testing approaches to measure the trustworthiness of AI technologies. 

Since then, we’ve convened working groups to produce additional guidelines on a variety of generative AI-related areas. The guidelines we’re developing cover topics such as how to test language models, how companies should report a cyber incident, and how the public can know whether a photo or video found online is authentic.

For example, we know from the emerging trend of deepfake photos and videos that we won’t win a “cat and mouse” game with that content. What if, instead, we had some sort of authenticity marking, so you knew a photo or video you found was legitimate? It might operate in a similar way to how some social media companies “verify” a well-known person’s account. If we could somehow mark authenticity and get people into the habit of looking for that authenticity, we could potentially help minimize the impact of deepfakes. 
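To make the idea of an authenticity mark a little more concrete, here is a minimal sketch of one way it could work in code: whoever publishes a photo signs its exact bytes with a private key, and anyone can later check that signature with the matching public key. The Python "cryptography" package, the Ed25519 key type and the placeholder image bytes below are illustrative assumptions, not a description of any NIST guideline or existing industry standard.

# Minimal sketch of authenticity marking via a digital signature.
# Assumes the "cryptography" package (pip install cryptography); the key type,
# placeholder bytes and workflow are illustrative only.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

# In a real workflow these bytes would be the photo file read at publication
# time; a short placeholder keeps this sketch self-contained.
image_bytes = b"placeholder bytes standing in for photo.jpg"

# The publisher (a camera maker, newsroom or platform) would hold the private key.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Sign the exact bytes of the image when it is published.
signature = private_key.sign(image_bytes)

def is_authentic(data: bytes, signature: bytes, public_key: Ed25519PublicKey) -> bool:
    """Return True only if the signature matches the bytes exactly."""
    try:
        public_key.verify(signature, data)
        return True
    except InvalidSignature:
        return False

print(is_authentic(image_bytes, signature, public_key))               # True: untouched
print(is_authentic(image_bytes + b" edited", signature, public_key))  # False: altered

A real scheme would also need a trusted way to distribute and vouch for the public keys, much like the account verification analogy above, but the core check is this simple.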

Additionally, testing the large language models these tools use to answer your questions is a significant challenge. If a tech company says, “Trust us, we tested this language model,” how do we know it really tested the model to the best possible standards? We want to work toward an industry norm of accepted best practices that every platform can use to test its language models. If everyone follows the same testing protocols, the public will have more confidence in a language model’s output, and the tech community can collectively figure out where the gaps and challenges are in its language models.

These are the types of issues we’re trying to get ahead of. 

We know this technology is going to continue to evolve faster than policy can, so we’re working to come up with a comprehensive set of guidelines that can be flexible enough to evolve as the technology changes. 

The technology companies creating AI products are a partner in this process, and many of them have already agreed to abide by our guidelines voluntarily. These companies have an interest in their products being trustworthy (and being seen as such by the public), so they’ve been willing partners in these efforts. 

If We Want AI to Be Trustworthy, We Have to Measure It 

Lord Kelvin said that if you cannot measure it, you cannot improve it. That’s how we are approaching the next phase of AI here at NIST. 

Earlier in my NIST career, I developed an approach for measuring fingerprint image quality, which was selected as an international standard. While it was a technically challenging problem, I approached it as just that — a technological problem with a technical solution.

Elham Tabassi demonstrates a fingerprint scanner in her office at NIST.
 Elham Tabassi previously developed an international standard for measuring fingerprint image quality.
Credit: Rich Press/NIST

I have learned that we can’t just look at AI systems through a technical or computational lens. Generative AI, in particular, is a complex mix of data and computational algorithms, along with the humans and the environments they interact with. That’s why we have to look at the positive or negative consequences and risks of these systems.

We should not build something just because we can — a trap technologists sometimes fall into. We have to think about the impacts we may be creating. That’s how we’ve studied AI so far and how we will continue to look at AI and its potential positive or negative effects. We’re asking not just whether an AI tool works accurately; we also want to know how it might impact people. How can everyone benefit? How can we be sure everyone is part of the solution? These questions make our work a lot more human-centered.

The AI Future Is Full of Possibilities 

What excites me about AI is its enormous beneficial impact on society — it can make our lives better when it works for everyone. When it comes to generative AI, we have a chance to do better than we did with social media. We can really think through the ramifications and put people at the center of these advancements.

Right now, generative AI is mostly a supplement to human judgment or a fun way to write jokes. In the future, it may help your doctor with diagnoses or have similar usefulness to people in their daily lives. 

There’s no need to fear the AI present (and future). But it’s OK to approach it with a healthy dose of caution. My colleagues at NIST and I are working to make sure the technology works for people, not the other way around. 

About the author

Elham Tabassi

Elham Tabassi is NIST chief AI advisor and the associate director for emerging technologies in NIST’s Information Technology Laboratory. She also leads NIST’s Trustworthy and Responsible AI program that cultivates the development and deployment of safe, secure, and trustworthy AI systems. She was named one of Time’s 100 Most Influential People in Artificial Intelligence in September 2023. Tabassi has been working on various machine learning and computer vision research projects with applications in biometrics evaluation and standards since she joined NIST in 1999.


Comments

I really like drawing and making poetry with AI. Bing amuses me a lot, as well as helping me do the things my mind imagines. I'm a Pisces, and I think that's why we are compatible in that regard.

Yes, this is needed. In fact, we need a system or tool that can differentiate between AI and HI (human intelligence). Otherwise, there would be big chaos.
