Description

This project will investigate how to foster trust in AI among the US general public. There is currently no method for measuring user trust in AI or for identifying the factors that influence users' trust decisions. Providing trustworthy values for an AI system's characteristics is a start, but being trustworthy and being trusted are two different things. A user can be informed that those characteristic values are valid, yet the decision to trust is more complicated: user trust depends on the context in which the AI system is used, and factors such as need and risk play a part in whether the user will trust the AI.

The AI user trust decision, like other human trust decisions, is a psychological process. Studying it will take a team well versed in psychological decision-making and other cognitive processes.

Updates and Contact Information

Stay tuned for further announcements and related activity by checking this page or by subscribing to Artificial Intelligence updates through GovDelivery at https://public.govdelivery.com/accounts/USNIST/subscriber/new.

Please direct questions to aiusertrustcomments [at] nist.gov.

Public Comment for Trust and Artificial Intelligence

We are seeking your feedback on the recently released first draft of our project description, User Trust and Artificial Intelligence (NISTIR 8332-draft).

The paper presents a model of the factors in an AI user's decision about how much to trust an AI system, along with a review of the previous research that informed the model. The model is expressed as an equation in order to illustrate how various factors contribute to the user's perceived trustworthiness of the system. We hope that further research on AI user trust factors will resolve how precisely user trust can be measured.
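As a rough illustration of what an equation of this kind can look like (this sketch is not the draft's actual model; the weighted-sum form and symbols here are assumptions made purely for illustration), perceived trustworthiness might be written as a context-weighted combination of trustworthiness characteristics:

    T = w1·t1 + w2·t2 + … + wn·tn

where each ti is the user's assessment of one AI trustworthiness characteristic (accuracy or explainability, for instance), and each wi is the weight that characteristic carries for that user in that context, with factors such as need and risk shifting the weights. The draft itself gives the model's actual form and notation.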

Organizations are encouraged to review the draft and provide feedback for possible incorporation into the project description before the public comment period closes on July 30, 2021. The NIST authors will review and adjudicate all comments received before publishing the final version. We appreciate you taking the time to read the draft and provide feedback to help NIST refine this project.

You may submit comments by email to aiusertrustcomments [at] nist.gov. Using this template is preferred but not necessary. Anonymous comments are accepted, although including your name and contact information will enable the authors to contact you for clarification, if necessary. Please note that all comments received are subject to release under the Freedom of Information Act. Please do not submit confidential business information or otherwise sensitive or protected information.
