
Description

The Human-Centered Artificial Intelligence (AI) program within the NIST Visualization and Usability Group performs research across several AI areas, always keeping humans at the core of our work. Our projects include:

AI Perceptions

The general public increasingly interacts with, or is impacted by, AI in a variety of domains. Understanding how the public perceives AI can help guide research in human-AI interaction and support the development of human-centered AI systems.

The Visualization and Usability Group conducted semi-structured interviews with 25 members of the U.S. general public and 20 AI experts who work in U.S. industry. Interview transcripts were qualitatively analyzed with the goal of identifying perceptions and beliefs about AI among the two groups.

Qualitative analysis revealed that humanness and ethics were central components of participants' perceptions of AI. Humanness, the set of traits considered to set humans apart from other intelligent actors, was a consistent component of various beliefs about AI's characteristics. Ethics arose in participants' discussions of the role of technology in society, centering on views of AI as made and used by people. These findings point to beliefs and concerns among both the public and experts that warrant focused future research on perceptions of AI.

AI Ethics

Although the benefits of AI to society are potentially transformative, many fear that without focused attention to ethical considerations, the cost to human rights may be too great. Our team collaborated with NIST's Research Protections Office to investigate the application of the Belmont Principles to ethical AI research.

The Belmont Principles identify three basic ethical principles for the protection of human subjects:

  1. Respect for persons: Individuals should be treated as autonomous agents. Persons with diminished autonomy are entitled to protection.
  2. Beneficence: Do no harm. Maximize benefit and minimize risk.
  3. Justice: The benefits and risks of research should be distributed fairly across the population that may benefit from its results.

See our recent NIST news story and IEEE Computer publication.

AI Use Taxonomy

The advancement of AI technologies across a variety of domains has spurred measurement and evaluation efforts aimed at ensuring that systems are trustworthy and responsible. AI systems are often categorized by technique or domain of application, which may limit the ability to develop measurement and evaluation approaches that apply to a broad range of systems. The Visualization and Usability Group developed the AI Use Taxonomy to categorize AI systems in a way that is 1) technique-independent, 2) domain-independent, and 3) human-centered. The taxonomy sets forth 16 human activities that describe the ways an AI system may contribute to a human's overall task and intended outcomes. The motivation and approach to developing the taxonomy are described at https://doi.org/10.6028/NIST.AI.200-1.

The taxonomy can be applied in task analysis of specific AI use cases and used to categorize the different ways AI is employed within an organization. The research team continues to work with various government entities to improve the utility of the taxonomy and to assist with categorizing AI use cases from a human-centered perspective, focusing on overall tasks and human goals.
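As a rough illustration of how a taxonomy-based categorization might be represented in practice, the Python sketch below tags a hypothetical use case with the human activities it supports and compares use cases by activity rather than by technique or domain. The activity names shown are placeholders for illustration only; the actual 16 activities are defined in NIST AI 200-1.

```python
from dataclasses import dataclass, field
from enum import Enum


class Activity(Enum):
    # Placeholder names for illustration only; the actual 16 human
    # activities are defined in NIST AI 200-1
    # (https://doi.org/10.6028/NIST.AI.200-1).
    MONITOR = "monitor"
    RECOMMEND = "recommend"
    PREDICT = "predict"
    GENERATE = "generate"


@dataclass
class AIUseCase:
    """An AI use case tagged with the human activities the system
    supports, independent of its technique or application domain."""
    name: str
    human_goal: str
    activities: set = field(default_factory=set)


# Hypothetical use case: an assistant that flags and ranks incoming items.
triage = AIUseCase(
    name="triage assistant",
    human_goal="prioritize incoming cases",
    activities={Activity.MONITOR, Activity.RECOMMEND},
)


def shared_activities(a, b):
    """Compare two use cases by activity, not by domain or technique."""
    return a.activities & b.activities
```

Because the tags are activities rather than techniques, two systems built on entirely different methods (say, a rule-based tool and a machine-learning model) can be evaluated against the same human-centered categories.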

AI User Trust Measurement

Trust has been identified as an influential predictor of operator behavior in human-automation interaction. Research on trust in automation has advanced through the development and validation of psychometric scales that capture the subjective construct of trust. Trust measurement can help clarify the effects of user-, system-, and context-related variables on users' perceptions of and behavior with technology.
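Scale validation typically includes checking internal-consistency reliability. The sketch below computes Cronbach's alpha, a standard reliability statistic, for a matrix of Likert item scores; the data shown are fabricated for illustration and are not results from the NIST study.

```python
import numpy as np


def cronbach_alpha(scores: np.ndarray) -> float:
    """Internal-consistency reliability for an (n_respondents x n_items)
    matrix of Likert item scores:
        alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
    """
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)


# Hypothetical responses: 4 participants x 3 items on a 7-point scale.
data = np.array([[6, 5, 6],
                 [4, 4, 5],
                 [7, 6, 7],
                 [3, 4, 3]])
print(round(cronbach_alpha(data), 2))  # ~0.94 for this toy data
```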

Recent efforts in trustworthy and responsible AI emphasize the importance of sociotechnical approaches that account for the subjective experiences of those using or impacted by AI systems. To better understand user trust in AI systems, the Visualization and Usability Group sought to validate existing psychometric scales for trust in automation in the AI context. We conducted an online study with a sample of federal employees and a sample of the general public in which trust was assessed with one of two trust-in-automation scales. Participants were asked to imagine themselves interacting with AI systems in various contexts and to report their trust along with other trust-relevant variables.
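Scales of this kind are usually scored by reverse-coding negatively worded items and averaging across items. The routine below is a minimal, hypothetical scoring sketch; the item names, scale length, and choice of reverse-scored items are assumptions for illustration, not details of the instruments used in the study.

```python
import statistics

LIKERT_MAX = 7  # assume 7-point items (1 = strongly disagree)


def score_trust(responses: dict, reverse_scored: set) -> float:
    """Reverse-code negatively worded items, then average all items so
    that higher scores indicate greater reported trust."""
    adjusted = [
        (LIKERT_MAX + 1 - value) if item in reverse_scored else value
        for item, value in responses.items()
    ]
    return statistics.mean(adjusted)


# Hypothetical responses from one participant to a 4-item scale.
responses = {"q1": 6, "q2": 5, "q3": 2, "q4": 7}
print(score_trust(responses, reverse_scored={"q3"}))  # 6.0
```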

