Search Publications


Displaying 1 - 25 of 244

Reflection of its Creators: Qualitative Analysis of General Public and Expert Perceptions of Artificial Intelligence

October 16, 2024
Author(s)
Theodore Jensen, Mary Frances Theofanos, Kristen K. Greene, Olivia Williams, Kurtis Goad, Janet Bih Fofang
The increasing prevalence of artificial intelligence (AI) will likely lead to new interactions and impacts for the general public. An understanding of people's perceptions of AI can be leveraged to design and deploy AI systems toward human needs and values

An Overarching Quality Evaluation Framework for Additive Manufacturing Digital Twin

September 2, 2024
Author(s)
Yan Lu, Zhuo Yang, Shengyen Li, Yaoyao Fiona Zhao, Jiarui Xie, Mutahar Safdar, Hyunwoong Ko
The key differentiation of digital twins from existing model-based engineering approaches lies in the continuous synchronization between physical and virtual twins through data exchange. The success of digital twins, whether operated automatically or with

A Plan for Global Engagement on AI Standards

July 26, 2024
Author(s)
Jesse Dunietz, Elham Tabassi, Mark Latonero, Kamie Roberts
Recognizing the importance of technical standards in shaping development and use of Artificial Intelligence (AI), the President's October 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (EO 14110)

Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile

July 26, 2024
Author(s)
Chloe Autio, Reva Schwartz, Jesse Dunietz, Shomik Jain, Martin Stanley, Elham Tabassi, Patrick Hall, Kamie Roberts
This document is a cross-sectoral profile of and companion resource for the AI Risk Management Framework (AI RMF 1.0) for Generative AI, pursuant to President Biden's Executive Order (EO) 14110 on Safe, Secure, and Trustworthy Artificial Intelligence. The

Forecasting Operation of a Chiller Plant Facility Using Data Driven Models

July 23, 2024
Author(s)
Behzad Salimian Rizi, Afshin Faramarzi, Amanda Pertzborn, Mohammad Heidarinejad
In recent years, data-driven models have enabled accurate prediction of chiller power consumption and chiller coefficient of performance (COP). This study evaluates the usage of time series Extreme Gradient Boosting (XGBoost) models to predict chiller
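
As a minimal sketch of the kind of time-series XGBoost regression described above, the following assumes a time-indexed CSV of plant measurements; the file name, column names, lag features, and hyperparameters are illustrative placeholders, not the study's actual configuration.

# Illustrative sketch only: a time-series XGBoost regressor for chiller power,
# with made-up lag features and hyperparameters (not the study's configuration).
import pandas as pd
from xgboost import XGBRegressor
from sklearn.metrics import mean_absolute_error

def make_lag_features(df, target="chiller_power_kw", lags=(1, 2, 3)):
    """Turn a univariate series into supervised lag features."""
    out = pd.DataFrame({f"lag_{k}": df[target].shift(k) for k in lags})
    out["y"] = df[target]
    return out.dropna()

# df is assumed to be a time-indexed DataFrame of chiller plant measurements.
df = pd.read_csv("chiller_plant.csv", parse_dates=["timestamp"], index_col="timestamp")
data = make_lag_features(df)
split = int(len(data) * 0.8)                      # chronological train/test split
X_train, y_train = data.iloc[:split, :-1], data.iloc[:split, -1]
X_test, y_test = data.iloc[split:, :-1], data.iloc[split:, -1]

model = XGBRegressor(n_estimators=300, max_depth=6, learning_rate=0.05)
model.fit(X_train, y_train)
print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))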

An Adaptable AI Assistant for Network Management

July 3, 2024
Author(s)
Amar Abane, Abdella Battou, Mheni Merzouki
This paper presents a network management AI assistant built with Large Language Models. It adapts at runtime to the network state and specific platform, leveraging techniques like prompt engineering, document retrieval, and Knowledge Graph integration. The
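
A minimal sketch of the document-retrieval step behind such a retrieval-augmented assistant is shown below, using TF-IDF similarity; the example documents, query, and prompt template are assumptions, and the paper's LLM and Knowledge Graph components are not reproduced here.

# Illustrative sketch of the document-retrieval step behind a retrieval-augmented
# prompt; the snippets, query, and prompt template are assumptions, not the
# paper's actual pipeline (which also integrates a Knowledge Graph).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "show interfaces displays the status of all interfaces on the device",
    "ospf neighbor adjacency states include Down, Init, 2-Way, ExStart, Full",
    "vlan configuration requires the interface to be in switchport mode",
]

def retrieve(query, documents, k=2):
    """Rank documents by TF-IDF cosine similarity to the query."""
    vec = TfidfVectorizer().fit(documents + [query])
    sims = cosine_similarity(vec.transform([query]), vec.transform(documents))[0]
    return [documents[i] for i in sims.argsort()[::-1][:k]]

query = "why is my OSPF neighbor stuck in ExStart?"
context = "\n".join(retrieve(query, docs))
prompt = f"Network state context:\n{context}\n\nOperator question: {query}\nAnswer:"
# `prompt` would then be passed to the LLM of choice.
print(prompt)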

Fiscal Year 2023 Cybersecurity and Privacy Annual Report

May 20, 2024
Author(s)
Patrick D. O'Reilly, Kristina Rigopoulos
During Fiscal Year 2023 (FY 2023) – from October 1, 2022, through September 30, 2023 – the NIST Information Technology Laboratory (ITL) Cybersecurity and Privacy Program successfully responded to numerous challenges and opportunities in security and privacy

An Adaptable AI Assistant for Network Management

April 12, 2024
Author(s)
Amar Abane, Abdella Battou, Mheni Merzouki
This paper presents a network management AI assistant built with Large Language Models. It adapts at runtime to the network state and specific platform, leveraging techniques like prompt engineering, document retrieval, and Knowledge Graph integration. The

AI Use Taxonomy: A Human-Centered Approach

March 26, 2024
Author(s)
Mary Frances Theofanos, Yee-Yin Choong, Theodore Jensen
As artificial intelligence (AI) systems continue to be developed, humans will increasingly participate in human-AI interactions. Humans interact with AI systems to achieve particular goals. To ensure that AI systems contribute positively to human-AI

Improving the TENOR of Labeling: Re-evaluating Topic Models for Content Analysis

March 23, 2024
Author(s)
Zongxia Li, Andrew Mao, Jordan Boyd-Graber, Daniel Stephens, Emily Walpole, Alden A. Dima, Juan Fung
Topic models are a popular tool for understanding text collections, but their evaluation has been a point of contention. Automated evaluation metrics such as coherence are often used, however, their validity has been questioned for neural topic models
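
For context on the kind of automated coherence metric whose validity is questioned here, the following is a minimal sketch of a document-level NPMI coherence score over a toy corpus; the corpus, topic words, and epsilon smoothing are illustrative assumptions.

# Illustrative sketch of a document-level NPMI coherence score, the kind of
# automated metric whose validity the paper questions; the corpus and topic
# are toy examples, and a small epsilon guards against log(0).
import math
from itertools import combinations

corpus = [
    {"network", "router", "packet", "latency"},
    {"router", "switch", "packet"},
    {"topic", "model", "coherence", "evaluation"},
    {"model", "evaluation", "metric"},
]
topic_top_words = ["router", "packet", "network"]

def npmi_coherence(top_words, docs, eps=1e-12):
    """Average NPMI over all pairs of a topic's top words."""
    n = len(docs)
    p = lambda *ws: sum(all(w in d for w in ws) for d in docs) / n
    scores = []
    for wi, wj in combinations(top_words, 2):
        pij = p(wi, wj) + eps
        scores.append(math.log(pij / (p(wi) * p(wj) + eps)) / -math.log(pij))
    return sum(scores) / len(scores)

print(npmi_coherence(topic_top_words, corpus))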

Photonic Online Learning

January 9, 2024
Author(s)
Sonia Buckley, Adam McCaughan, Bakhrom Oripov
Training in machine learning necessarily involves more operations than inference only, with higher precision, more memory, and added computational complexity. In hardware, many implementations side-step this issue by designing "inference-only" hardware
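
A minimal sketch of the training-versus-inference asymmetry noted above, using a single linear neuron: inference is one forward pass, while an online training step adds gradient computation and a weight update. The data, learning rate, and model are arbitrary assumptions, not content from the paper.

# Illustrative contrast between inference-only and online training for a single
# linear neuron; values are arbitrary and only meant to show the extra
# operations and state that training requires over inference.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=3)          # weights must be stored (and updated) for training

def infer(x, w):
    """Inference: one forward pass, no state changes."""
    return 1.0 / (1.0 + np.exp(-x @ w))

def online_train_step(x, y, w, lr=0.1):
    """Training: forward pass + error + gradient + weight update."""
    y_hat = infer(x, w)
    grad = (y_hat - y) * x      # gradient of the logistic loss w.r.t. w
    return w - lr * grad        # extra arithmetic and memory traffic

x, y = np.array([0.5, -1.2, 0.3]), 1.0
print("prediction:", infer(x, w))
w = online_train_step(x, y, w)
print("updated prediction:", infer(x, w))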

Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations

January 4, 2024
Author(s)
Apostol Vassilev, Alina Oprea, Alie Fordyce, Hyrum Anderson
This NIST AI report develops a taxonomy of concepts and defines terminology in the field of adversarial machine learning (AML). The taxonomy is built on a survey of the AML literature and is arranged in a conceptual hierarchy that includes key types of ML
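
As a minimal sketch of one attack class such a taxonomy covers, the following shows an FGSM-style evasion attack against a fixed logistic classifier; the weights, input, and perturbation budget are arbitrary assumptions, not content from the report.

# Illustrative sketch of an evasion attack (FGSM-style) against a fixed logistic
# classifier; the weights, input, and epsilon are arbitrary assumptions.
import numpy as np

w, b = np.array([1.5, -2.0, 0.7]), 0.1      # a "trained" linear classifier
x, y = np.array([0.2, -0.4, 1.0]), 1.0      # a benign input with true label 1

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def fgsm_evasion(x, y, w, b, eps=0.3):
    """Perturb x in the direction that increases the classifier's loss."""
    grad_x = (sigmoid(x @ w + b) - y) * w   # gradient of logistic loss w.r.t. x
    return x + eps * np.sign(grad_x)

x_adv = fgsm_evasion(x, y, w, b)
print("clean score:", sigmoid(x @ w + b))
print("adversarial score:", sigmoid(x_adv @ w + b))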

Facilitating Stakeholder Communication around AI-Enabled Systems and Business Processes

November 21, 2023
Author(s)
Edward Griffor, Matthew Bundas, Chasity Nadeau, Jeannine Shantz, Thanh Nguyen, Marcello Balduccini, Tran Son
Artificial Intelligence (AI) is often critical to the success of modern business processes. Leveraging it, however, is non-trivial. A major hurdle is communication: discussing system requirements among stakeholders with different backgrounds and goals