
Displaying 26 - 50 of 260

A Plan for Global Engagement on AI Standards

July 26, 2024
Author(s)
Jesse Dunietz, Elham Tabassi, Mark Latonero, Kamie Roberts
Recognizing the importance of technical standards in shaping development and use of Artificial Intelligence (AI), the President's October 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (EO 14110)

Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile

July 26, 2024
Author(s)
Chloe Autio, Reva Schwartz, Jesse Dunietz, Shomik Jain, Martin Stanley, Elham Tabassi, Patrick Hall, Kamie Roberts
This document is a cross-sectoral profile of and companion resource for the AI Risk Management Framework (AI RMF 1.0) for Generative AI, pursuant to President Biden's Executive Order (EO) 14110 on Safe, Secure, and Trustworthy Artificial Intelligence. The

Forecasting Operation of a Chiller Plant Facility Using Data Driven Models

July 23, 2024
Author(s)
Behzad Salimian Rizi, Afshin Faramarzi, Amanda Pertzborn, Mohammad Heidarinejad
In recent years, data-driven models have enabled accurate prediction of chiller power consumption and chiller coefficient of performance (COP). This study evaluates the usage of time series Extreme Gradient Boosting (XGBoost) models to predict chiller

An Adaptable AI Assistant for Network Management

July 3, 2024
Author(s)
Amar Abane, Abdella Battou, Mheni Merzouki
This paper presents a network management AI assistant built with Large Language Models. It adapts at runtime to the network state and specific platform, leveraging techniques like prompt engineering, document retrieval, and Knowledge Graph integration. The

Fiscal Year 2023 Cybersecurity and Privacy Annual Report

May 20, 2024
Author(s)
Patrick D. O'Reilly, Kristina Rigopoulos
During Fiscal Year 2023 (FY 2023) – from October 1, 2022, through September 30, 2023 – the NIST Information Technology Laboratory (ITL) Cybersecurity and Privacy Program successfully responded to numerous challenges and opportunities in security and privacy

AI-Based Environment Segmentation Using a Context-Aware Channel Sounder

April 26, 2024
Author(s)
Anuraag Bodi, Samuel Berweger, Raied Caromi, Jihoon Bang, Jelena Senic, Camillo Gentile
We describe how the data acquired from the camera and Lidar systems of our context-aware radio-frequency (RF) channel sounder is used to reconstruct a 3D mesh of the surrounding environment, segmented and classified into discrete objects. First, the images

Context-Aware Channel Sounder for AI-Assisted Radio-Frequency Channel Modeling

April 26, 2024
Author(s)
Camillo Gentile, Jelena Senic, Anuraag Bodi, Samuel Berweger, Raied Caromi, Nada Golmie
We describe a context-aware channel sounder that consists of three separate systems: a radio-frequency system to extract multipaths scattered from the surrounding environment in the 3D geometrical domain, a Lidar system to generate a point cloud of the

An Adaptable AI Assistant for Network Management

April 12, 2024
Author(s)
Amar Abane, Abdella Battou, Mheni Merzouki
This paper presents a network management AI assistant built with Large Language Models. It adapts at runtime to the network state and specific platform, leveraging techniques like prompt engineering, document retrieval, and Knowledge Graph integration. The

2024 NIST Generative AI (GenAI): Data Creation Specification for Text-to-Text (T2T) Generators

April 1, 2024
Author(s)
Yooyoung Lee, George Awad, Asad Butt, Lukas Diduch, Kay Peterson, Seungmin Seo, Ian Soboroff, Hariharan Iyer
Generator (G) teams will be tested on their system's ability to generate content that is indistinguishable from human-generated content. For the pilot study, the evaluation will help determine strengths and weaknesses in their approaches including insights

2024 NIST Generative AI (GenAI): Evaluation Plan for Text-to-Text (T2T) Discriminators

April 1, 2024
Author(s)
Yooyoung Lee, George Awad, Asad Butt, Lukas Diduch, Kay Peterson, Seungmin Seo, Ian Soboroff, Hariharan Iyer
Generator (G) teams will be tested on their system's ability to generate content that is indistinguishable from human-generated content. For the pilot study, the evaluation will help determine strengths and weaknesses in their approaches including insights

AI Use Taxonomy: A Human-Centered Approach

March 26, 2024
Author(s)
Mary Frances Theofanos, Yee-Yin Choong, Theodore Jensen
As artificial intelligence (AI) systems continue to be developed, humans will increasingly participate in human-AI interactions. Humans interact with AI systems to achieve particular goals. To ensure that AI systems contribute positively to human-AI

Improving the TENOR of Labeling: Re-evaluating Topic Models for Content Analysis

March 23, 2024
Author(s)
Zongxia Li, Andrew Mao, Jordan Boyd-Graber, Daniel Stephens, Emily Walpole, Alden A. Dima, Juan Fung
Topic models are a popular tool for understanding text collections, but their evaluation has been a point of contention. Automated evaluation metrics such as coherence are often used, however, their validity has been questioned for neural topic models

Photonic Online Learning

January 9, 2024
Author(s)
Sonia Buckley, Adam McCaughan, Bakhrom Oripov
Training in machine learning necessarily involves more operations than inference only, with higher precision, more memory, and added computational complexity. In hardware, many implementations side-step this issue by designing "inference-only" hardware

Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations

January 4, 2024
Author(s)
Apostol Vassilev, Alina Oprea, Alie Fordyce, Hyrum Andersen
This NIST AI report develops a taxonomy of concepts and defines terminology in the field of adversarial machine learning (AML). The taxonomy is built on a survey of the AML literature and is arranged in a conceptual hierarchy that includes key types of ML

Facilitating Stakeholder Communication around AI-Enabled Systems and Business Processes

November 21, 2023
Author(s)
Edward Griffor, Matthew Bundas, Chasity Nadeau, Jeannine Shantz, Thanh Nguyen, Marcello Balduccini, Tran Son
Artificial Intelligence (AI) is often critical to the success of modern business processes. Leveraging it, however, is non-trivial. A major hurdle is communication: discussing system requirements among stakeholders with different backgrounds and goals

2022 OpenFAD Evaluation Plan (Open Fine-grained Activity Detection)

October 2, 2023
Author(s)
Yooyoung Lee, Jonathan Fiscus, Lukas Diduch, Jeffery Byrne
This document describes an evaluation of the 2022 Open Fine-grained Activity Detection (OpenFAD) challenge. The evaluation plan covers resources, task definitions, task conditions, file formats for system inputs and outputs, evaluation metrics, scoring

Labeling Software Security Vulnerabilities

October 1, 2023
Author(s)
Irena Bojanova, John Guerrerio
Labeling software security vulnerabilities would greatly benefit modern artificial intelligence cybersecurity research. The National Vulnerability Database (NVD) partially achieves this via assignment of Common Weakness Enumeration (CWE) entries to Common