Displaying 176 - 200 of 444

NVLAP Federal Warfare System(s)

July 21, 2021
Author(s)
Bradley Moore, John Matyjas, Raymond Tierney, Jesse Angle, Jeannine Abiva, Jeff Hanes, David Dobosh, John Avera
NIST Handbook 150-872 presents the technical requirements and guidance for the accreditation of laboratories under the National Voluntary Laboratory Accreditation Program (NVLAP) Federal Warfare System(s) (FWS) program. It is intended for information and

Challenges of Accuracy in Germline Clinical Sequencing Data

July 20, 2021
Author(s)
Justin Zook, Ryan Poplin, Mark DePristo
Physicians are increasingly using clinical sequencing tests to establish diagnoses of patients who might have genetic disorders, which means that accuracy of sequencing and interpretation are important elements in ensuring the benefits of genetic testing

NIST 2021 Speaker Recognition Evaluation Plan

July 12, 2021
Author(s)
Omid Sadjadi, Craig Greenberg, Elliot Singer, Lisa Mason, Douglas Reynolds
The 2021 Speaker Recognition Evaluation (SRE21) is the next in an ongoing series of speaker recognition evaluations conducted by the US National Institute of Standards and Technology (NIST) since 1996. The objectives of the evaluation series are (1) to

TREC Deep Learning Track: Reusable Test Collections in the Large Data Regime

July 11, 2021
Author(s)
Ellen M. Voorhees, Ian Soboroff, Nick Craswell, Bhaskar Mitra, Emine Yilmaz, Daniel Campos
The TREC Deep Learning (DL) Track studies ad hoc search in the large data regime, meaning that a large set of human-labeled training data is available. Results so far indicate that the best models with large data are likely deep neural networks. This paper

User Guide for NIST Media Forensic Challenge (MFC) Datasets

July 6, 2021
Author(s)
Haiying Guan, Andrew Delgado, Yooyoung Lee, Amy Yates, Daniel Zhou, Timothee N. Kheyrkhah, Jonathan G. Fiscus
Over the past five years, NIST has released to the public a set of Media Forensic Challenge (MFC) datasets developed under the DARPA MediFor (Media Forensics) program. More than 300 individuals and 150 organizations from 26 countries and regions worldwide use our datasets

DeepNetQoE: Self-adaptive QoE Optimization Framework of Deep Networks

June 24, 2021
Author(s)
Hamid Gharavi
Future advances in deep learning, and their impact on the development of artificial intelligence (AI) in all fields, depend heavily on data size and computational power. Sacrificing massive computing resources in exchange for better precision rates of the

Securing AI Testbed (Dioptra) Documentation

June 14, 2021
Author(s)
Harold Booth, James Glasbrenner, Howard Huang, Cory Miniter, Julian Sexton
The NCCoE has built an experimentation testbed to begin to address the broader challenge of evaluation for attacks and defenses. The testbed aims to facilitate security evaluations of ML algorithms under a diverse set of conditions. To that end, it has a

Ray-based framework for state identification in quantum dot devices

June 7, 2021
Author(s)
Justyna Zwolak, Thomas McJunkin, Sandesh Kalantre, Samuel Neyens, Evan MacQuarrie, Mark A. Eriksson, Jacob Taylor
Quantum dots (QDs) defined with electrostatic gates are a leading platform for a scalable quantum computing implementation. However, with increasing numbers of qubits, the complexity of the control parameter space also grows. Traditional measurement

Exact Tile-Based Segmentation Inference for Images Larger than GPU Memory

June 3, 2021
Author(s)
Michael P. Majurski, Peter Bajcsy
We address the problem of performing exact (tiling-error free) out-of-core semantic segmentation inference of arbitrarily large images using fully convolutional neural networks (FCN). FCN models have the property that once a model is trained, it can be