
Search Publications by: Mark A. Przybocki (Fed)

Displaying 1 - 25 of 31

Four Principles of Explainable Artificial Intelligence (Draft)

August 18, 2020
Author(s)
P J. Phillips, Amanda C. Hahn, Peter C. Fontana, David A. Broniatowski, Mark A. Przybocki
We introduce four principles for explainable artificial intelligence (AI) that comprise the fundamental properties for explainable AI systems. They were developed to encompass the multidisciplinary nature of explainable AI, including the fields of computer …

2018 National Institute of Standards and Technology Environmental Scan

March 26, 2019
Author(s)
Jason E. Boehm, Heather Evans, Ajitkumar Jillavenkatesa, Maria Nadal, Mark A. Przybocki, Paul Witherell, Rebecca A. Zangmeister
The 2018 National Institute of Standards and Technology Environmental Scan provides an analysis of the external factors that can influence NIST and the fulfillment of its mission as the agency looks to create a strategic plan for the coming years. …

Open Speech Analytic Technologies Pilot Evaluation (OpenSAT Pilot)

February 27, 2019
Author(s)
Frederick R. Byers, Jonathan G. Fiscus, Seyed Omid Sadjadi, Gregory A. Sanders, Mark A. Przybocki
Open Speech Analytic Technologies Pilot Evaluation (OpenSAT) is a new speech analytic technology evaluation series organized by NIST that will begin with a pilot evaluation in the Spring of 2017. The pilot includes three tasks: Speech Activity Detection …

A New International Data Science Program

August 4, 2016
Author(s)
Bonnie J. Dorr, Craig Greenberg, Peter Fontana, Mark A. Przybocki, Marion Le Bras, Cathryn A. Ploehn, Oleg Aulov, Wo L. Chang
This article sets out to examine foundational issues in data science including current challenges, basic research questions, and expected advances, as the basis for a new Data Science Research Program and associated Data Science Evaluation (DSE) series …

Data Science Research Program at NIST Information Access Division

August 4, 2016
Author(s)
Bonnie J. Dorr, Craig Greenberg, Peter Fontana, Mark A. Przybocki, Marion Le Bras, Cathryn A. Ploehn, Oleg Aulov, Edmond J. Golden III, Wo L. Chang
We examine foundational issues in data science including current challenges, basic research questions, and expected advances, as the basis for a new Data Science Initiative and evaluation series, introduced by the Information Access Division at the …

The NIST IAD Data Science Evaluation Series: Part of the NIST Information Access Division Data Science Research Program

October 29, 2015
Author(s)
Bonnie J. Dorr, Craig Greenberg, Peter Fontana, Mark A. Przybocki, Marion Le Bras, Cathryn A. Ploehn, Oleg Aulov, Wo L. Chang
The Information Access Division (IAD) of the National Institute of Standards and Technology (NIST) launched a new Data Science Research Program (DSRP) in the fall of 2015. This research program focuses on evaluation-driven research and will establish a new …

The NIST IAD Data Science Research Program

October 19, 2015
Author(s)
Bonnie J. Dorr, Peter C. Fontana, Craig S. Greenberg, Mark A. Przybocki, Marion Le Bras, Cathryn A. Ploehn, Oleg Aulov, Martial Michel, Edmond J. Golden III, Wo L. Chang
We examine foundational issues in data science including current challenges, basic research questions, and expected advances, as the basis for a new Data Science Research Program and evaluation series, introduced by the Information Access Division of the …

Document Image Collection Using Amazon's Mechanical Turk

June 4, 2010
Author(s)
Audrey N. Tong, Mark A. Przybocki
We present findings from a collaborative effort aimed at testing the feasibility of using Amazon's Mechanical Turk as a data collection platform to build a corpus of document images. Experimental design and implementation workflow are described.

Translation Adequacy and Preference Evaluation Tool (TAP-ET)

May 28, 2008
Author(s)
Mark A. Przybocki, Kay Peterson, P. S. Bronsart
Evaluation of Machine Translation (MT) technology is often tied to the requirement for tedious manual judgments of translation quality. While automated MT metrology continues to be an active area of research, a well-known and often accepted standard metric …

NIST 2003 Language Recognition Evaluation

September 1, 2003
Author(s)
Alvin F. Martin, Mark A. Przybocki
The 2003 NIST Language Recognition Evaluation was very similar to the last such NIST evaluation in 1996. It was intended to establish a new baseline of current performance capability for language recognition of conversational telephone speech and to lay …

NIST's Assessment of Text Independent Speaker Recognition Performance

November 1, 2002
Author(s)
Mark A. Przybocki, Alvin F. Martin
NIST has coordinated annual evaluations of text-independent speaker recognition since 1996. These evaluations aim to provide important contributions to the direction of research efforts and the calibration of technical capabilities. They are intended to be …

The NIST Speaker Recognition Evaluations: 1996-2001

December 1, 2001
Author(s)
Alvin F. Martin, Mark A. Przybocki
We discuss the history and purpose of the NIST evaluations of speaker recognition performance. We cover the sites that have participated, the performance measures used, and the formats used to report results. We consider the extent to which there has been …

Speaker Recognition in a Multi-Speaker Environment

September 1, 2001
Author(s)
Alvin F. Martin, Mark A. Przybocki
We discuss the multi-speaker tasks of detection, tracking, and segmentation of speakers as included in recent NIST Speaker Recognition Evaluations. We consider how performance for the two-speaker detection task is related to that for the corresponding one …

Odyssey Text Independent Evaluation Data

January 1, 2001
Author(s)
Mark A. Przybocki, Alvin F. Martin
We discuss the text-independent data supplied for the 2001: A Speaker Odyssey evaluation track. We cover the data creation and selection process, and we present results restricted to the Odyssey test set for participating systems in the 2000 NIST Speaker …