Publications by Craig Greenberg (Fed)

Displaying 1 - 25 of 34

Extending Explainable Boosting Machines to Scientific Image Data

November 30, 2023
Author(s)
Daniel Schug, Sai Yerramreddy, Rich Caruana, Craig Greenberg, Justyna Zwolak
As the deployment of computer vision technology becomes increasingly common in science, the need for explanations of the system and its output has become a focus of great concern. Driven by the pressing need for interpretable models in science, we propose…
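For readers unfamiliar with the underlying model: an Explainable Boosting Machine (EBM) is a glass-box generalized additive model available in the open-source interpret library. The sketch below is not the paper's image extension; it is only a minimal illustration of a standard tabular EBM, assuming interpret and scikit-learn are installed and using an illustrative dataset.

    # Minimal sketch of a standard (tabular) Explainable Boosting Machine.
    # This shows the baseline model only, NOT the image extension the
    # paper proposes; the dataset and settings here are illustrative.
    from interpret.glassbox import ExplainableBoostingClassifier
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # An EBM learns one shape function per feature (plus optional pairwise
    # interactions), so each feature's contribution can be inspected directly.
    ebm = ExplainableBoostingClassifier(random_state=0)
    ebm.fit(X_train, y_train)
    print("held-out accuracy:", ebm.score(X_test, y_test))

    # Global explanation object: the learned per-feature shape functions,
    # which is the source of the model's interpretability.
    explanation = ebm.explain_global()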

NIST 2022 Language Recognition Evaluation Plan

August 31, 2022
Author(s)
Yooyoung Lee, Craig Greenberg, Lisa Mason, Elliot Singer
The 2022 NIST language recognition evaluation (LRE22) is the 9th cycle in an ongoing language recognition evaluation series that began in 1996. The objectives of the evaluation series are (1) to advance technologies in language recognition with innovative…

Voice Biometrics: Future Trends and Challenges Ahead

September 1, 2021
Author(s)
Doug Reynolds, Craig Greenberg
Voice has become woven into the fabric of everyday human-computer interactions via ubiquitous assistants like Siri, Alexa, Google, Bixby, Viv, etc. The use of voice will only accelerate as speech interfaces move to wearables [starner2002role], vehicles…

NIST 2021 Speaker Recognition Evaluation Plan

July 12, 2021
Author(s)
Omid Sadjadi, Craig Greenberg, Elliot Singer, Lisa Mason, Douglas Reynolds
The 2021 Speaker Recognition Evaluation (SRE21) is the next in an ongoing series of speaker recognition evaluations conducted by the US National Institute of Standards and Technology (NIST) since 1996. The objectives of the evaluation series are (1) to…

NIST 2020 CTS Speaker Recognition Challenge Evaluation Plan

July 29, 2020
Author(s)
Seyed Omid Sadjadi, Craig S. Greenberg, Elliot Singer, Douglas A. Reynolds, Lisa Mason
Following the success of the 2019 Conversational Telephone Speech (CTS) Speaker Recognition Challenge, which received 1347 submissions from 67 academic and industrial organizations, the US National Institute of Standards and Technology (NIST) will be…

NIST Pilot Too Close for Too Long (TC4TL) Challenge Evaluation Plan

June 18, 2020
Author(s)
Seyed Omid Sadjadi, Craig S. Greenberg, Douglas A. Reynolds
One of the keys to managing the current (and future) epidemic is notifying people of possible virus exposure so they can isolate and seek treatment to limit further spread of the disease. While manual contact tracing is effective for notifying those who…
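For context, the TC4TL task asks systems to decide, from phone sensor readings such as Bluetooth RSSI, whether two devices were too close for too long. Purely as an illustration of the physics involved, the sketch below converts an RSSI reading into a rough distance estimate using the standard log-distance path-loss model; the reference power, path-loss exponent, and threshold are assumed values, not parameters taken from the evaluation plan.

    def rssi_to_distance(rssi_dbm: float,
                         tx_power_dbm: float = -59.0,  # assumed RSSI at 1 m (device-specific)
                         path_loss_exp: float = 2.0) -> float:  # assumed free-space exponent
        # Log-distance path-loss model: rssi = tx_power - 10 * n * log10(d),
        # solved here for d (meters). Real TC4TL data adds multipath, body
        # shadowing, and per-device variation, so this is a first-order picture.
        return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exp))

    # Flag a reading against an assumed 1.8 m (~6 ft) "too close" threshold.
    too_close = rssi_to_distance(-65.0) < 1.8
    print(too_close)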

The 2019 NIST Audio-Visual Speaker Recognition Evaluation

May 18, 2020
Author(s)
Seyed Omid Sadjadi, Craig S. Greenberg, Elliot Singer, Douglas A. Reynolds, Lisa Mason, Jaime Hernandez-Cordero
In 2019, the U.S. National Institute of Standards and Technology (NIST) conducted the most recent in an ongoing series of speaker recognition evaluations (SRE). There were two components to SRE19: 1) a leaderboard-style Challenge using unexposed…

The 2019 NIST Speaker Recognition Evaluation CTS Challenge

May 18, 2020
Author(s)
Seyed Omid Sadjadi, Craig S. Greenberg, Elliot Singer, Douglas Reynolds, Lisa Mason, Jaime Hernandez-Cordero
In 2019, the U.S. National Institute of Standards and Technology (NIST) conducted a leaderboard-style speaker recognition challenge using conversational telephone speech (CTS) data extracted from the unexposed portion of the Call My Net 2 (CMN2) corpus…

The 2018 NIST Speaker Recognition Evaluation

September 15, 2019
Author(s)
Omid Sadjadi, Craig Greenberg, Elliot Singer, Douglas A. Reynolds, Lisa Mason, Jaime Hernandez-Cordero
In 2018, the U.S. National Institute of Standards and Technology (NIST) conducted the most recent in an ongoing series of speaker recognition evaluations (SRE). SRE18 was organized in a similar manner to SRE16, focusing on speaker detection over…

A data science challenge for converting airborne remote sensing data into ecological information

February 28, 2019
Author(s)
Sergio Marconi, Sarah J. Graves, Dihong Gong, Shahriari Nia Morteza, Marion Le Bras, Bonnie J. Dorr, Peter Fontana, Justin Gearhart, Craig Greenberg, Dave J. Harris, Sugumar A. Kumar, Agarwal Nishant, Joshi Prarabdh, Sandeep U. Rege, Stephanie A. Bohlman, Ethan P. White, Daisy Z. Wang
In recent years, ecology has reached the point where a data science competition could be very productive. Large amounts of open data are increasingly available and areas of shared interest around which to center competitions are increasingly prominent. The…

NIST IAD DSE Evaluation Plan 2018

October 16, 2018
Author(s)
Bonnie J. Dorr, Peter Fontana, Craig Greenberg, Marion Le Bras, Maxime Hubert, Alexandre F. Boyer
This document describes the plan for the National Institute of Standards and Technology (NIST) Information Access Division (IAD) Data Science Evaluation (DSE) Series Evaluation to be held starting July 2018 (Tentative). The DSE consists of a series of…

Performance Analysis of the 2017 NIST Language Recognition Evaluation

September 2, 2018
Author(s)
Omid Sadjadi, Timothee N. Kheyrkhah, Craig Greenberg, Douglas A. Reynolds, Elliot Singer, Lisa Mason, Jaime Hernandez-Cordero
The 2017 NIST language recognition evaluation (LRE) was held in the autumn of 2017. Similar to past LREs, the basic task in LRE17 was language detection, with an emphasis on discriminating closely related languages (14 in total) selected from 5…

The 2017 NIST Language Recognition Evaluation

June 26, 2018
Author(s)
Seyed Omid Sadjadi, Timothee N. Kheyrkhah, Audrey N. Tong, Craig S. Greenberg, Douglas Reynolds, Elliot Singer, Lisa Mason, Jaime Hernandez-Cordero
In 2017, NIST conducted the most recent in an ongoing series of Language Recognition Evaluations (LRE) meant to foster research in robust text- and speaker-independent language recognition, as well as measure performance of current state-of-the-art systems…

The 2016 NIST Speaker Recognition Evaluation

August 20, 2017
Author(s)
Seyed Omid Sadjadi, Timothee N. Kheyrkhah, Audrey N. Tong, Craig S. Greenberg, Douglas A. Reynolds, Elliot Singer, Lisa Mason, Jaime Hernandez-Cordero
In 2016, NIST conducted the most recent in an ongoing series of speaker recognition evaluations (SRE) to foster research in robust text-independent speaker recognition, as well as measure performance of the current state-of-the-art systems, targeting in…

The Impact of Data Dependency on Speaker Recognition Evaluation

February 8, 2017
Author(s)
Jin Chu Wu, Alvin F. Martin, Craig S. Greenberg, Raghu N. Kacker
The data dependency due to multiple use of the same subjects has an impact on the standard error (SE) of the detection cost function (DCF) in speaker recognition evaluation. The DCF is defined as a weighted sum of the probabilities of type I and type II…
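For reference, the detection cost function used throughout the NIST speaker recognition evaluations takes the familiar weighted-sum form written below in standard SRE notation, combining the two error types (miss and false alarm). The cost weights and target prior are evaluation parameters; the specific values analyzed in this paper are not reproduced here.

    % NIST-style detection cost function at decision threshold theta:
    % a weighted sum of the miss and false-alarm error probabilities.
    \[
    C_{\mathrm{Det}}(\theta) =
        C_{\mathrm{Miss}}\, P_{\mathrm{Miss}}(\theta)\, P_{\mathrm{Target}}
      + C_{\mathrm{FA}}\, P_{\mathrm{FA}}(\theta)\, \bigl(1 - P_{\mathrm{Target}}\bigr)
    \]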

Results of The 2015 NIST Language Recognition Evaluation

September 12, 2016
Author(s)
Hui Zhao, Desire Banse, G R. Doddington, Craig Greenberg, Audrey N. Tong, John M. Howard, Alvin F. Martin, Jaime Hernandez-Cordero, Lisa Mason, Douglas A. Reynolds, Elliot Singer
In 2015, NIST conducted the most recent in an ongoing series of Language Recognition Evaluations (LRE) meant to foster research in language recognition. The 2015 Language Recognition Evaluation (LRE15) featured 20 target languages grouped into 6 language…

A New International Data Science Program

August 4, 2016
Author(s)
Bonnie J. Dorr, Craig Greenberg, Peter Fontana, Mark A. Przybocki, Marion Le Bras, Cathryn A. Ploehn, Oleg Aulov, Wo L. Chang
This article sets out to examine foundational issues in data science, including current challenges, basic research questions, and expected advances, as the basis for a new Data Science Research Program and associated Data Science Evaluation (DSE) series…

Data Science Research Program at NIST Information Access Division

August 4, 2016
Author(s)
Bonnie J. Dorr, Craig Greenberg, Peter Fontana, Mark A. Przybocki, Marion Le Bras, Cathryn A. Ploehn, Oleg Aulov, Edmond J. Golden III, Wo L. Chang
We examine foundational issues in data science, including current challenges, basic research questions, and expected advances, as the basis for a new Data Science Initiative and evaluation series, introduced by the Information Access Division at the…

Summary of the 2015 NIST Language Recognition i-Vector Machine Learning Challenge

June 24, 2016
Author(s)
Audrey N. Tong, Craig S. Greenberg, Alvin F. Martin, Desire Banse, John M. Howard, G R. Doddington, Danilo B. Romero, Douglas A. Reynolds, Lisa Mason, Tina Kohler, Jaime Hernandez-Cordero, Elliot Singer, Alan McCree
In 2015, NIST coordinated the first language recognition evaluation (LRE) that used i-vectors as input, with the goals of attracting researchers outside of the speech processing community to tackle the language recognition problem, exploring new ideas in…
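As background for readers outside the speech community: an i-vector is a fixed-length vector embedding of a recording, so distributing i-vectors (rather than audio) reduces language recognition to ordinary vector classification. A common minimal baseline, sketched below with assumed dimensions and toy data, is cosine scoring against per-language mean i-vectors; challenge participants typically layered far stronger classifiers on top.

    import numpy as np

    def cosine_score(ivector, language_means):
        # Length-normalize, then take the inner product with each normalized
        # per-language mean; return the top-scoring language label.
        v = ivector / np.linalg.norm(ivector)
        scores = {lang: float(v @ (m / np.linalg.norm(m)))
                  for lang, m in language_means.items()}
        return max(scores, key=scores.get)

    # Toy usage: 400-dimensional vectors (dimension assumed, not from the LRE).
    rng = np.random.default_rng(0)
    means = {"eng": rng.normal(size=400), "spa": rng.normal(size=400)}
    test = means["eng"] + 0.1 * rng.normal(size=400)  # noisy "English" vector
    print(cosine_score(test, means))  # expected: eng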

The NIST IAD Data Science Evaluation Series: Part of the NIST Information Access Division Data Science Research Program

October 29, 2015
Author(s)
Bonnie J. Dorr, Craig Greenberg, Peter Fontana, Mark A. Przybocki, Marion Le Bras, Cathryn A. Ploehn, Oleg Aulov, Wo L. Chang
The Information Access Division (IAD) of the National Institute of Standards and Technology (NIST) launched a new Data Science Research Program (DSRP) in the fall of 2015. This research program focuses on evaluation-driven research and will establish a new…

The NIST IAD Data Science Research Program

October 19, 2015
Author(s)
Bonnie J. Dorr, Peter C. Fontana, Craig S. Greenberg, Mark A. Przybocki, Marion Le Bras, Cathryn A. Ploehn, Oleg Aulov, Martial Michel, Edmond J. Golden III, Wo L. Chang
We examine foundational issues in data science, including current challenges, basic research questions, and expected advances, as the basis for a new Data Science Research Program and evaluation series, introduced by the Information Access Division of the…

Analysis of the Second Phase of the 2013-2014 i-Vector Machine Learning Challenge

September 10, 2015
Author(s)
Desire Banse, G R. Doddington, Craig Greenberg, John M. Howard, Alvin F. Martin, Daniel Garcia-Romero, John J. Godfrey, Jaime Hernandez-Cordero, Lisa Mason, Alan McCree, Douglas A. Reynolds
In late 2013 and 2014, the National Institute of Standards and Technology (NIST) coordinated an i-vector challenge utilizing data from previous NIST Speaker Recognition Evaluations. Following the evaluation period, a second phase of the challenge was held…