
Manipulation Data Collection and Annotation Tool for Media Forensics

June 17, 2019
Author(s)
Eric Robertson, Haiying Guan, Mark Kozak, Yooyoung Lee, Amy Yates, Andrew Delgado, Daniel F. Zhou, Timothee N. Kheyrkhah, Jeff Smith, Jonathan G. Fiscus
With the increasing diversity and complexity of media forensics techniques, the evaluation of state-of-the-art detectors is impeded by the lack of metadata and manipulation-history ground truth. This paper presents a novel image/video manipulation

Open Speech Analytic Technologies Pilot Evaluation (OpenSAT) Pilot

February 27, 2019
Author(s)
Frederick R. Byers, Jonathan G. Fiscus, Seyed Omid Sadjadi, Gregory A. Sanders, Mark A. Przybocki
Open Speech Analytic Technologies Pilot Evaluation (OpenSAT) is a new speech analytic technology evaluation series organized by NIST that will begin with a pilot evaluation in the Spring of 2017. The pilot includes three tasks: Speech Activity Detection

MFC Datasets: Large-Scale Benchmark Datasets for Media Forensic Challenge Evaluation

January 11, 2019
Author(s)
Haiying Guan, Mark Kozak, Eric Robertson, Yooyoung Lee, Amy Yates, Andrew Delgado, Daniel F. Zhou, Timothee N. Kheyrkhah, Jeff Smith, Jonathan G. Fiscus
We provide a benchmark for digital media forensic challenge evaluations. A series of datasets is used to assess progress and analyze in depth the performance of diverse systems on different media forensic tasks over the last two years. The benchmark data

Overview of the NIST 2016 LoReHLT Evaluation

November 13, 2017
Author(s)
Audrey N. Tong, Lukasz L. Diduch, Jonathan G. Fiscus, Yasaman Haghpanah, Shudong Huang, David M. Joy, Kay Peterson, Ian M. Soboroff
Initiated in conjunction with DARPA's Low Resource Languages for Emergent Incidents (LORELEI) Program, the NIST LoReHLT (Low Resource Human Language Technology) evaluation series seeks to incubate research on fundamental natural language processing tasks

MediFor Nimble Challenge Evaluation 2017

August 23, 2017
Author(s)
Jonathan G. Fiscus, Haiying Guan, Yooyoung Lee, Amy Yates, Andrew Delgado, Daniel F. Zhou, David M. Joy, August L. Pereira
NIST presentation slides for DARPA MediFor Program One-Year PI Meeting

MediFor Nimble Challenge Evaluation

April 17, 2017
Author(s)
Jonathan G. Fiscus, Haiying Guan, Yooyoung Lee, Amy Yates, Andrew Delgado, Daniel F. Zhou, Timothee N. Kheyrkhah

TRECVID 2014 -- An Overview of the Goals, Tasks, Data, Evaluation Mechanisms, and Metrics

March 26, 2015
Author(s)
Paul D. Over, Jonathan G. Fiscus, Gregory A. Sanders, David M. Joy, Martial Michel, George Awad, Alan Smeaton, Wessel Kraaij, Georges Quenot
The TREC Video Retrieval Evaluation (TRECVID) 2014 was a TREC-style video analysis and retrieval evaluation, the goal of which remains to promote progress in content-based exploitation of digital video via open, metrics-based evaluation. Over the last