Publications by: Timothee Kheyrkhah (IntlCtr)


User Guide for NIST Media Forensic Challenge (MFC) Datasets

July 6, 2021
Author(s)
Haiying Guan, Andrew Delgado, Yooyoung Lee, Amy Yates, Daniel Zhou, Timothee N. Kheyrkhah, Jonathan G. Fiscus
Over the past five years, NIST has released to the public a set of Media Forensic Challenge (MFC) datasets developed under the DARPA MediFor (Media Forensics) program. More than 300 individuals and 150 organizations from 26 countries and regions worldwide use our datasets…

NIST Media Forensic Challenge (MFC) Evaluation 2020 - 4th Year DARPA MediFor PI meeting

July 15, 2020
Author(s)
Jonathan G. Fiscus, Haiying Guan, Yooyoung Lee, Amy Yates, Andrew Delgado, Daniel F. Zhou, Timothee N. Kheyrkhah, Xiongnan Jin
These presentation slides summarize the NIST Media Forensic Challenge (MFC) Evaluation and present the MFC20 evaluation reports from the DARPA MediFor PI meeting. The slides contain five parts: Overview, Image Tasks, Video Tasks, Camera ID Verification Tasks, and Provenance.

2018 MediFor Challenge

July 23, 2019
Author(s)
Jonathan G. Fiscus, Haiying Guan, Andrew Delgado, Timothee N. Kheyrkhah, Yooyoung Lee, Daniel F. Zhou, Amy Yates
Media forensics is the science and practice of determining the authenticity and establishing the integrity of audio and visual media. DARPA's Media Forensics (MediFor) program brings together world-class researchers to develop technologies for the…

Manipulation Data Collection and Annotation Tool for Media Forensics

June 17, 2019
Author(s)
Eric Robertson, Haiying Guan, Mark Kozak, Yooyoung Lee, Amy Yates, Andrew Delgado, Daniel F. Zhou, Timothee N. Kheyrkhah, Jeff Smith, Jonathan G. Fiscus
With the increasing diversity and complexity of media forensics techniques, the evaluation of state-of-the-art detectors is impeded by a lack of metadata and manipulation-history ground truth. This paper presents a novel image/video manipulation…

MFC Datasets: Large-Scale Benchmark Datasets for Media Forensic Challenge Evaluation

January 11, 2019
Author(s)
Haiying Guan, Mark Kozak, Eric Robertson, Yooyoung Lee, Amy Yates, Andrew Delgado, Daniel F. Zhou, Timothee N. Kheyrkhah, Jeff Smith, Jonathan G. Fiscus
We provide a benchmark for digital media forensic challenge evaluations. A series of datasets is used to assess progress and analyze in depth the performance of diverse systems on different media forensic tasks over the last two years. The benchmark data…

Performance Analysis of the 2017 NIST Language Recognition Evaluation

September 2, 2018
Author(s)
Omid Sadjadi, Timothee N. Kheyrkhah, Craig Greenberg, Douglas A. Reynolds, Elliot Singer, Lisa Mason, Jaime Hernandez-Cordero
The 2017 NIST language recognition evaluation (LRE) was held in the autumn of 2017. As in past LREs, the basic task in LRE17 was language detection, with an emphasis on discriminating closely related languages (14 in total) selected from 5…

The 2017 NIST Language Recognition Evaluation

June 26, 2018
Author(s)
Seyed Omid Sadjadi, Timothee N. Kheyrkhah, Audrey N. Tong, Craig S. Greenberg, Douglas Reynolds, Elliot Singer, Lisa Mason, Jaime Hernandez-Cordero
In 2017, NIST conducted the most recent in an ongoing series of Language Recognition Evaluations (LRE) meant to foster research in robust text- and speaker-independent language recognition, as well as to measure the performance of current state-of-the-art systems…

The 2016 NIST Speaker Recognition Evaluation

August 20, 2017
Author(s)
Seyed Omid Sadjadi, Timothee N. Kheyrkhah, Audrey N. Tong, Craig S. Greenberg, Douglas A. Reynolds, Elliot Singer, Lisa Mason, Jaime Hernandez-Cordero
In 2016, NIST conducted the most recent in an ongoing series of speaker recognition evaluations (SRE) to foster research in robust text-independent speaker recognition, as well as to measure the performance of current state-of-the-art systems, targeting in…

MediFor Nimble Challenge Evaluation

April 17, 2017
Author(s)
Jonathan G. Fiscus, Haiying Guan, Yooyoung Lee, Amy Yates, Andrew Delgado, Daniel F. Zhou, Timothee N. Kheyrkhah