
Displaying 1 - 25 of 89

Cloud Test Data Creation and Population Document

June 7, 2023
Richard Ayers
This test data creation and population document provides an overview of the creation, population, and documentation of artifacts created for cloud-based applications specified in Version 1 of Cloud & Remotely Stored Data Extraction (CDX) Using Account

Who Is That? Perceptual Expertise on Other-Race Face Comparisons, Disguised Face Comparisons, and Face Memory

April 20, 2023
Amy Yates, Jacqueline Cavazos, Geraldine Jeckeln, Ying Hu, Eilidh Noyes, Carina Hahn, Alice O'Toole, P. Jonathon Phillips
Forensic facial specialists identify faces more accurately than untrained participants on tests using high quality images of faces. Whether this superiority holds in more challenging conditions is not known. Here, we measured performance for forensic

OpenMFC 2022 Evaluation Program

January 3, 2023
Haiying Guan, Baptiste Chocot, Ilia Ghorbanian Bajgiran, Lukas Diduch, Yooyoung Lee, Christopher Tu

Report of the Digital Evidence Task Group Quality Study

December 15, 2022
Barbara Guttman, Kelly Sauerwein, James R. Lyle
The report describes the results of a project performed by a study group from the Organization of Scientific Area Committees (OSAC) for Forensic Science Digital Evidence Subcommittee to identify the quality practices and management systems that are most

Digital Investigation Techniques: A NIST Scientific Foundation Review

November 21, 2022
James R. Lyle, Barbara Guttman, John Butler, Kelly Sauerwein, Christina Reed, Corrine Lloyd
This document is an assessment of the scientific foundations of digital forensics. We examined descriptions of digital investigation techniques from peer-reviewed sources, academic and classroom materials, technical guidance from professional organizations

GLFF: Global and Local Feature Fusion for Face Forgery Detection

November 16, 2022
Haiying Guan, Yan Ju, Shan Jia, Jialing Cai, Siwei Lyu
With the rapid development of deep generative models (such as Generative Adversarial Networks and auto-encoders), AI-synthesized images of human faces are now of such high quality that humans can hardly distinguish them from pristine ones. Although

Results from a Black-Box Study for Digital Examiners

February 17, 2022
Barbara Guttman, Mary T. Laamanen, Craig Russell, James Darnell, Chris Atha
The National Institute of Standards and Technology (NIST) conducted a black-box study in conjunction with a scientific foundation review documented in NISTIR 8354 – Digital Investigation Techniques: A NIST Scientific Foundation Review (initially released

Open Media Forensics Challenge (OpenMFC) 2020-2021: Past, Present, and Future

September 29, 2021
Haiying Guan, Yooyoung Lee, Lukas Diduch, Jesse Zhang, Ilia Ghorbanian Bajgiran, Timothee Kheyrkhah, Peter Fontana, Jonathan G. Fiscus
This document describes the online leaderboard public evaluation program, Open Media Forensics Challenge (OpenMFC) 2021-2022. In the report, first, the introduction, objectives, challenges, contributions, and achievements of the evaluation program are

Dataset construction challenges for digital forensics

July 29, 2021
James R. Lyle, Graeme Horsman
As the digital forensic field develops, taking steps towards ensuring a level of reliability in the processes implemented by its practitioners, emphasis on the need for effective testing has increased. In order to test, test datasets are required, but

User Guide for NIST Media Forensic Challenge (MFC) Datasets

July 6, 2021
Haiying Guan, Andrew Delgado, Yooyoung Lee, Amy Yates, Daniel Zhou, Timothee N. Kheyrkhah, Jonathan G. Fiscus
NIST released a set of Media Forensic Challenge (MFC) datasets developed in the DARPA MediFor (Media Forensics) project to the public over the past 5 years. More than 300 individuals and 150 organizations from 26 countries and regions worldwide use our datasets

Media Forensics Challenge Image Provenance Evaluation and State-of-the-Art Analysis on Large-Scale Benchmark Datasets

October 26, 2020
Xiongnan Jin, Yooyoung Lee, Jonathan G. Fiscus, Haiying Guan, Amy Yates, Andrew Delgado, Daniel F. Zhou
With the development of storage, transmission, editing, and sharing tools, digital forgery images are propagating rapidly. The need for image provenance analysis has never been more timely. Typical applications are content tracking, copyright enforcement

NIST Media Forensic Challenge (MFC) Evaluation 2020 - 4th Year DARPA MediFor PI meeting

July 15, 2020
Jonathan G. Fiscus, Haiying Guan, Yooyoung Lee, Amy Yates, Andrew Delgado, Daniel F. Zhou, Timothee N. Kheyrkhah, Xiongnan Jin
The presentation slides summarize the NIST Media Forensic Challenge (MFC) Evaluation and present MFC20 evaluation reports at the DARPA MediFor PI meeting. The slides contain five parts: Overview, Image Tasks, Video Tasks, Camera ID Verification Tasks, Provenance

Standardization of File Recovery Classification and Authentication

December 1, 2019
Eoghan Casey, Alexander J. Nelson, Jessica Hyde
Digital forensics can no longer tolerate software that cannot be relied upon to perform specific functions such as file recovery. The root of this problem is a lack of clearly defined software requirements, which compels users and tool testers to make

Manipulation Data Collection and Annotation Tool for Media Forensics

June 17, 2019
Eric Robertson, Haiying Guan, Mark Kozak, Yooyoung Lee, Amy Yates, Andrew Delgado, Daniel F. Zhou, Timothee N. Kheyrkhah, Jeff Smith, Jonathan G. Fiscus
With the increasing diversity and complexity of media forensics techniques, the evaluation of state-of-the-art detectors is impeded by the lack of metadata and manipulation-history ground truth. This paper presents a novel image/video manipulation

MFC Datasets: Large-Scale Benchmark Datasets for Media Forensic Challenge Evaluation

January 11, 2019
Haiying Guan, Mark Kozak, Eric Robertson, Yooyoung Lee, Amy Yates, Andrew Delgado, Daniel F. Zhou, Timothee N. Kheyrkhah, Jeff Smith, Jonathan G. Fiscus
We provide a benchmark for digital media forensic challenge evaluations. A series of datasets is used to assess progress and deeply analyze the performance of diverse systems on different media forensic tasks across the last two years. The benchmark data