Media Forensics Challenge Image Provenance Evaluation and State-of-the-Art Analysis on Large-Scale Benchmark Datasets
Published
Author(s)
Xiongnan Jin, Yooyoung Lee, Jonathan G. Fiscus, Haiying Guan, Amy Yates, Andrew Delgado, Daniel F. Zhou
Abstract
With the development of storage, transmission, editing, and sharing tools, digitally forged images are propagating rapidly, and the need for image provenance analysis has never been more pressing. Typical applications include content tracking, copyright enforcement, and forensic reasoning. However, large-scale image provenance datasets that contain diverse manipulation history graphs with various manipulation operations and rich metadata are still lacking; this is one of the major factors hindering the development of image provenance analysis techniques. To address this issue, we introduce large-scale benchmark datasets for provenance analysis, namely the Media Forensics Challenge-Provenance (MFC-Prov) datasets. We design two provenance tasks along with their evaluation metrics, and we conduct extensive analysis of system accuracy on our datasets.
Jin, X., Lee, Y., Fiscus, J., Guan, H., Yates, A., Delgado, A. and Zhou, D. (2020), Media Forensics Challenge Image Provenance Evaluation and State-of-the-Art Analysis on Large-Scale Benchmark Datasets, NIST Interagency/Internal Report (NISTIR), National Institute of Standards and Technology, Gaithersburg, MD, [online], https://doi.org/10.6028/NIST.IR.8325