
Media Forensics Challenge 2018

The Media Forensics Challenge 2018 (MFC2018) Evaluation is the second annual evaluation to support research and advance the state of the art in image and video forensics: technologies that determine the region and type of manipulations in imagery (image/video data) and the phylogenic process that modified the imagery. The MFC2018 evaluation is currently being designed, building on experience from the NC2017 Evaluation. We expect to continue support for the following tasks:

  • Image Manipulation Detection and Localization (Image MDL) - Given a single probe image, detect whether the probe was manipulated and provide localization mask(s) indicating where the image was modified.
  • Splice Detection and Localization (Image SDL) - Given two images, detect whether a region of a donor image has been spliced into a probe image and, if so, provide two masks indicating the region(s) of the donor image that were spliced into the probe and the region(s) of the probe image that were spliced from the donor.
  • Provenance Filtering (PF) - Given a probe image and a set of images representing a world (a large collection of 5M+ images), return the top N images from the world data set that contributed to creating the probe image (see the sketch after this list).
  • Provenance Graph Building (PGB) - Produce a phylogeny graph for a probe image:
    • Variation 1: End-to-End Provenance - Provenance output produced by processing the large world data set (5M+ images).
    • Variation 2: Oracle Filter Provenance - Provenance output produced from a NIST-provided small collection of 200 images.
  • Video Manipulation Detection (Video MDL) - Detect whether the probe video was manipulated.
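
As a concrete illustration of the Provenance Filtering task, the sketch below ranks a world collection against a probe image with a simple perceptual hash and returns the top-N matches. The average-hash heuristic, directory layout, and file names are assumptions made only for illustration; the official input/output formats are defined in the MFC18 Evaluation Plan.

    # Minimal Provenance Filtering sketch: rank world images by perceptual
    # similarity to a probe. The hashing scheme and file layout here are
    # illustrative assumptions, not the official MFC18 interface.
    from pathlib import Path
    from PIL import Image

    def average_hash(path, size=8):
        """64-bit aHash: downscale to 8x8 grayscale, threshold at the mean."""
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        return sum(1 << i for i, p in enumerate(pixels) if p > mean)

    def top_n(probe_path, world_dir, n=100):
        """Return the n world images closest to the probe by Hamming distance."""
        probe_hash = average_hash(probe_path)
        scored = [(bin(probe_hash ^ average_hash(p)).count("1"), p.name)
                  for p in Path(world_dir).glob("*.jpg")]
        return sorted(scored)[:n]

    if __name__ == "__main__":
        for dist, name in top_n("probe.jpg", "world/", n=10):
            print(dist, name)

A real 5M+ image world set would require an indexed nearest-neighbor search rather than this linear scan, but the input/output shape of the task is the same.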

Prospective MFC participants can subscribe to the MFC mailing list for announcements by sending a request to the contact below and can take part in the evaluation by completing the registration and license agreements below.

Many exciting changes are planned for this evaluation cycle: more development resources, a scoring server with a leaderboard-style evaluation, manipulation-operation-specific system evaluations to enable manipulation-type-specific research, and detailed, automatic diagnostics of system successes and failures.

Tentative Schedule

Now
  • NC2017 Data Resources available
August 2017
  • Schedule finalized
  • Evaluation Plan revisions posted
September 13, 2017
  • Evaluation Plan revisions posted
September 29, 2017
  • Release MFCDev1
December 1, 2017
  • Scoring Server Online
    • Dry Run begins
    • Retest scoring begins
December 8, 2017
  • Retest/Dry Run Evaluation Period Ends
December 15, 2017
  • Retest results reported to teams and DARPA
January 19, 2018
  • Release MFCDev2
March 21, 2018
  • World data distributed
March 28, 2018
  • World data unlocked 2:00 pm EDT
April 4, 2018
  • Eval Probes distributed
April 11, 2018
  • Release of the image probe decryption keys
April 18, 2018
  • Tentative release of the video probe decryption keys
May 2, 2018
  • Team Submissions Due 2:00 pm EDT for:
    • Manipulation Detection and Localization
    • Splice Detection and Localization
    • Provenance Filtering
    • End-to-End Provenance Graph
May 4, 2018
  • Oracle Provenance Data released
May 9, 2018
  • Oracle Provenance Submissions Due 2:00 pm EDT
May 16, 2018
  • Scores released to participants

Documentation

The MFC18 Evaluation Plan details the structure of the evaluation tasks, data, and metrics.

Signup Procedure

  1. Read the evaluation plan, when it becomes available, to become familiar with the evaluation tasks, data, and metrics.
  2. Sign and return the MediFor Data Use Agreement to mfc_poc [at] nist.gov.
  3. Sign and return the Media Forensics Challenge 2018 Participation Agreement to mfc_poc [at] nist.gov.
  4. Complete a Dry Run Evaluation.
    • The dry run is an opportunity for developers to confirm that they can generate valid system output that can be scored with the NIST scoring tools. Actual system performance is not of interest during the dry run, so developers may use any method to generate output, e.g., a random system or a system trained on the dry run data (see the sketch below). Instructions will be posted soon.
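
For instance, a trivial random baseline along the following lines is enough to exercise the submission pipeline. The probe index file name and output columns shown here are hypothetical; the required submission format is specified in the MFC18 Evaluation Plan.

    # Hypothetical random-baseline output for a manipulation detection dry run.
    # File names and column names are illustrative assumptions only; follow
    # the evaluation plan for the actual submission format.
    import csv
    import random

    with open("probe_index.csv", newline="") as f_in, \
         open("system_output.csv", "w", newline="") as f_out:
        reader = csv.DictReader(f_in)  # assumes a "ProbeFileID" column
        writer = csv.writer(f_out)
        writer.writerow(["ProbeFileID", "ConfidenceScore"])
        for row in reader:
            # A random confidence in [0, 1]: forensically useless, but
            # structurally valid output for testing the scoring pipeline.
            writer.writerow([row["ProbeFileID"], f"{random.random():.6f}"])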

Data Resources

By signing up for the evaluation, you will receive a wealth of data resources for conducting media forensics research. The data fall into three categories: (1) development resources, accessible at signup; (2) past-year evaluation resources, provided after completing a dry run evaluation; and (3) the MFC’18 evaluation resources, provided during the formal evaluation in April 2018. Here is a quick summary of the resources:

Data Set Type        Data Set Name                                Forensic Probes*   World Data Set Size   Reference Annotations   Supported Tasks
Development          NC2016 (Nimble Science and Nimble Web)       624                –                     Full                    MDL
Development          NC’17 Development Image Data                 3,500              100,000               Full                    All
Development          NC’17 Development Video Data                 213                –                     Full                    All
Development          MFC’18 Development Image and Video Data      TBD                TBD                   Full                    TBD
Past Evaluations     NC’17 Evaluation Images                      10,000             1,000,000             Full for 1/3 subset     All
Past Evaluations     NC’17 Evaluation Videos                      1,000              –                     Full for 1/3 subset     All
MFC’18 Evaluation    MFC’18 Evaluation Images                     50,000             5,000,000             –                       –
MFC’18 Evaluation    MFC’18 Evaluation Videos                     5,000              –                     –                       –

*Probe counts include both true manipulations and non-manipulations.

Evaluation Tools

NIST-provided tools are described in the Evaluation Infrastructure Setup Instructions.

Contact

mfc_poc [at] nist.gov
