The Media Forensics Challenge 2019 (MFC2019) Evaluation is the third annual evaluation to support research and help advance the state of the art for image and video forensics technologies. The MFC2019 evaluation is currently being designed, building on experience from the MFC2018 Evaluation. We expect to continue support for the following tasks:
- Image Manipulation Detection and Localization (Image MDL) - Given a single probe image, detect whether the probe was manipulated and provide localization mask(s) indicating where the image was modified (a hypothetical mask-generation sketch follows this task list).
- Splice Detection and Localization (Image SDL) - Given two images, detect if a region of a donor image has been spliced into a probe image and, if so, provide two masks indicating the region(s) of the donor image that were spliced into the probe and the region(s) of the probe image that were spliced from the donor.
- Provenance Filtering (PF) - Given a probe image and a set of images representing world images, return the top N images from the world data set that contributed to creating the probe image.
- Provenance Graph Building (PGB) - Produce a phylogeny graph for a probe image.
- Variation 1: End-to-End Provenance - Provenance output produced by processing the large world data set of images.
- Variation 2: Oracle Filter Provenance - Provenance output produced from a NIST-provided small (200 image) collection of images.
- Video Manipulation Detection (Video MDL) - Detect if the probe video was manipulated.
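To make the localization outputs above concrete, the sketch below shows one way a system might turn a per-pixel manipulation score map into a grayscale mask image. The score map, the threshold, and the black-equals-manipulated convention are illustrative assumptions; the MFC2019 evaluation plan defines the required mask format and scoring conventions.

```python
# Illustrative sketch only: convert a per-pixel manipulation score map into a
# binary localization mask. The threshold and the "black = manipulated"
# convention are assumptions, not the MFC2019 specification.
import numpy as np
from PIL import Image

def scores_to_mask(score_map: np.ndarray, threshold: float = 0.5) -> Image.Image:
    """Threshold a [0, 1] score map into a black/white mask image."""
    manipulated = score_map >= threshold
    # Black (0) marks pixels believed manipulated, white (255) everything else.
    mask = np.where(manipulated, 0, 255).astype(np.uint8)
    return Image.fromarray(mask, mode="L")

if __name__ == "__main__":
    # A random score map stands in for a real detector's output.
    rng = np.random.default_rng(0)
    fake_scores = rng.random((256, 256))
    scores_to_mask(fake_scores).save("probe_mask.png")
```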
Prospective MFC participants can subscribe to the MFC mailing list for announcements by sending a request to the contact below and can take part in the evaluation by completing the registration and license agreements below.
Signup Procedure
- Read the evaluation plan, when it becomes available, to become familiar with the evaluation.
- Sign and return the MediFor Data Use Agreement to mfc_poc [at] nist.gov
- Sign and return the Media Forensics Challenge 2019 Participation Agreement to mfc_poc [at] nist.gov
- Complete a Dry Run Evaluation.
- The dry run is an opportunity for developers to make sure they are able to generate valid system output that can be scored with the NIST scoring tools. The actual performance of the system is not of interest during the dry run, so developers may use any method to generate their system output (e.g., a random system, or a system trained on the dry run data). Instructions will be posted soon.
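As a concrete (and deliberately trivial) example, a random system for the image manipulation detection task could be sketched as below. The column names, the "|" delimiter, and the file layout are assumptions modeled loosely on earlier MFC output files; the MFC2019 evaluation plan and the NIST scoring tool documentation define the actual required format.

```python
# Illustrative sketch only: write a random-confidence system output file for a
# list of probe images. Column names, the "|" delimiter, and the layout are
# assumptions; consult the evaluation plan for the real format.
import csv
import random

def write_random_output(probe_ids, out_path="random_system_output.csv"):
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f, delimiter="|")
        writer.writerow(["ProbeFileID", "ConfidenceScore"])
        for probe_id in probe_ids:
            # A uniform random score stands in for a real detector's decision.
            writer.writerow([probe_id, f"{random.uniform(0.0, 1.0):.6f}"])

if __name__ == "__main__":
    write_random_output(["probe_0001", "probe_0002", "probe_0003"])
```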
Evaluation Tools
NIST-provided tools are described in the Evaluation Infrastructure Setup Instructions.
Contact
mfc_poc [at] nist.gov