Nimble Challenge Evaluation 2018

The Nimble Challenge 2018 (NC2018) Evaluation is the second annual evaluation to support research and help advance the state of the art for image and video forensics technologies – technologies that determine the region and type of manipulations in imagery (image/video data) and the phylogenic process that modified the imagery. NC2018 is currently being designed, building on experience from the NC2017 Evaluation. We expect to continue support for the following tasks:

  • Image Manipulation Detection and Localization (Image MDL) - Given a single probe image, detect if the probe was manipulated and provide localization mask(s) indicating where the image was modified (an illustrative output sketch follows this list).
  • Splice Detection and Localization - Given two images, detect if a region of a donor image has been spliced into a probe image and, if so, provide two masks indicating the region(s) of the donor image that were spliced into the probe and the region(s) of the probe image that were spliced from the donor.
  • Provenance Filtering (PF) - Given a probe image and a set of images representing a world (i.e., a large collection of 1M+ images), return the top N images from the world data set that contributed to creating the probe image.
  • Provenance Graph Building (PGB) - Produce a phylogeny graph for a probe image (a conceptual sketch follows this list):
    • Variation 1: End-to-End Provenance - Provenance output produced by processing the large world data set (1M+ images) of images.
    • Variation 2: Oracle Filter Provenance - Provenance output produced from a NIST-provided small (200 image) collection of images.
  • Video Manipulation Detection (Video MDL) - Detect if the probe video was manipulated.
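
For the image detection and localization tasks, system output generally pairs a per-probe manipulation confidence score with a grayscale localization mask. The authoritative file formats and mask conventions are defined in the NC '17 Evaluation Plan; the Python sketch below is only a rough illustration, and its pipe-delimited columns, field names, and "0 means manipulated" mask convention are assumptions rather than the official specification.

    import csv
    import os

    import numpy as np
    from PIL import Image

    def write_mdl_record(probe_id, score, mask, out_csv, mask_path):
        """Save a grayscale localization mask and append one (probe, score, mask) row."""
        os.makedirs(os.path.dirname(mask_path), exist_ok=True)
        Image.fromarray(mask.astype(np.uint8), mode="L").save(mask_path)
        with open(out_csv, "a", newline="") as f:
            csv.writer(f, delimiter="|").writerow([probe_id, f"{score:.4f}", mask_path])

    if __name__ == "__main__":
        # Toy example: flag the center of a 256x256 probe as manipulated.
        mask = np.full((256, 256), 255, dtype=np.uint8)  # 255 = unmodified (assumed convention)
        mask[96:160, 96:160] = 0                         # 0 = manipulated (assumed convention)
        write_mdl_record("probe_0001", 0.87, mask, "image_mdl_output.csv",
                         "masks/probe_0001_mask.png")

For the splice task, the analogous record would carry two masks, one over the probe and one over the donor.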
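
The provenance tasks ask for graph-structured output: a phylogeny graph whose nodes are images and whose directed edges indicate that one image was derived from another. The submission schema is specified in the NC '17 Evaluation Plan; the JSON-style sketch below is purely conceptual, and all file names, keys, and confidence fields are hypothetical.

    import json

    # Hypothetical phylogeny graph for one probe: nodes are images drawn from
    # the probe and world sets; an edge means "target was derived from source".
    graph = {
        "probe": "probe_0001.jpg",
        "nodes": [
            {"id": "n0", "file": "world/base_00042.jpg",  "confidence": 0.95},
            {"id": "n1", "file": "world/donor_00117.jpg", "confidence": 0.90},
            {"id": "n2", "file": "probe_0001.jpg",        "confidence": 0.99},
        ],
        "edges": [
            {"source": "n0", "target": "n2", "confidence": 0.80},  # base image edited into the probe
            {"source": "n1", "target": "n2", "confidence": 0.75},  # donor region spliced into the probe
        ],
    }

    print(json.dumps(graph, indent=2))

In the End-to-End variation the candidate nodes must first be retrieved from the 1M+ world set (the Provenance Filtering problem); in the Oracle Filter variation the 200-image candidate pool is supplied by NIST.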

Prospective Nimble participants can subscribe to the Nimble mailing list for announcements by sending a request to the contact below and can take part in the evaluation by completing the registration and license agreements below.

There are many exciting changes planned for this evaluation cycle: more development resources, a scoring server with a leaderboard-style evaluation, manipulation-operation-specific system evaluations that allow research to focus on a specific form of manipulation, and detailed, automatic diagnostics of system successes and failures.

Tentative Schedule

Dates | Development Resources
Now | NC 2017 Data Resources available
August 2017 | Schedule finalized; Evaluation Plan revisions posted
December 2017 | Dry Run Evaluation Period ends
April 2018 | Evaluation period (2-week period + world processing)
June 2018 | Evaluation Workshop
June 18-23, 2018 | CVPR

Documentation

The planning for NC '18 has just begun. Please share the MFC’18 Call for Participation Flyer with your colleagues. The changes between NC '17 and NC '18 are anticipated to be minimal. Please see the NC '17 Evaluation Plan to begin understanding the structure of the evaluation tasks, data, and metrics.

Signup Procedure

  1. Read the evaluation plan when it is available to become familiar with the evaluation.
  2. Sign and return the MediFor Data Use Agreement to nimble_poc [at] nist.gov
  3. Sign and return the Nimble 2018 Participation Agreement to nimble_poc [at] nist.gov
  4. Complete a Dry Run Evaluation.
    • The dry run is an opportunity for developers to make sure they are able to generate valid system output that can be scored with the NIST scoring tools. The actual performance of the system is not of interest during the dry run, so developers may use any method to generate their system output, e.g., a random system or a system trained on the dry run data (a toy example follows). Instructions will be posted soon.
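
As a concrete illustration of that latitude, the toy Python sketch below emits a random detection confidence for every probe listed in an index file. It is a sketch only: the index and output column names and the pipe-delimited layout are placeholders, and the real formats are specified in the NC '17 Evaluation Plan and the NIST scoring tools.

    import csv
    import random

    def random_detection_run(index_csv, output_csv, seed=0):
        """Write a random manipulation-detection score for each probe in the index."""
        rng = random.Random(seed)
        with open(index_csv, newline="") as fin, open(output_csv, "w", newline="") as fout:
            reader = csv.DictReader(fin, delimiter="|")
            writer = csv.writer(fout, delimiter="|")
            writer.writerow(["ProbeFileID", "ConfidenceScore"])  # placeholder header names
            for row in reader:
                writer.writerow([row["ProbeFileID"], f"{rng.random():.6f}"])

    if __name__ == "__main__":
        random_detection_run("dryrun_index.csv", "dryrun_system_output.csv")

Such a system scores at chance, which is exactly what the dry run is for: confirming that the output is well formed and flows through the scoring tools, not measuring performance.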

Data Resources

By signing up for the evaluation, you’ll get a wealth of data resources to conduct your media forensics research. The data will be designated as one of: (1) development resources that are accessible at signup, (2) past-year evaluation resources provided after completing a dry run evaluation, and (3) the NC’18 evaluation resources, available during the formal evaluation in April ’18. Here’s a quick summary of the resources:

Data Set Type | Data Set Name | Number of Forensic Probes (true manipulations and non-manipulations) | World Data Set Size | Reference Annotations | Supported Tasks
Development | NC2016 – both Nimble Science and Nimble Web | 624 | – | Full | MDL
Development | NC’17 Development Image Data | 3,500 | 100,000 | Full | All
Development | NC’17 Development Video Data | 213 | – | Full | All
Development | NC’18 Development Image and Video Data | TBD | TBD | Full | TBD
Past Evaluations | NC’17 Evaluation Images | 10,000 | 1,000,000 | Full for 1/3 subset | All
Past Evaluations | NC’17 Evaluation Videos | 1,000 | – | Full for 1/3 subset | All
NC’18 Evaluation | NC’18 Evaluation Images | 50,000 | 5,000,000 | – | –
NC’18 Evaluation | NC’18 Evaluation Videos | 5,000 | – | – | –

Evaluation Tools

NIST-provided tools are described in the Evaluation Infrastructure Setup Instructions.

Contact

nimble_poc [at] nist.gov

Created April 18, 2017, Updated July 31, 2017