
2012 TRECVID Multimedia Event Detection Track

The Multimedia Event Detection (MED) evaluation track is part of the TRECVID Evaluation. The 2012 evaluation is the second full MED evaluation, following the 2011 evaluation and the 2010 pilot evaluation.

The goal of MED is to assemble core detection technologies into a system that can search multimedia recordings for user-defined events based on pre-computed metadata. The metadata stores developed by the systems are expected to be sufficiently general to permit re-use for subsequent user defined events.

A user searching for events in multimedia material may be interested in a wide variety of potential events. Since building special-purpose detectors for every possible event a priori is intractable, technology is needed that can take a human-generated definition of an event as input and use it to search a collection of multimedia clips. The MED evaluation series defines events via an event kit, which consists of an event name, definition, explication (a textual exposition of the terms and concepts), evidential descriptions, and illustrative video exemplars.
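
To make the kit contents concrete, the sketch below models the event-kit fields named above as a small Python data structure. The field names are illustrative only; the authoritative structure is whatever the kits distributed by NIST/LDC define.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class EventKit:
        # Illustrative field names only; the authoritative structure is
        # defined by the event kits distributed by NIST/LDC.
        name: str                    # event name, e.g. "Birthday party"
        definition: str              # what counts as an occurrence of the event
        explication: str             # textual exposition of the terms and concepts
        evidential_description: str  # observable evidence (scenes, objects, sounds, ...)
        exemplars: List[str] = field(default_factory=list)  # illustrative video clips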

The major changes for the 2012 evaluation include:

  • a new test collection of over 4000 hours of multimedia clips,
  • 10 new test events,
  • the introduction of the Pilot Ad Hoc event task with 5 new events,
  • a new evaluation condition reducing the set of event exemplars, and
  • a requirement that new participants complete the Dry Run evaluation in order to receive this year's test data.

MED Task Definitions

The 2012 evaluation will support two evaluation tasks:

  • Pre-Specified Event MED: WITH knowledge of the pre-specified test event kits, construct a metadata store for the test videos and then for each pre-specified test event kit, search the metadata store to detect occurrences of the test event.
  • Ad Hoc Event MED: WITHOUT knowledge of the ad hoc test event kits, construct a metadata store for the test videos and then for each ad hoc test event kit, search the metadata store to detect occurrences of the test event.
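
Both tasks share the same two-phase shape: an event-independent metadata pass over the test videos, followed by a per-event search of the frozen store. The Python sketch below illustrates that shape only; every function is a hypothetical stand-in, not part of any NIST tooling.

    from typing import Dict, List

    def extract_metadata(clip_path: str) -> dict:
        # Stand-in for real metadata extraction (audio-visual features,
        # detected concepts, ASR/OCR text, ...), run once per clip.
        return {"clip": clip_path}

    def build_metadata_store(clip_paths: List[str]) -> Dict[str, dict]:
        # Phase 1: event-independent metadata, computed once and intended
        # to be re-used for every subsequent event kit.
        return {p: extract_metadata(p) for p in clip_paths}

    def search_store(store: Dict[str, dict], event_name: str) -> Dict[str, float]:
        # Phase 2: per-event search over the frozen store. The constant
        # score is a placeholder; a real system would apply an event model
        # built from the kit's definition and exemplars.
        return {clip: 0.0 for clip in store}

    # Pre-Specified MED: the event kits are known before Phase 1 runs.
    # Ad Hoc MED: the same two phases, but the kits arrive only after the
    # store is frozen, so the metadata must be general enough to re-use.
    store = build_metadata_store(["clip001.mp4", "clip002.mp4"])
    scores = search_store(store, "Birthday party")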

The Pre-Specified event task is identical to the MED11 task. Participants must build a system for at least one of the test events in order to participate in the evaluation and the TRECVID conference.

Information Dissemination

NIST maintains an email discussion list to disseminate information. Send requests to join the list to med_poc at nist dot gov.

Evaluation Plan and Data Use Rules

MED system performance will be evaluated as specified in the evaluation plan. The evaluation plan contains the rules, protocols, metric definitions, scoring instructions, and submission instructions. The latest final version is V03.

MED12-EvalPlan-V03 (V03 compared to V01)

In addition, the MED '12 FAQ is provided to answer specific questions about the evaluation plan.

Data Resources

A collection of Internet multimedia (i.e., clips containing both audio and video streams) will be provided to registered MED participants. The data, collected by the Linguistic Data Consortium, consists of publicly available, user-generated content posted to various Internet video hosting sites.

MED12 participants will receive the following training resources:

  • the MED10 data set,
  • the MED11 DEV-T set,
  • the MED11 Test Collection, and
  • a new testing set called the "PROGRESS Test" Collection. This data set will be made available to previous MED11 participants who submitted results to NIST and to new participants who complete the MED12 Dry Run.

The evaluation plan and license information will specify usage rules of the data resources in full detail.

2012 Pre-Specified Event Kits

Twenty events will be used in the 2012 Pre-Specified event task: 10 MED11 Testing events and 10 new events. The table below contains the event names.

MED11 Test Event Names             MED12 New Events
Birthday party                     Attempting a bike trick
Changing a vehicle tire            Cleaning an appliance
Flash mob gathering                Dog show
Getting a vehicle unstuck          Giving directions to a location
Grooming an animal                 Marriage proposal
Making a sandwich                  Renovating a home
Parade                             Rock climbing
Parkour                            Town hall meeting
Repairing an appliance             Winning a race without a vehicle
Working on a sewing project        Working on a metal crafts project

Event names need to be interpreted in the full context of the event definitions that will be made available as part of the event kits.

Video data

Clips will be provided as MPEG-4 formatted files, with video encoded to the H.264 standard and audio encoded using MPEG-4's Advanced Audio Coding (AAC) standard.
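
Sites that want to sanity-check a downloaded clip against these specifications can inspect its streams with ffprobe (part of FFmpeg, assumed to be installed), for example via a small Python wrapper like the sketch below; the clip name is hypothetical.

    import subprocess

    def codec_of(path: str, stream: str) -> str:
        # Return the codec name of the selected stream ("v:0" for the first
        # video stream, "a:0" for the first audio stream) using ffprobe.
        out = subprocess.run(
            ["ffprobe", "-v", "error", "-select_streams", stream,
             "-show_entries", "stream=codec_name",
             "-of", "default=noprint_wrappers=1:nokey=1", path],
            capture_output=True, text=True, check=True)
        return out.stdout.strip()

    # Expect "h264" for the video stream and "aac" for the audio stream.
    print(codec_of("HVC123456.mp4", "v:0"))  # hypothetical clip name
    print(codec_of("HVC123456.mp4", "a:0"))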

Data Licensing

In order to obtain the MED10 and MED11 corpora, TRECVID-registered sites must complete an evaluation license with the LDC. Each site that requires access to the data, whether as part of a team or as a standalone researcher, must complete a license.

To complete the evaluation license follow these steps:

  1. Download the MED12 Evaluation License.
  2. Return the completed license to LDC's Membership Office via email at ldc at ldc dot upenn dot edu. Alternatively you may fax the completed license to the LDC at 215-573-2175.
  3. When you send the completed license to the LDC, include the following information:
    • Registered TRECVID Team name
    • Site/organization name
    • Data contact person's name
    • Data contact person's email

The designated data contact person for each site will receive instructions from the LDC about the specific procedures for obtaining the data packages when they are released.

Dry Run Evaluation

The dry run period for MED will run until July 30, 2012, and dry run submissions will be accepted at any time during the period. The dry run is an opportunity for new developers to confirm that they can generate valid system output that can be scored with the NIST scoring tools, and it is required for any participant that has not completed a previous MED evaluation. As noted in the Data Resources section, sites will not receive the PROGRESS Test Collection (the evaluation data for 2012) until they complete the Dry Run.

The actual performance of the system is not of interest during the dry run, so developers may use any method to generate their system output, e.g., a random system or a system trained on the dry run data.
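
For example, a trivial random system such as the Python sketch below is enough to exercise the submission pipeline. The TrialID column name and the output layout shown here are assumptions for illustration; the authoritative input and output formats are defined in the evaluation plan and the scoring primer.

    import csv
    import random

    # Read the dry-run trial index and emit one random decision score per
    # trial. Column and output-file names are assumptions for illustration;
    # the authoritative formats are defined in the evaluation plan.
    with open("MED12_DRYRUN_TrialIndex.csv", newline="") as src, \
         open("dryrun_system_output.csv", "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.writer(dst)
        writer.writerow(["TrialID", "Score", "Decision"])
        for row in reader:
            score = random.random()
            writer.writerow([row["TrialID"], f"{score:.4f}",
                             "yes" if score >= 0.5 else "no"])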

The procedure for participating in the dry run is as follows:

  1. Obtain the data sets by completing the licensing agreement as specified above. The dry run will use files from the MED11 Test collection.
  2. Download the MED12 Dry Run database files.
  3. Run your MED system(s) on the trials specified in the MED12_DRYRUN_TrialIndex.csv file.
  4. Install the Evaluation Tools per the "Evaluation Tools" Section below.
  5. Package system outputs per the instructions in the current version (V03) of the evaluation plan and the scoring primer. This includes the following sub-steps.
    1. Building the submission directory structure.
    2. Validating your submission with the validator.
    3. Self-scoring your submission with the DEVA_cli scoring tool.
  6. Send the system outputs to NIST per the instructions in the evaluation plan.
  7. NIST will provide scoring reports to the site (and only the site).
  8. The site compares NIST output with self-scored output.

Evaluation Tools

Evaluation scripts supporting the MED evaluation are included in the NIST Framework for Detection Evaluations (F4DE) Toolkit, version 2.3.4 or later, found on the NIST MIG tools page.

The package contains an MED evaluation primer, found at DEVA/doc/TRECVid-MED11-ScoringPrimer.html within the distribution.

To obtain the TrialIndex file used to validate ADHOC submissions, download: PROGTEST-ADHOC12_20120507_TrialIndex.csv

Schedule

Consult the TRECVID Master schedule.

Revision History

  • Feb 21, 2012 - Initial page created.