
TRECVID Multimedia Event Detection 2011 Evaluation

The Multimedia Event Detection (MED) evaluation track is part of the TRECVID Evaluation. The goal of MED is to assemble core detection technologies into a system that can search multimedia recordings for user-defined events based on pre-computed metadata. The metadata stores developed by the systems are expected to be sufficiently general to permit re-use for subsequent user-defined events.

For MED 2011, an event:

  • is a complex activity occurring at a specific place and time;
  • involves people interacting with other people and/or objects;
  • consists of a number of human actions, processes, and activities that are loosely or tightly organized and that have significant temporal and semantic relationships to the overarching activity;
  • is directly observable.

A user searching for events in multimedia material may be interested in a wide variety of potential events. Since it is intractable to build special-purpose detectors for each event a priori, technology is needed that can take as input a human-generated definition of the event that a system will use to search a collection of multimedia clips. The MED evaluation series will define events via an event kit (a structural sketch follows the list), which consists of:

  • An event name which is a mnemonic title for the event.
  • An event definition which is a textual definition of the event.
  • An event explication which is a textual exposition of the terms and concepts used in the event definition, at least those not commonly known.
  • An evidential description which is a textual listing of attributes that are indicative of an event instance. The evidential description provides a notion of some potential types of visual and acoustic evidence indicating the possibility of an event's existence, but it is neither an exhaustive list nor is it to be interpreted as required evidence.
  • A set of illustrative video examples each containing an instance of the event. The examples are illustrative in the sense that they help form the definition of the event but they do not demonstrate all possible variability or potential realizations.   
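
For concreteness, the components above can be pictured as a single record per event. The sketch below is illustrative only and is not part of the MED distribution; the field names are assumptions chosen to mirror the list above.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class EventKit:
        """Hypothetical stand-in for a MED '11 event kit."""
        name: str                    # mnemonic title, e.g., "Birthday party"
        definition: str              # textual definition of the event
        explication: str             # exposition of terms used in the definition
        evidential_description: str  # indicative, but not required, audio/visual evidence
        example_clip_ids: List[str] = field(default_factory=list)  # illustrative examples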

MED Task Definition

The MED task is: given an Event Kit, detect the occurrence of an event within a multimedia clip. 

The MED task is a "multimedia" task in that systems will be expected to detect evidence of the event using either or both the audio and video streams (but not human-created textual metadata) of the clips.
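
Viewed as a contract, then, a MED system maps an event kit and a clip to a detection score and a yes/no decision. The stub below, which builds on the hypothetical EventKit sketch above, is one way to picture that interface; the signature is an assumption, not NIST's API.

    def detect(kit: EventKit, clip_path: str, threshold: float = 0.5):
        """Return (score, decision) for one clip judged against one event kit.

        A real system would derive the score from the clip's audio and/or
        video streams only; human-created textual metadata may not be used.
        """
        score = 0.0  # placeholder: a real system computes this from the clip
        return score, score >= threshold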

Participants must build a system for at least one of the test events in order to participate in the evaluation and TRECVID Conference. 

Information Dissemination

NIST maintains an email discussion list to disseminate information. Send requests to join the list to med_poc at nist dot gov.

Evaluation Plan and Data Use Rules

MED system performance will be evaluated as specified in the evaluation plan, which contains the rules, protocols, metric definitions, scoring instructions, and submission instructions.

The current version is MED11-EvalPlan-V03-20110801a.

The differences between V03 and V02 (highlighted in the comparison version between V03 and V02) include wording changes in Sections 1.0 and 2.2.2.

Section 3.3 of the evaluation plan defines the rules for using the testing resources. Both the structure of the data resources and the rules for using the testing resources are complicated. To ensure participants properly use the data, Section 5.0 "MED Data Use Guidance" of the README from the MED-11 Documentation and Metadata Distribution (LDC distribution LDC2011E42) describes the MED '11 corpora components and their allowed use.

Additionally, NIST is hosting Q&A telecons to answer questions. Answers to previously asked questions can be found in the MED 11 Data Use FAQ. Participants should contact NIST if these resources fail to answer their questions.

Data Resources

A collection of Internet multimedia (i.e., clips containing both audio and video streams) will be provided to registered MED participants. The data, which was collected by the Linguistic Data Consortium, consists of publicly available, user-generated content posted to the various Internet video hosting sites.

Participants will receive the MED '10 data set and new data resources for the MED '11 evaluation.  The new resources include the following: 

  1. 15 new event kits -- 5 will be designated as training events and 10 will be designated as testing events. 
  2. A collection of video for system development called the Transparent Development (DEV-T) collection. The DEV-T collection will include instances of the 5 training events. The final size of the DEV-T collection will be ~370 hrs of clips.
  3. A collection of video for system evaluation called the Opaque Development (DEV-O) collection.  The DEV-O corpus will contain instances of the testing events. The size of the DEV-O collection will be ~1200 hrs of clips.

The evaluation plan will specify usage rules of the DEV-T and DEV-O collections in full detail.

2011 Event Kits

Fifteen new events will be defined for the 2011 evaluation. The events will be separated into training events and testing events as listed below.

Training Event Names:
  • Attempting a board trick
  • Feeding an animal
  • Landing a fish
  • Wedding ceremony
  • Working on a woodworking project

Testing Event Names:
  • Birthday party
  • Changing a vehicle tire
  • Flash mob gathering
  • Getting a vehicle unstuck
  • Grooming an animal
  • Making a sandwich
  • Parade
  • Parkour
  • Repairing an appliance
  • Working on a sewing project

Event names need to be interpreted in the full context of the event definitions that will be made available as part of the event kits released on March 1st. 

Last year, three events were used for the 2010 pilot evaluation. Those events were: "Making a cake", "Batting a run in", and "Assembling a shelter". The Linguistic Data Consortium created a web page containing the details of the 2010 Event Kit definitions. The linked examples defined in an event kit are meant to inform discussions only and are not incorporated in the MED-10 distributed data resources.

Video data

Clips will be provided in MPEG-4 formatted files. The video will be encoded to the H.264 standard. The audio will be encoded using MPEG-4's Advanced Audio Coding (AAC) standard.
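
As a sanity check on received clips, a tool such as ffprobe can report the container's stream codecs. The snippet below is one possible check, assuming ffprobe is installed; it is not part of the MED distribution.

    import json
    import subprocess

    def stream_codecs(path):
        """Return codec names keyed by stream type, as reported by ffprobe."""
        result = subprocess.run(
            ["ffprobe", "-v", "error", "-print_format", "json",
             "-show_streams", path],
            capture_output=True, text=True, check=True)
        return {s["codec_type"]: s["codec_name"]
                for s in json.loads(result.stdout)["streams"]}

    # A conforming MED clip should report {'video': 'h264', 'audio': 'aac'}.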

Data releases

The video data collection will be released in two parts.

  1. The first part of the data collection will be made available to registered participants on March 1st.  The release will contain the MED '10 data, DEV-T Part 1, and the Event Kits.  The release will consist of an ext3-formatted disk drive for the video and a web download for the textual data.
  2. The second, and final, part will be made available to registered participants on June 24th. The release will include an ext3-formatted disk drive containing DEV-T Part 2 and the DEV-O collection, along with an update to the web download for the textual data.

Data Licensing

In order to obtain the MED-10 and MED-11 corpora, TRECVID-registered sites must complete an evaluation license with LDC. Each site that requires access to the data, whether as part of a team or as a standalone researcher, must complete a license.

To complete the evaluation license follow these steps:

  1. Download the MED-11 Evaluation License.
  2. Return the completed license to LDC's Membership Office via email at ldc [at] ldc.upenn.edu. Alternatively, you may fax the completed license to LDC at 215-573-2175.
  3. When you send the completed license to LDC, include the following information:
    • Registered TRECVID Team name
    • Site/organization name
    • Data contact person's name
    • Data contact person's email

The designated data contact person for each site will receive instructions from LDC about the specific procedures for obtaining the data packages when they are released.

Dry Run Evaluation

The dry run period for MED will run until July 29, 2011. Dry run submissions will be accepted at any time during the period. The dry run is an opportunity for developers to make sure they are able to generate valid system output that can be scored with the NIST scoring tools. Because the actual performance of the system is not of interest during the dry run, developers may use any method to generate their system output, e.g., a random system or training on the dry run data.

The procedure for participating in the dry run is as follows:

  1. Obtain the data sets by completing the licensing agreement as specified above. The dry run will use files from the DEV-T collection.
  2. Download the Dry Run database files.
  3. Run your MED system(s) on the trials specified in the DRYRUN_TrialIndex.csv file (a placeholder-output sketch follows this list).
  4. Install the Evaluation Tools per the "Evaluation Tools" Section below.   
  5. Package system outputs per the instructions in V02 of the evaluation plan and the scoring primer.  This includes the following sub-steps.
    1. Building the submission directory structure.
    2. Validating your submission with the validator.
    3. Self-scoring your submission with the DEVA_cli scoring tool.
  6. Send the system outputs to NIST per the instructions in the evaluation plan.
  7. NIST will provide scoring reports to the site (and only the site).
  8. The site compares NIST output with self-scored output.
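
Because dry run output need not be meaningful, one quick way to exercise steps 3 through 5 is to emit a random score for every trial in DRYRUN_TrialIndex.csv. The sketch below assumes the trial index is a CSV file with a TrialID column and uses illustrative output column names; the evaluation plan and the scoring primer define the authoritative submission format, and the validator will reject anything malformed.

    import csv
    import random

    def write_placeholder_output(trial_index_path, output_path, threshold=0.5):
        """Write a random-system output file for a dry run submission."""
        with open(trial_index_path, newline="") as src, \
             open(output_path, "w", newline="") as dst:
            reader = csv.DictReader(src)
            writer = csv.writer(dst)
            writer.writerow(["TrialID", "Score", "Decision"])  # assumed columns
            for row in reader:
                score = random.random()  # any method is fine for the dry run
                writer.writerow([row["TrialID"], "%.4f" % score,
                                 "y" if score >= threshold else "n"])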

Evaluation Tools

Evaluation scripts supporting the MED evaluation are included in the NIST Framework for Detection Evaluations (F4DE) Toolkit, Version 2.3 or later, found on the NIST MIG tools page.

The package contains an MED evaluation primer, found at F4DE-2.3/DEVA/doc/TRECVid-MED11-ScoringPrimer.html within the distribution.

Schedule

Consult the TRECVID Master schedule.

Revision History

  • Feb 15, 2011 - Initial page created.
  • June 26, 2011 - Added details about the dry run. Updated the eval plan to V02.
  • August 7, 2011 - Added V03 of the eval plan.