The Multimedia Event Detection (MED) evaluation track is part of the TRECVID Evaluation. The multi-year goal of MED is to assemble core detection technologies into a system that can quickly and accurately search a multimedia collection for user-defined events. An event for MED 2010 is "an activity-centered happening that involves people engaged in process-driven actions with other people and/or objects at a specific place and time".
A user searching for events in multimedia material may be interested in a wide variety of potential events. Since it is an intractable task to build special purpose detectors for each event a priori, a technology is needed that can take as input a definition of the event that a human can use to search a collection of multimedia clips. The MED evaluation series will define events via an event kit which consists of:
- An event name which is a mnemonic title for the event.
- An event definition which is a textual definition of the event.
- An evidential description which is a textual listing of attributes that are indicative of an event instance. The evidential description conveys some potential types of visual and acoustic evidence of the event's existence, but it is neither an exhaustive list nor to be interpreted as required evidence.
- A set of illustrative video examples each containing an instance of the event. The examples are illustrative in the sense that they help form the definition of the event but they do not demonstrate all possible variability or potential realizations.
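The four parts of an event kit can be pictured as a simple container. The sketch below is for exposition only; it is an assumption for illustration, as MED '10 distributes event kits as text documents and video files, not in this form.

```python
from dataclasses import dataclass, field

@dataclass
class EventKit:
    """Illustrative container mirroring the four event-kit parts.

    A hypothetical structure for exposition; not an official MED format.
    """
    name: str                      # mnemonic title, e.g. "Making a cake"
    definition: str                # textual definition of the event
    evidential_description: str    # indicative (non-exhaustive) evidence
    examples: list = field(default_factory=list)  # illustrative clip IDs
```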
NIST maintains an email discussion list to disseminate information. Send requests to join the list to jfiscus at nist dot gov.
MED Task Definition
The MED task is: given an Event Kit, find all clips that contain the event in a video collection.
The MED task is a "multimedia" task in that systems will be expected to detect evidence of the event using either or both the audio and video streams (but not human-created textual metadata) of the clips.
Participants can choose to build systems for one, two, or all three 2010 events.
2010 Event Kits
Three events will be used for the 2010 pilot evaluation. The events will be: "Making a cake", "Batting a run", and "Assembling a shelter". In order to begin a community-wide discussion, the Linguistic Data Consortium has prepared a web page containing event definitions and example instances for each of the three events. The linked examples are meant to inform discussions only and may not be incorporated in the distributed data resources. The actual illustrative examples will be included in the video data resources described below.
MED system performance will be evaluated as specified in the evaluation plan. The evaluation plan contains the metric definitions, scoring instructions, and submission instructions.
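Detection evaluations of this kind typically score systems with a normalized detection cost combining miss and false-alarm rates. The sketch below illustrates that general form only; the parameter values shown (`c_miss`, `c_fa`, `p_target`) are placeholders, not the official MED '10 cost function parameters, which are specified in the evaluation plan.

```python
def normalized_detection_cost(p_miss, p_fa,
                              c_miss=1.0, c_fa=1.0, p_target=0.1):
    """Illustrative normalized detection cost (NDC).

    The default parameter values are placeholders for exposition,
    NOT the official MED '10 parameters -- consult the evaluation plan.
    """
    # Expected cost of the system's errors.
    cost = c_miss * p_miss * p_target + c_fa * p_fa * (1.0 - p_target)
    # Normalize by the cost of the best trivial system
    # (always answering "yes" or always answering "no").
    return cost / min(c_miss * p_target, c_fa * (1.0 - p_target))
```

Under this form, a system that misses everything (with no false alarms) scores 1.0, and lower is better.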
Current version: V05
A new collection of Internet multimedia (i.e., video clips containing both audio and video streams) will be provided to registered MED participants. The data, collected by the Linguistic Data Consortium, consists of publicly available, user-generated content posted to various Internet video hosting sites. Instances of the events were collected by specifically searching for target events using text-based Internet search engines. All included data has been reviewed for privacy and offensive material.
The video data collection will be divided into two data sets:
- Development data consisting of 1,746 total clips (~56 hours). The development set includes nominally 50 instances of each of the three MED '10 events; the remaining clips do not contain any of the three events.
- Evaluation data consisting of 1,742 total clips (~59 hours). The evaluation set will include instances of the three events, but the actual number of instances will not be released until the evaluation submissions are complete.
In order to obtain the MED-10 development and evaluation corpora, registered sites must complete an evaluation license with the LDC. Each site that requires access to the data, either as part of a team or as a standalone researcher, should complete a license.
To complete the evaluation license follow these steps:
- Download the license MED-10 Evaluation License V2.
- Return the completed license to LDC's Membership Office via email at firstname.lastname@example.org. Alternatively you may fax the completed license to LDC at 215-573-2175.
- When you send the completed license to LDC, include the following information:
- Registered TRECVID Team name
- Site/organization name
- Data contact person's name
- Data contact person's email
The designated data contact person for each site will receive automated web download instructions from LDC upon release of the data packages.
Dry Run Evaluation
The dry run period for MED will run until September 8, 2010. Dry run submissions will be accepted at any time during the period. The dry run is an opportunity for developers to make sure they are able to generate valid system output that can be scored with the NIST scoring tools. The actual performance of the system is not of interest during the dry run so developers may feel free to use any method to generate their system output, e.g., a random system, training on the dry run data, etc.
The procedure for participating in the dry run is as follows:
- Obtain the data sets by completing the licensing agreement as specified above. The dry run will use part of the development corpus.
- Download the Dry Run database files.
- Run your MED system(s) on the trials specified in the DRYER_TrialIndex.csv file.
- Package system outputs and send them to NIST per the instructions in the evaluation plan.
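The dry-run steps above can be sketched in code. This is a minimal illustration only: the trial index column name (`TrialID`), the output columns, and the 0.5 decision threshold are assumptions for exposition, not the formats mandated by the evaluation plan, which is authoritative.

```python
import csv

def run_dry_run(trial_index_path, output_path, score_fn):
    """Read a trial index CSV and write one scored decision per trial.

    The column names and output layout here are illustrative
    assumptions; the authoritative formats are defined in the
    MED '10 evaluation plan and the DRYER_TrialIndex.csv header.
    """
    with open(trial_index_path, newline="") as src, \
         open(output_path, "w", newline="") as dst:
        writer = csv.writer(dst)
        writer.writerow(["TrialID", "Score", "Decision"])
        for row in csv.DictReader(src):
            trial_id = row["TrialID"]
            score = score_fn(trial_id)           # system confidence score
            decision = "y" if score >= 0.5 else "n"
            writer.writerow([trial_id, f"{score:.4f}", decision])
```

Since system performance is not of interest during the dry run, `score_fn` could be as simple as `lambda t: random.random()`.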
Evaluation scripts to support the MED evaluation are included in the NIST Framework for Detection Evaluations (F4DE) Version 2.2 toolkit, available on the NIST MIG tools page.
The package contains an MED evaluation primer, found at F4DE-2.2/DEVA/doc/TRECVid-MED10ScoringPrimer.html within the distribution.
Consult the TRECVID Master schedule.
Change Log
- May 10, 2010 - Modified the event kit definitions in both the web site and the evaluation plan.
- June 29, 2010 - Modified the "Cake" event to be "Making a cake". Added the license agreement.
- July 19, 2010 - Added the clip format specification.
- July 29, 2010 - Updated the text with corpora details. Added the Dry Run index files. Updated the evaluation plan to V03. Updated the F4DE package details.
- April 6, 2010 - The evaluation plan version number was changed to V04. No content was changed; this was to fix a web site problem.
- September 7, 2010 - V05 of the evaluation plan added. It now contains the cost function parameters.