
MED 11 Data Use FAQs

FAQ #1: Can we use knowledge from the event kits (text, positive clips, and related clips) for the test events?

For example, we want to build some highly related semantic concepts for these test events. Of course, these concepts are selected based on the event kits. Is this OK?

Answer: Yes, this is OK.

Per Section 1 of the Evaluation Plan, "the metadata store may be optimized with knowledge of the events to be evaluated". Your semantic concepts form knowledge in the metadata store. Further, Section 3.1 of the Evaluation Plan states "All resources in the event kits can be used for development and testing."

FAQ #2: Are there any positive Event006-015 clips in the DEVT Collection?

Answer: The majority of positive clips identified for E006-E015 are in the event kits. There are some positive clips, and there may be unidentified positive clips, in the DEVT collection. Researchers should post newly found positives in the DEVT collection to the HAVIC corpus issue tracker website via your team's Data Contact.

FAQ #3: Are we allowed to use the DEVT clips to prepare for the Dry Run?

Answer: Yes. As described under the Dry Run section on the MED '11 website, developers can use the Event Kits and/or the DEVT collection to prepare their Dry Run submission -- even clips in the dry run database files. The Dry Run is a test of the evaluation infrastructure, not system performance.

FAQ #4: Can we get negative examples from the Test Set?

Answer: No. Per Section 3.3 of the Evaluation Plan, no knowledge should be gained from processing the Test Set. Only the CDR should be generated. Fully automatic feature adaptation is allowed during CDR generation of the test set per the first bulleted item in Section 3.3 of the Evaluation Plan.

FAQ #5: Can HAVIC videos be used in presentations?

Answer: Yes. The MED '11 license agreement states "User and User's Research Group may include limited excerpts from the Data in articles, reports and other documents describing the results of work performed in the MED11 evaluation and of research toward improved MED performance."

ALADDIN contractors must obscure faces when displaying HAVIC images in public forums. Non-ALADDIN contractors are encouraged to do the same to protect people's identities to the extent possible.

FAQ #6: "Near_miss" and "related" clips are similar, can you explain the difference?

Answer: The INSTANCE_Types are defined in Table 2 of the Evaluation Plan. "Near_miss" clips are closer to the event, including many aspects of the event. "Related" clips are a much broader class that logically subsumes "Near_miss". The distinction is subjective.

Not all clips were judged against all instance types. Where clips were judged against multiple instance types, the most restrictive membership was used.

FAQ #7: Will the "near_miss" and "related" categories be used in the evaluation?

Answer: The "Primary Measures" summarizing system performance will not use these annotations meaning these clips will be considered negative examples of the events. However, NIST will perform conditioned analysis using these annotations to help identify strengths and weaknesses of systems.

FAQ #8: On page 13 of the Evaluation Plan, parallel processing steps contribute to TPT as a single step. Was this intended?

Answer: Yes. This is different from previous evaluations. The processing time for parallelized operations is not multiplied by the number of processor units. The computing hardware sections of your system description (Appendix B.1, Sections 3 and 4) should document the composition of your cluster.
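
As an illustration of the accounting (a sketch under the assumption that TPT is tallied per step in wall-clock time):

```python
# Illustrative sketch, not NIST's official tooling: a parallelized step
# contributes its wall-clock time to Total Processing Time (TPT), i.e.,
# the time of the slowest processing unit, not the sum across units.
def step_tpt_hours(unit_times_hours):
    return max(unit_times_hours)

# E.g., feature extraction split across 8 nodes at ~2.0 hours each
# contributes 2.0 hours to TPT, not 16.0.
print(step_tpt_hours([2.0] * 8))  # -> 2.0
```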

FAQ #9: The banned clip file (located in the LDC2011E42 release) contains 44 banned files not in the LDC2011E06 and LDC2011E41 releases. Why are they listed?

Answer: The clip location file covers all MED data including LDC2010E47_{01-40} (i.e., the MED '10 resources released in 2010). The 44 banned files were released in 2010, but not again in the subsequent releases.

FAQ #10: Can we optimize parameters for the Training and Testing events using the DEVT Collection?

Answer: Yes. Per Section 5.0 of the MED-11 Documentation and Metadata README, there is unrestricted use of the event kits in conjunction with the DEVT Collection.

FAQ #11: Can we use external detectors, like person detectors, trained on other images?

Answer: Yes, as long as the code is run in-house.

FAQ #12: Can we use sounds downloaded from FreeSound.org and data from other sources like YouTube?

Answer: The rules of the MED evaluation do not prevent the use of non-HAVIC data; however, the site should make sure it has the legal right to use the data. ALADDIN contractors must request changes in research data through the process established by the program, via your team's PI.

FAQ #13: We have noticed that processing times of the search collections will be a limiting factor for system development and evaluation. Will there be an evaluation condition to process less than the 32K clip evaluation set?

Answer: No. Participants have the option to process fewer than 10 events per Section 2 of the Eval Plan.

FAQ #14: Can the detection threshold be unique for each event?

Answer: Yes, per Section 2 of the Eval Plan (under the detection threshold).

FAQ #15: When do the detection thresholds get set during the running of an evaluation system?

Answer: The detection threshold for each event should be set during CDR generation time of the development data. The threshold can be re-tuned via automatic means during Event Agent Generation (EAG) by making use of the event kits and the development/test CDRs (i.e., not (re-)processing the clips). The threshold cannot be changed after search begins; however, confidence estimates for individual clips can be scaled to the threshold during search time.
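
A minimal sketch of the permitted search-time scaling; the logistic rescaling below is an assumed choice, not something the Eval Plan specifies:

```python
import math

# Hypothetical sketch: the per-event threshold is frozen before search
# begins, but each clip's raw confidence may be rescaled relative to that
# threshold at search time. A logistic squash maps the threshold to 0.5.
def scaled_confidence(raw_score, threshold):
    return 1.0 / (1.0 + math.exp(-(raw_score - threshold)))

def detection(raw_score, threshold):
    # Equivalent to raw_score >= threshold; the decision rule is unchanged.
    return scaled_confidence(raw_score, threshold) >= 0.5
```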

FAQ #16: Why are clips in the DEVT directories of LDC2011E06 not found in the DEVT database files?

Answer: Two reasons: (1) Several DEVT clips were moved to Event Kits because they were judged as "related" clips for the event. (2) 12 of the files should have been listed as "banned" in the clip location file. The problem has been reported on the data reporting web site.

FAQ #17: What's the difference between the testing and training events?

Answer: The training events (E001-E005) have about double the positive instances of the testing events (E006-E015). The extra positives were provided to help develop parameter tuning methods.

FAQ #18: What is the difference between AutoEAG and SemiAutoEAG in the Evaluation Plan?

Answer: Automatic Event Agent Generation (AutoEAG) means that the Event Kit is processed entirely by automatic means, without human intervention. For instance, if the Event Agent is generated by a system that has a natural language processing engine to parse and extract content from the event description text, a multimedia content extraction engine to process the positive and related clips, and a statistical modeling system to build a model for the event, then it is an AutoEAG system. If there is human intervention in any of these (or other) steps, it would be a SemiAutoEAG system.

FAQ #19: In Section 3.3 of the Eval Plan, what does "Adaptation in the feature extraction process" mean?

Answer: It means that a feature extractor can change its behavior automatically.  For instance, if your acoustic event detector is able to automatically adjust the noise compensation algorithm parameters during the processing of a clip, this is allowed.
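
A minimal sketch of what such in-clip adaptation could look like; the frame energies, smoothing factor, and compensation rule below are all assumptions for illustration:

```python
# Hypothetical sketch of automatic, in-clip adaptation: the extractor
# tracks a running noise-floor estimate while processing a clip and
# compensates each frame's energy against it, with no human in the loop.
def extract_compensated_energies(frame_energies, alpha=0.05):
    noise_floor = None
    features = []
    for energy in frame_energies:
        if noise_floor is None:
            noise_floor = energy
        else:
            # Automatic parameter adjustment during clip processing.
            noise_floor += alpha * (min(energy, noise_floor) - noise_floor)
        features.append(energy - noise_floor)
    return features
```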

FAQ #20: Where are the minutes from the Q&A telecons posted?

Answer: No notes are posted. Comments and questions are added to this FAQ.

FAQ #21: Why are videos banned?

Answer: Videos that contain objectionable content, people's names, URLs, etc. are not included in the HAVIC corpus. If clips are found to contain this type of information after distribution to researchers, they are judged as "banned" from use and must be deleted from your system per the LDC license agreement that you signed.

FAQ #22: Section 2.2.2 of the Eval Plan states "A MED system processes the metadata store detecting instances of each event and trial independently." When deciding whether video X is a match for event 1 (i.e., its score), can I use knowledge about video X's event 1 score, classification, etc., for event 2 (and/or 3 and/or 4 and/or...)?

Answer: No, you cannot use knowledge of event 1 (either at the event kit level or the "event+trial" level) for any other event. The entire processing of an event must be performed as if no other events have been defined or searched for.

FAQ #23: Section 2.2.2 of the Eval Plan states "A MED system processes the metadata store detecting instances of each event and trial independently." When deciding whether video X is a match for event 1 (i.e., its score), can I use knowledge about video X's event 1 score, classification, etc., for video Y's score for event 1?

Answer: Version V02 of the Eval Plan was incorrect. Each trial (an event ID/clip decision) does not need to be processed independently. For example, the system can adjust confidence scores by modeling the distribution of confidence scores for the collection. V03 of the Eval Plan no longer includes "and trials" in this sentence.
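
One common way to do this, sketched for illustration (the Eval Plan does not mandate any particular method):

```python
import statistics

# Illustrative sketch: since trials need not be processed independently,
# a system may rescale each clip's confidence against the score
# distribution of the whole collection, e.g., via z-normalization.
def normalize_scores(raw_scores):
    mu = statistics.mean(raw_scores)
    sigma = statistics.stdev(raw_scores) or 1.0  # guard against sigma == 0
    return [(score - mu) / sigma for score in raw_scores]
```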

FAQ #24: Section 1 of the Eval Plan says "..., participants must use ONLY COTS standard personal computing platform(s) to generate event agents and run searches." Does this include the processing of the positive and related clips for the event kits?

Answer: Version V02 of the Eval Plan was incorrect. The use of "COTS standard personal computing platform(s)" applies only to execution of the event agent, not to event agent generation (EAG). During EAG, the positive and related clips can be processed with the same hardware/software to generate a CDR. V03 of the Eval Plan has a corrected version of the text.

FAQ #25: Why does the submission checker disallow using a "+" or "_" in my team name when it is part of my official TRECVID short team name?

Answer: Using a "+" in an file name is not compatible with many file systems so the submission checker rejects it.  The "_" is the field separator for EXPIDs so the submission checker rejects it. Please either use a hyphen /-/ instead for either or do not use the "+" or "-".

 

Revision History

  • July 11, 2011 - Initial page created for FAQ #1-#12.
  • July 13, 2011 - Added content to FAQ #7. Added FAQ #13-#17.
  • July 25, 2011 - Added FAQs #18-#25. The texts for #2, #5, and #9 were clarified (see the bold content).
  • August 22, 2011 - Added the answer for FAQ #15.