
Publications by Ellen M. Voorhees (Assoc)

Displaying 51 - 75 of 85

Overview of the TREC 2003 Robust Retrieval Track

March 1, 2004
Author(s)
Ellen M. Voorhees
The robust retrieval track is a new track in TREC 2003. The goal of the track is to improve the consistency of retrieval technology by focusing on poorly performing topics. In addition, the track brings back a classic, ad hoc retrieval task to TREC that

The Eleventh Text REtrieval Conference (TREC-11)

May 1, 2003
Author(s)
Ellen M. Voorhees, Donna K. Harman
The Eleventh Text Retrieval Conference was held in Gaithersburg, Maryland, November 19-22, 2002. TREC 2002 is the latest in a series of workshops designed to foster research in information retrieval and related tasks. This year's conference consisted of

Overview of the TREC 2002 Question Answering Track

April 1, 2003
Author(s)
Ellen M. Voorhees
The TREC question answering track is an effort to bring the benefits of large-scale evaluation to bear on the question answering problem. The track contained two tasks in TREC 2002, the main task and the list task. Both tasks required that the answer

The Eleventh Text Retrieval Conference (TREC 2002)

April 1, 2003
Author(s)
Ellen M. Voorhees
TREC 2002 is the latest in a series of workshops designed to foster research in information retrieval and related tasks. This year's conference consisted of seven different tasks: cross-language retrieval, filtering, interactive retrieval, novelty

Advanced Question Answering for Intelligence AQUAINT Website

October 1, 2002
Author(s)
Ellen M. Voorhees

The Effect of Topic Set Size on Retrieval Experiment Error

August 1, 2002
Author(s)
Ellen M. Voorhees, C E. Buckley
Retrieval mechanisms are frequently compared by computing the respective average scores for some effectiveness metric across a common set of information needs or topics. Since retrieval system behavior is known to be highly variable across topics, good

Document Understanding Conferences Website

July 1, 2002
Author(s)
Ellen M. Voorhees

Overview of the TREC 2001 Question Answering Track

April 1, 2002
Author(s)
Ellen M. Voorhees
The TREC question answering track is an effort to bring the benefits of large-scale evaluation to bear on the question answering problem. In its third year, the track continued to focus on retrieving small snippets of text that contain an answer to a

The Tenth Text Retrieval Conference, TREC-2001

April 1, 2002
Author(s)
Ellen M. Voorhees, Donna K. Harman
TREC 2001 is the latest in a series of workshops designed to foster research in information retrieval and related tasks. This year's conference consisted of six different tasks, including a new task on content-based retrieval of digital video. The overview

The Philosophy of Information Retrieval Evaluation

January 1, 2002
Author(s)
Ellen M. Voorhees
Evaluation conferences such as TREC, CLEF, and NTCIR are modern examples of the Cranfield evaluation paradigm. In the Cranfield paradigm, researchers perform experiments on test collections to compare the relative effectiveness of different retrieval

Overview of the TREC 2001 Question Answering Track

November 26, 2001
Author(s)
Ellen M. Voorhees
The TREC question answering track is an effort to bring the benefits of large-scale evaluation to bear on the question answering problem. In its third year, the track continued to focus on retrieving small snippets of text that contain an answer to a

Question Answering in TREC

November 1, 2001
Author(s)
Ellen M. Voorhees
The Text REtrieval Conference (TREC) question answering track is an effort to bring the benefits of large-scale evaluation to bear on a question answering (QA) task. The track has run twice so far, first in TREC-8 and again in TREC-9. In each case the goal

The TREC Question Answering Track

October 29, 2001
Author(s)
Ellen M. Voorhees
The Text REtrieval Conference (TREC) question answering track is an effort to bring the benefits of large-scale evaluation to bear on a question answering (QA) task. The track has run twice so far, first in TREC-8 and again in TREC-9. In each case the goal

The Ninth Text REtrieval Conference (TREC-9)

October 1, 2001
Author(s)
Ellen M. Voorhees, Donna K. Harman
This paper provides an overview of the ninth Text REtrieval Conference (TREC-9) held in Gaithersburg, Maryland, November 13-16, 2000. TREC-9 is the latest in a series of workshops designed to foster research in text retrieval. This year's conference

The Philosophy of Information Retrieval Evaluation

September 24, 2001
Author(s)
Ellen M. Voorhees
Evaluation conferences such as TREC, CLEF, and NTCIR are modern examples of the Cranfield evaluation paradigm. In the Cranfield paradigm, researchers perform experiments on test collections to compare the relative effectiveness of different retrieval

Overview of the TREC-9 Question Answering Track

September 3, 2001
Author(s)
Ellen M. Voorhees
The TREC question answering track is an effort to bring the benefits of large-scale evaluation to bear on the question answering problem. The track has run twice so far, where the goal both times was to retrieve small snippets of text that contain the

Evaluation by Highly Relevant Documents

January 1, 2001
Author(s)
Ellen M. Voorhees
Given the size of the web, the search engine industry has argued that engines should be evaluated by their ability to retrieve highly relevant pages rather than all possible relevant pages. To explore the role highly relevant documents play in retrieval

The TREC-8 Question Answering Track Report

December 11, 2000
Author(s)
Ellen M. Voorhees
The TREC-8 Question Answering track was the first large-scale evaluation of domain-independent question answering systems. This paper summarizes the results of the track by giving a brief overview of the different approaches taken to solve the problem. The

Overview of the TREC-8 Web Track

November 1, 2000
Author(s)
D Hawking, Ellen M. Voorhees, Nick Craswell, Peter Bailey
The TREC-8 Web Track defined ad hoc retrieval tasks over a 100 gigabyte collection of spidered Web documents (Large Web Task) and a selected 2 gigabyte subset of those documents (Small Web Task). Here, the guidelines and resources for both tasks are

The Eighth Text REtrieval Conference (TREC-8)

November 1, 2000
Author(s)
Ellen M. Voorhees, Donna K. Harman
This report constitutes the proceedings of the eighth Text REtrieval Conference (TREC-8) held in Gaithersburg, Maryland, November 16, 1999. The conference was co-sponsored by the National Institute of Standards and Technology (NIST) and the Defense

Building a Question Answering Test Collection

July 1, 2000
Author(s)
Ellen M. Voorhees, D M. Tice
The TREC-8 Question Answering (QA) Track was the first large-scale evaluation of domain-independent question answering systems. In addition to fostering research on the QA task, the track was used to investigate whether the evaluation methodology used for

Evaluating Evaluation Measure Stability

July 1, 2000
Author(s)
C E. Buckley, Ellen M. Voorhees
This paper presents a novel way of examining the accuracy of the evaluation measures commonly used in information retrieval experiments. It validates several of the rules-of-thumb experimenters use, such as the number of queries needed for a good

The TREC-8 Question Answering Track Evaluation

May 1, 2000
Author(s)
Ellen M. Voorhees, D M. Tice
The TREC-8 Question Answering track was the first large-scale evaluation of systems that return answers, as opposed to lists of documents, in response to a question. As a first evaluation, it is important to examine the evaluation methodology itself to