
Publications by Ellen M. Voorhees (Assoc)


Overview of the TREC 2001 Question Answering Track

November 26, 2001
Author(s)
Ellen M. Voorhees
The TREC question answering track is an effort to bring the benefits of large-scale evaluation to bear on the question answering problem. In its third year, the track continued to focus on retrieving small snippets of text that contain an answer to a

Question Answering in TREC

November 1, 2001
Author(s)
Ellen M. Voorhees
The Text REtrieval Conference (TREC) question answering track is an effort to bring the benefits of large-scale evaluation to bear on a question answering (QA) task. The track has run twice so far, first in TREC-8 and again in TREC-9. In each case the goal

The TREC Question Answering Track

October 29, 2001
Author(s)
Ellen M. Voorhees
The Text REtrieval Conference (TREC) question answering track is an effort to bring the benefits of large-scale evaluation to bear on a question answering (QA) task. The track has run twice so far, first in TREC-8 and again in TREC-9. In each case the goal

The Ninth Text REtrieval Conference (TREC-9)

October 1, 2001
Author(s)
Ellen M. Voorhees, Donna K. Harman
This paper provides an overview of the ninth Text REtrieval Conference (TREC-9) held in Gaithersburg, Maryland, November 13-16, 2000. TREC-9 is the latest in a series of workshops designed to foster research in text retrieval. This year's conference

The Philosophy of Information Retrieval Evaluation

September 24, 2001
Author(s)
Ellen M. Voorhees
Evaluation conferences such as TREC, CLEF, and NTCIR are modern examples of the Cranfield evaluation paradigm. In the Cranfield paradigm, researchers perform experiments on test collections to compare the relative effectiveness of different retrieval

Overview of the TREC-9 Question Answering Track

September 3, 2001
Author(s)
Ellen M. Voorhees
The TREC question answering track is an effort to bring the benefits of large-scale evaluation to bear on the question answering problem. The track has run twice so far, where the goal both times was to retrieve small snippets of text that contain the

Evaluation by Highly Relevant Documents

January 1, 2001
Author(s)
Ellen M. Voorhees
Given the size of the web, the search engine industry has argued that engines should be evaluated by their ability to retrieve highly relevant pages rather than all possible relevant pages. To explore the role highly relevant documents play in retrieval

The TREC-8 Question Answering Track Report

December 11, 2000
Author(s)
Ellen M. Voorhees
The TREC-8 Question Answering track was the first large-scale evaluation of domain-independent question answering systems. This paper summarizes the results of the track by giving a brief overview of the different approaches taken to solve the problem. The

Overview of the TREC-8 Web Track

November 1, 2000
Author(s)
D. Hawking, Ellen M. Voorhees, Nick Craswell, Peter Bailey
The TREC-8 Web Track defined ad hoc retrieval tasks over a 100 gigabyte collection of spidered Web documents (Large Web Task) and a selected 2 gigabyte subset of those documents (Small Web Task). Here, the guidelines and resources for both tasks are

The Eighth Text REtrieval Conference (TREC-8)

November 1, 2000
Author(s)
Ellen M. Voorhees, Donna K. Harman
This report constitutes the proceedings of the eighth Text REtrieval Conference (TREC-8) held in Gaithersburg, Maryland, November 16, 1999. The conference was co-sponsored by the National Institute of Standards and Technology (NIST) and the Defense

Building a Question Answering Test Collection

July 1, 2000
Author(s)
Ellen M. Voorhees, D. M. Tice
The TREC-8 Question Answering (QA) Track was the first large-scale evaluation of domain-independent question answering systems. In addition to fostering research on the QA task, the track was used to investigate whether the evaluation methodology used for

Evaluating Evaluation Measure Stability

July 1, 2000
Author(s)
C. E. Buckley, Ellen M. Voorhees
This paper presents a novel way of examining the accuracy of the evaluation measures commonly used in information retrieval experiments. It validates several of the rules-of-thumb experimenters use, such as the number of queries needed for a good

The TREC-8 Question Answering Track Evaluation

May 1, 2000
Author(s)
Ellen M. Voorhees, D. M. Tice
The TREC-8 Question Answering track was the first large-scale evaluation of systems that return answers, as opposed to lists of documents, in response to a question. As a first evaluation, it is important to examine the evaluation methodology itself to