Publications by: Ellen M. Voorhees (Assoc)
Displaying 26 - 50 of 85

TREC Genomics Special Issue: Information Retrieval

November 10, 2008
Author(s)
Ellen M. Voorhees, William Hersh
This paper is the introductory paper for a special issue of the journal Information Retrieval devoted to the TREC genomics track. The TREC genomics track ran from 2003 to 2007 and was one of the largest and longest-running challenge evaluations in

On Test Collections for Adaptive Information Retrieval

March 17, 2008
Author(s)
Ellen M. Voorhees
Traditional Cranfield test collections represent an abstraction of a retrieval task that Sparck Jones calls the core competency of retrieval: a task that is necessary, but not sufficient, for user retrieval tasks. The abstraction facilitates research by

Overview of TREC 2006

November 26, 2007
Author(s)
Ellen M. Voorhees
This report provides an overview of the fifteenth Text REtrieval Conference, TREC 2006. TREC 2006 was held at NIST on November 16-19, 2006. The conference was cosponsored by NIST and the Disruptive Technology Office (DTO). The conference attracted 107

TREC: Continuing Information Retrieval's Tradition of Experimentation

November 26, 2007
Author(s)
Ellen M. Voorhees
This article provides a high-level overview of the history, results and impact of NIST's Text REtrieval Conference (TREC). The article will appear in a special issue of the Communications of the ACM devoted to experimentation in computer science.

Bias and the Limits of Pooling for Large Collections

July 17, 2007
Author(s)
C E. Buckley, Darrin L. Dimmick, Ian Soboroff, Ellen M. Voorhees
Modern retrieval test collections are built through a process called pooling in which only a sample of the entire document set is judged for each topic. The idea behind pooling is to find enough relevant documents such that when unjudged documents are
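The depth-k pooling process described above can be sketched in a few lines: for each topic, take the top-ranked documents from every participating run and union them into the set that assessors judge. This is a minimal illustrative sketch, not TREC's actual tooling; the function name and data layout are hypothetical.

```python
# Minimal sketch of depth-k pooling. Names are illustrative, not NIST's code.
def build_pool(runs, depth=100):
    """Union the top-`depth` documents from each run for a single topic.

    runs  -- list of ranked runs; each run is a list of doc ids, best first
    depth -- how deep into each ranking to pool (the "pool depth")
    """
    pool = set()
    for ranked_docs in runs:
        pool.update(ranked_docs[:depth])  # only the top-depth docs get judged
    return pool

# Two toy runs for one topic; with depth=2, only d1, d2, d4 enter the pool,
# so d3 and d5 would remain unjudged documents.
runs = [["d1", "d2", "d3"], ["d2", "d4", "d5"]]
print(sorted(build_pool(runs, depth=2)))  # → ['d1', 'd2', 'd4']
```

Documents outside the pool are never judged, which is exactly the source of the potential bias the paper examines as collections grow.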

The Fifth Text Retrieval Conference [TREC-5]

October 30, 2006
Author(s)
Ellen M. Voorhees, Donna K. Harman
This paper is the track report for the TREC-5 confusion track. For TREC-5, retrieval from corrupted data was studied through retrieval of specific target documents from a corpus that was corrupted by applying OCR techniques to page images of varying

Overview of the TREC 2005 Robust Retrieval Track

October 16, 2006
Author(s)
Ellen M. Voorhees
The robust retrieval track explores methods for improving the consistency of retrieval technology by focusing on poorly performing topics. The retrieval task in the track is a traditional ad hoc retrieval task where the evaluation methodology emphasizes a

Overview of the TREC 2005 Question Answering Track

October 2, 2006
Author(s)
Ellen M. Voorhees, Hoa T. Dang
The TREC 2005 Question Answering (QA) track contained three tasks: the main question answering task, the document ranking task, and the relationship task. In the main task, question series were used to define a set of targets. Each series was about a

Overview of TREC 2005

May 17, 2006
Author(s)
Ellen M. Voorhees
The fourteenth Text REtrieval Conference, TREC 2005, was held at the National Institute of Standards and Technology (NIST) November 15-18, 2005. The conference was co-sponsored by NIST and the US Department of Defense Advanced Research and Development

TREC: An Overview

February 17, 2006
Author(s)
Donna K. Harman, Ellen M. Voorhees
The Text REtrieval Conference (TREC) is a workshop series designed to build the infrastructure necessary for large-scale evaluation of text retrieval technology. Participants in the workshops (over 100 groups in the latest TREC) have been drawn from the

Report on the TREC-5 Confusion Track

October 24, 2005
Author(s)
Paul B. Kantor, Ellen M. Voorhees
This paper is the track report for the TREC-5 confusion track. For TREC-5, retrieval from corrupted data was studied through retrieval of specific target documents from a corpus that was corrupted by applying OCR techniques to page images of varying

The TREC-5 Database Merging Track

October 24, 2005
Author(s)
Ellen M. Voorhees
This paper is the track report for the TREC-5 database merging track. The report motivates the database merging problem, defines the task performed by the participants in detail, and summarizes the main results of the track.

IT: The Thirteenth Text Retrieval Conference, TREC 2004

October 3, 2005
Author(s)
Ellen M. Voorhees
This report provides an overview of the thirteenth Text Retrieval Conference, TREC 2004. TREC 2004 was held at the National Institute of Standards and Technology (NIST) November 16-19, 2004. The conference was co-sponsored by NIST, the US Department of

Retrieval System Evaluation

September 26, 2005
Author(s)
C E. Buckley, Ellen M. Voorhees
One of the primary motivations for TREC was to standardize retrieval system evaluation. Prior to TREC, there was little explicit discussion of what constituted a minimally acceptable experimental design, and no hard evidence to support any position. TREC

Overview of the TREC 2004 Question Answering Track

August 1, 2005
Author(s)
Ellen M. Voorhees
The TREC 2004 Question Answering track contained a single task in which question series were used to define a set of targets. Each series contained factoid and list questions and related to a single target. The final question in the series was an Other

Overview of the TREC 2004 Robust Retrieval Track

August 1, 2005
Author(s)
Ellen M. Voorhees
The robust retrieval track explores methods for improving the consistency of retrieval technology by focusing on poorly performing topics. The retrieval task in the track is a traditional ad hoc retrieval task where the evaluation methodology emphasizes a

Overview of TREC 2004

August 1, 2005
Author(s)
Ellen M. Voorhees
This report provides an overview of the thirteenth Text REtrieval Conference, TREC 2004. TREC 2004 was held at the National Institute of Standards and Technology (NIST) November 16-19, 2004. The conference was co-sponsored by NIST, the U.S. Department of

IT: The Twelfth Text Retrieval Conference, TREC 2003

October 25, 2004
Author(s)
Ellen M. Voorhees, Donna K. Harman
This chapter provides an executive summary of the TREC workshop series and the remainder of the volume. It explains the motivation for TREC and highlights TREC's accomplishments in improving retrieval effectiveness and fostering technology transfer.

Retrieving Noisy Text

September 26, 2004
Author(s)
Ellen M. Voorhees, John S. Garofolo
Two tracks within TREC have examined the problem of retrieving noisy documents: documents whose content is not necessarily a faithful representation of the author's intent. The confusion track tested the ability of systems to retrieve documents that were

Retrieval Evaluation with Incomplete Information

July 1, 2004
Author(s)
C E. Buckley, Ellen M. Voorhees
This paper examines whether the Cranfield evaluation methodology is robust to gross violations of the completeness assumption (i.e., the assumption that all relevant documents within a test collection have been identified and are present in the collection)

Overview of the TREC 2003 Question Answering Track

March 1, 2004
Author(s)
Ellen M. Voorhees
The TREC 2003 question answering track contained two tasks, the passages task and the main task. In the passages task, systems returned a single text snippet in response to factoid questions; the evaluation metric was the number of snippets that contained