
On Test Collections for Adaptive Information Retrieval

Author(s)

Ellen M. Voorhees

Abstract

Traditional Cranfield test collections represent an abstraction of a retrieval task that Sparck Jones calls the core competency of retrieval: a task that is necessary, but not sufficient, for user retrieval tasks. The abstraction facilitates research by controlling for (some) sources of variability, thus increasing the power of experiments that compare system effectiveness while reducing their cost. However, even within the highly abstracted case of the Cranfield paradigm, meta-analysis demonstrates that the user/topic effect is greater than the system effect, so experiments must include a relatively large number of topics to distinguish systems' effectiveness. The evidence further suggests that changing the abstraction even slightly, to include just a bit more characterization of the user, will result in a dramatic loss of power or an increase in the cost of retrieval experiments. Defining a new, feasible abstraction to support adaptive IR research will require winnowing the list of all possible factors that can affect retrieval behavior down to a minimum number of essential factors.
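The power argument above can be illustrated with a small simulation (a hypothetical sketch, not from the paper: the parameter values and function names are illustrative assumptions). Per-topic effectiveness scores are generated so that topic difficulty varies far more than the true system difference; a paired comparison cancels the shared topic effect, but the remaining per-topic noise still swamps a small system effect unless many topics are used.

```python
import math
import random
import statistics

def simulate_diffs(n_topics, system_delta=0.02, topic_sd=0.15,
                   noise_sd=0.05, seed=0):
    """Simulate per-topic score differences between two systems.

    Hypothetical parameters: topic difficulty (topic_sd) varies much
    more than the true system difference (system_delta).
    """
    rng = random.Random(seed)
    diffs = []
    for _ in range(n_topics):
        topic = rng.gauss(0.4, topic_sd)        # shared topic difficulty
        a = topic + rng.gauss(0.0, noise_sd)    # system A on this topic
        b = topic + system_delta + rng.gauss(0.0, noise_sd)  # system B
        diffs.append(b - a)                     # topic effect cancels here
    return diffs

def paired_t(diffs):
    """t statistic for the mean of paired per-topic differences."""
    n = len(diffs)
    mean = statistics.fmean(diffs)
    sd = statistics.stdev(diffs)
    return mean / (sd / math.sqrt(n))

# With few topics the t statistic is typically too small to call the
# systems different; with many topics the same small effect is detectable.
print(round(paired_t(simulate_diffs(10)), 2))
print(round(paired_t(simulate_diffs(200)), 2))
```

The paired design mirrors how test collections are used in practice: both systems are run over the same topic set, so only the residual per-topic variability, not the (much larger) topic effect itself, limits the experiment's power.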
Keywords

information knowledge management, information retrieval evaluation, search

Citation

Voorhees, E. (2008), On Test Collections for Adaptive Information Retrieval, Information Processing and Management (Accessed December 8, 2024)


Created March 17, 2008, Updated February 19, 2017