Traditional editorial effectiveness measures, such as nDCG, remain standard for Web search evaluation. Unfortunately, these traditional measures can inappropriately reward redundant information and can fail to reflect the broad range of user needs that can underlie a Web query. To address these deficiencies, several researchers have recently proposed effectiveness measures for novelty and diversity. Many of these measures are based on simple cascade models of user behavior, which operate by considering the relationship between successive elements of a result list. The properties of these measures are still poorly understood, and it is not clear from prior research that they work as intended. In this paper we examine the properties and performance of cascade measures with the goal of validating them as tools for measuring effectiveness. We explore their commonalities and differences, placing them in a unified framework; we discuss their theoretical difficulties and limitations, and compare the measures experimentally, contrasting them against traditional measures and against other approaches to measuring novelty. Data collected by the TREC 2009 Web Track is used as the basis for our experimental comparison. Our results indicate that these measures reward systems that achieve a balance between novelty and overall precision in their result lists, as intended. Nonetheless, other measures provide insights not captured by the cascade measures, and we suggest that future evaluation efforts continue to report a variety of measures.
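As context for the cascade models discussed above, the following is a minimal sketch of one well-known cascade-style measure, Expected Reciprocal Rank (ERR). Under the cascade model, a user scans the result list top-down and stops at each document with a probability derived from its relevance grade; ERR is the expected reciprocal of the stopping rank. The grade scale and mapping shown here are illustrative assumptions, not the paper's exact formulation.

```python
def err(grades, max_grade=4):
    """Expected Reciprocal Rank, a simple cascade measure.

    grades: graded relevance judgments for the ranked list, top first.
    Each grade g maps to a stopping probability (2^g - 1) / 2^max_grade.
    """
    p_reach = 1.0  # probability the user reaches the current rank
    score = 0.0
    for rank, g in enumerate(grades, start=1):
        r = (2 ** g - 1) / 2 ** max_grade  # stop probability at this doc
        score += p_reach * r / rank        # contribute 1/rank if user stops here
        p_reach *= 1.0 - r                 # otherwise the user continues
    return score
```

Because the stopping probability at each rank depends on the documents seen earlier, a highly relevant document near the top sharply discounts the contribution of everything below it; this is the mechanism by which cascade measures penalize redundancy.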
Proceedings Title: Proceedings of the Fourth ACM International Conference on Web Search and Data Mining (WSDM 2011)
Conference Dates: February 9-12, 2011
Conference Location: Hong Kong
Conference Title: The Fourth ACM International Conference on Web Search and Data Mining (WSDM 2011)
Pub Type: Conferences
Keywords: information retrieval, search evaluation, effectiveness measures, diversity ranking