Author(s): Hoa T. Dang; John M. Conroy

Title: Mind the Gap: Dangers of Divorcing Evaluations of Summary Content from Linguistic Quality

Published: August 18, 2008

Abstract: In this paper, we analyze the state of current human and automatic evaluation of topic-focused summarization in the Document Understanding Conference main task for 2005-2007. The analyses show that while ROUGE has very strong correlation with responsiveness for both human and automatic summaries, there is a significant gap in responsiveness between humans and systems which is not accounted for by the ROUGE metrics. In addition to teasing out gaps in the current automatic evaluation, we propose a method to maximize the strength of current automatic evaluations by using the method of canonical correlation. We apply this new evaluation method, which we call ROSE (ROUGE Optimal Summarization Evaluation), to find the optimal linear combination of ROUGE scores to maximize correlation with human responsiveness.

Proceedings: COLING 2008: Proceedings of the 22nd International Conference on Computational Linguistics

Dates: August 18-22, 2008

Research Areas: Data and Informatics

PDF version: 403 KB
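The core idea behind ROSE — a linear combination of ROUGE scores weighted to maximize correlation with human responsiveness — can be sketched in a few lines. With a one-dimensional target, canonical correlation analysis reduces to least-squares regression: the fitted weights give the linear combination whose Pearson correlation with the target is maximal. The sketch below uses small, entirely synthetic score arrays for illustration; the actual ROUGE variants, data, and formulation used by the authors are in the paper.

```python
import numpy as np

# Synthetic per-system scores for illustration only: columns stand in for
# three ROUGE variants (e.g. ROUGE-1, ROUGE-2, ROUGE-SU4).
rouge = np.array([
    [0.38, 0.09, 0.13],
    [0.35, 0.08, 0.12],
    [0.41, 0.11, 0.15],
    [0.30, 0.06, 0.10],
    [0.44, 0.12, 0.16],
])
# Synthetic human responsiveness scores for the same five systems.
responsiveness = np.array([3.1, 2.8, 3.5, 2.2, 3.8])

def optimal_weights(X, y):
    """Least-squares weights for the linear combination of columns of X
    that maximizes Pearson correlation with y (CCA with a 1-D target)."""
    Xc = X - X.mean(axis=0)   # center features
    yc = y - y.mean()          # center target
    w, *_ = np.linalg.lstsq(Xc, yc, rcond=None)
    return w

w = optimal_weights(rouge, responsiveness)
combined = rouge @ w                              # ROSE-style combined score
corr = np.corrcoef(combined, responsiveness)[0, 1]  # correlation achieved
```

Because the target is scalar, the regression fit and the first canonical correlation coincide, so no dedicated CCA routine is needed for this special case.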