Mind the Gap: Dangers of Divorcing Evaluations of Summary Content from Linguistic Quality
Hoa T. Dang, John M. Conroy
In this paper, we analyze the state of current human and automatic evaluation of topic-focused summarization in the Document Understanding Conference main task for 2005-2007. The analyses show that while ROUGE has very strong correlation with responsiveness for both human and automatic summaries, there is a significant gap in responsiveness between humans and systems which is not accounted for by the ROUGE metrics. In addition to teasing out gaps in the current automatic evaluation, we propose a method to maximize the strength of current automatic evaluations by using the method of canonical correlation. We apply this new evaluation method, which we call ROSE (ROUGE Optimal Summarization Evaluation), to find the optimal linear combination of ROUGE scores to maximize correlation with human responsiveness.
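The core idea of ROSE, as the abstract describes it, is to find a linear combination of ROUGE scores whose correlation with human responsiveness is maximal. A minimal sketch of that idea follows, using entirely hypothetical per-system scores: with a single one-dimensional target, the canonical-correlation problem reduces to ordinary least squares, since the fitted linear combination of the predictors maximizes Pearson correlation with the target. This is an illustration of the underlying technique, not the paper's actual procedure or data.

```python
import numpy as np

# Hypothetical per-system scores; columns stand in for ROUGE variants
# (e.g. ROUGE-1, ROUGE-2, ROUGE-SU4), rows for summarization systems.
rouge = np.array([
    [0.38, 0.09, 0.13],
    [0.36, 0.08, 0.12],
    [0.40, 0.10, 0.14],
    [0.33, 0.07, 0.11],
    [0.42, 0.11, 0.15],
    [0.35, 0.08, 0.12],
])
# Hypothetical mean human responsiveness ratings for the same systems.
responsiveness = np.array([3.1, 2.8, 3.4, 2.5, 3.6, 2.9])

# Least-squares fit: the resulting weighted combination of ROUGE scores
# has the highest possible Pearson correlation with responsiveness
# among all linear combinations of these columns.
X = np.column_stack([rouge, np.ones(len(rouge))])  # add an intercept
weights, *_ = np.linalg.lstsq(X, responsiveness, rcond=None)
combined = X @ weights

# Correlation between the combined ROUGE score and responsiveness.
r = np.corrcoef(combined, responsiveness)[0, 1]
print(weights[:-1])  # learned ROUGE weights
print(round(r, 3))   # Pearson correlation of the combined score
```

In practice the weights would be fit on one set of summarizers and validated on held-out systems, since a combination tuned on few data points can overfit.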
COLING 2008: Proceedings of the 22nd International Conference on Computational Linguistics
Dang, H.T. and Conroy, J.M. (2008), Mind the Gap: Dangers of Divorcing Evaluations of Summary Content from Linguistic Quality, COLING 2008: Proceedings of the 22nd International Conference on Computational Linguistics, Manchester, GB, [online], https://tsapps.nist.gov/publication/get_pdf.cfm?pub_id=152186 (Accessed November 30, 2023)