Title |
Some Examinations of Intrinsic Methods for Summary Evaluation Based on the Text Summarization Challenge (TSC) |
Authors |
Hidetsugu Nanba (Precision and Intelligence Lab., Tokyo Institute of Technology), Manabu Okumura (Precision and Intelligence Lab., Tokyo Institute of Technology)
Session |
EO3: Written Systems Evaluation |
Abstract |
Computer-produced summaries have traditionally been evaluated by comparing them with human-produced summaries using the F-measure. However, the F-measure is not appropriate when alternative sentences are acceptable in a human-produced extract. In this paper, we examine several evaluation methods devised to overcome this problem, including utility-based evaluation. By assigning scores to moderately important sentences that do not appear in the human-produced extract, utility-based evaluation can resolve the problem. However, the method requires considerable human effort to prepare the evaluation data. We therefore first propose a pseudo-utility-based evaluation that uses human-produced extracts at different compression ratios. To evaluate its effectiveness, we compare our method with the F-measure using data from the Text Summarization Challenge (TSC) and show that pseudo-utility-based evaluation can resolve the problem. Next, we focus on content-based evaluation. Instead of measuring the proportion of sentences that exactly match those in the human-produced extract, this method evaluates extracts by comparing their content words with those of human-produced extracts. Although the method has been reported to be effective in resolving the problem, it has not been examined in the context of comparing two extracts produced by different systems. We evaluated computer-produced summaries using content-based evaluation and compared the results with a subjective evaluation. We found that the judgments of the content-based measure matched those of the subjective evaluation in 93% of cases when the gap in content-based scores between two summaries was more than 0.2.
Keywords |
Pseudo-utility-based evaluation, F-measure, Content-based evaluation, Automatic text summarization, Intrinsic method, TSC, NTCIR
Full Paper |
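Illustration
The following is a minimal sketch of two of the intrinsic measures named in the abstract: the sentence-level F-measure and a content-based score. The formulations are assumptions made for illustration only (exact-match F-measure over extracted sentence IDs, cosine similarity over content-word frequency vectors) and are not taken from the paper's own definitions; the function names and example data are hypothetical.

# Sketch of two intrinsic summary-evaluation measures (assumed formulations,
# not the paper's exact definitions).
from collections import Counter
from math import sqrt

def sentence_f_measure(system_ids, human_ids):
    """F-measure over extracted sentence IDs (exact-match evaluation)."""
    system, human = set(system_ids), set(human_ids)
    if not system or not human:
        return 0.0
    overlap = len(system & human)
    precision = overlap / len(system)
    recall = overlap / len(human)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def content_based_score(system_words, human_words):
    """Cosine similarity between content-word frequency vectors."""
    sys_counts, hum_counts = Counter(system_words), Counter(human_words)
    dot = sum(sys_counts[w] * hum_counts[w] for w in sys_counts)
    norm = (sqrt(sum(c * c for c in sys_counts.values()))
            * sqrt(sum(c * c for c in hum_counts.values())))
    return dot / norm if norm else 0.0

if __name__ == "__main__":
    # Hypothetical extracts: the system selected sentences 1, 3, 5; a human selected 1, 2, 5.
    print(sentence_f_measure([1, 3, 5], [1, 2, 5]))                # 0.667
    print(content_based_score(["summary", "evaluation", "method"],
                              ["evaluation", "method", "score"]))  # 0.667

Under these assumptions, a pair of summaries whose content-based scores differ by more than 0.2 would, per the abstract, agree with a subjective ranking in about 93% of cases.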