A Survey on Evaluation of Summarization Methods
Information Processing & Management

Abstract: The ever-increasing volume of textual information on any topic requires compression so that humans can digest it. This implies detecting the most important information and condensing it. These challenges have driven new developments in Natural Language Processing (NLP) and Information Retrieval (IR), such as narrative summarization and evaluation methodologies for narrative extraction. Despite progress in recent years, with several solutions for information extraction and text summarization, generating consistent narrative summaries and evaluating them remain unresolved problems. With regard to evaluation, manual assessment is expensive, subjective, and not applicable in real time or to large collections; moreover, it does not provide reusable benchmarks. Yet commonly used metrics for summary evaluation still entail substantial human effort, since they require comparing candidate summaries with a set of reference summaries. The contributions of this paper are threefold. First, we provide a comprehensive overview of existing metrics for summary evaluation and discuss several limitations of existing evaluation frameworks. Second, we introduce an automatic framework for the evaluation of metrics that requires no human annotation. Finally, we use this framework to evaluate the existing assessment metrics on a Wikipedia data set and a collection of scientific articles. Our findings show that the majority of existing metrics based on vocabulary overlap are not suitable for assessment by comparison with a full text, and we discuss this outcome.
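The "metrics based on vocabulary overlap" mentioned in the abstract (the ROUGE family is the best-known example) score a candidate summary by how many of its word n-grams also appear in a reference text. A minimal sketch of a unigram-overlap recall score is below; the function name and whitespace tokenization are illustrative simplifications, not the survey's actual implementation:

```python
from collections import Counter

def unigram_overlap_recall(candidate: str, reference: str) -> float:
    """ROUGE-1-style recall: fraction of reference unigrams
    that are covered by the candidate summary (clipped counts)."""
    cand_counts = Counter(candidate.lower().split())
    ref_counts = Counter(reference.lower().split())
    total_ref = sum(ref_counts.values())
    if total_ref == 0:
        return 0.0
    # Each reference word is credited at most as often as it occurs
    # in the candidate (count clipping, as in ROUGE/BLEU).
    overlap = sum(min(cand_counts[w], ref_counts[w]) for w in ref_counts)
    return overlap / total_ref

# 4 of the 6 reference tokens are covered -> 0.666...
print(unigram_overlap_recall("the cat sat on the mat",
                             "the cat lay on a mat"))
```

When the reference is a full source text rather than a short human-written summary, this kind of score rewards lexical copying from the source, which is one intuition behind the paper's finding that overlap metrics transfer poorly to full-text comparison.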
Document type: Journal article

https://hal.univ-brest.fr/hal-02130700
Contributor: Emmanuelle Bourge
Submitted on: Thursday, May 16, 2019, 9:23:49 AM
Last modification on: Thursday, October 17, 2019, 8:52:56 AM

Identifiers

  • HAL Id: hal-02130700, version 1

Citation

Liana Ermakova, Jean-Valère Cossu, Josiane Mothe. A survey on evaluation of summarization methods. Information Processing & Management, Elsevier, 2019. ⟨hal-02130700⟩
