Readability Evaluation Metrics for Indonesian Automatic Text Summarization: A Systematic Review.

Author: Maylawati, Dian Sa'adillah; Kumar, Yogan Jaya; Kasmin, Fauziah Binti; Ramdhani, Muhammad Ali
Source: Journal of Engineering Science & Technology Review; 2024, Vol. 17, Issue 5, p. 199-210, 12 p.
Abstract: Producing a readable summary from an automatic text summarization system remains a major challenge, especially for the Indonesian language. The readability of the generated summary is essential for producing a high-quality summary that is easy to understand. Therefore, this research aims to compile and investigate evaluation metrics for the readability of Indonesian automatic text summarization results. This research followed the PRISMA 2020 guidelines to conduct a systematic review. We searched Elsevier (SCOPUS), Web of Science, Google Scholar, the Science and Technology Index (SINTA), IEEE Xplore, arXiv, and forward and backward references for studies on readability evaluation for automatic text summarization published in the five years up to July 2022. We found that readability is rarely evaluated comprehensively in automatic text summarization studies, especially for Indonesian text. Most studies (94.23% of the 52 reviewed) use only co-selection-based analysis. However, co-selection-based analysis alone is not adequate to evaluate readability; it must be complemented by content-based analysis and human evaluation. Therefore, this study contributes a conceptual design of readability evaluation metrics based on a systematic review of Indonesian automatic text summarization and of readability evaluation for Indonesian text. This research provides a foundation for future studies to build upon, offering a clear direction for developing and evaluating readability metrics in automatic text summarization, not only for Indonesian but also for other languages facing similar challenges. [ABSTRACT FROM AUTHOR]
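For context, co-selection-based analysis (the evaluation approach the abstract reports as dominant) typically compares which source sentences the system summary and a human reference summary select, and reports precision, recall, and an F-measure over that overlap. The following minimal Python sketch illustrates the idea only; the function name and data are hypothetical and not taken from the reviewed studies.

    # Hypothetical sketch of co-selection-based evaluation: sentence-level
    # precision, recall, and F1 between a system summary and a reference
    # summary, with sentences identified by their index in the source text.
    def co_selection_scores(system_sentences, reference_sentences):
        system = set(system_sentences)
        reference = set(reference_sentences)
        overlap = len(system & reference)  # sentences chosen by both
        precision = overlap / len(system) if system else 0.0
        recall = overlap / len(reference) if reference else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        return precision, recall, f1

    # Example usage with invented sentence indices:
    print(co_selection_scores([0, 2, 5], [0, 1, 2]))  # ~ (0.67, 0.67, 0.67)

As the abstract notes, such overlap-based scores say nothing about how readable the generated text is, which is why the study argues for adding content-based analysis and human evaluation.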
Database: Complementary Index