Automatic sampling with heterogeneous corpora for grammatical error correction
Authors: | Shichang Zhu, Jianjian Liu, Ying Li, Zhengtao Yu |
Language: | English |
Year of publication: | 2024 |
Subject: | |
Source: | Complex & Intelligent Systems, Vol 11, Iss 1, Pp 1-11 (2024) |
Document type: | article |
ISSN: | 2199-4536; 2198-6053 |
DOI: | 10.1007/s40747-024-01653-3 |
Description: | Abstract Thanks to the strong representation capability of pre-trained language models, supervised grammatical error correction has achieved promising performance. However, traditional model training depends heavily on large-scale, similarly distributed samples; model performance drops sharply once the training and testing data distributions are inconsistent. To address this issue, we propose an automatic sampling approach that effectively selects high-quality samples from different corpora and filters out irrelevant or harmful ones. Concretely, we first provide a detailed analysis of the error-type and sentence-length distributions of all datasets. Second, our corpus-weighting approach automatically assigns each sample a weight based on the analysis results, emphasizing beneficial samples and ignoring noisy ones. Finally, we enhance typical Seq2Seq and Seq2Edit grammatical error correction models with pre-trained language models and design a model-ensemble algorithm that integrates the advantages of heterogeneous models and weighted samples. Experiments on the benchmark datasets demonstrate that proper utilization of different corpora is extremely helpful in improving the accuracy of grammatical error correction. The detailed analysis provides further insight into the effect of different corpus-weighting strategies. |
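The corpus-weighting idea in the abstract (up-weighting training samples whose characteristics match the target data, down-weighting mismatched ones) can be sketched as a simple importance-style weighting over sentence-length buckets. This is an illustrative assumption, not the paper's actual algorithm: the feature (length bucket), bucket width, and weighting formula are all hypothetical stand-ins for the paper's analysis of error-type and sentence-length distributions.

```python
from collections import Counter

def length_bucket(sentence, width=5):
    # Bucket sentences by token count so distributions are comparable.
    return min(len(sentence.split()) // width, 10)

def distribution(corpus, width=5):
    # Empirical probability of each length bucket in a corpus.
    counts = Counter(length_bucket(s, width) for s in corpus)
    total = sum(counts.values())
    return {b: c / total for b, c in counts.items()}

def sample_weights(train_corpus, target_corpus, width=5, floor=1e-3):
    # Importance-style weights: up-weight training samples whose length
    # bucket is common in the target data, down-weight the rest.
    p_target = distribution(target_corpus, width)
    p_train = distribution(train_corpus, width)
    weights = []
    for s in train_corpus:
        b = length_bucket(s, width)
        weights.append(p_target.get(b, floor) / p_train.get(b, floor))
    # Normalize so the average weight is 1 (keeps the loss scale stable).
    mean = sum(weights) / len(weights)
    return [w / mean for w in weights]
```

In a training loop, each sample's loss would simply be multiplied by its weight, so samples resembling the target distribution dominate the gradient while noisy or mismatched samples contribute little.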
Database: | Directory of Open Access Journals |
External link: |