Author:
ZHANG Xiaoyan, PANG Lei, DU Xiaofeng, LU Tianbo, XIA Yamei
Language:
Chinese
Year of publication:
2024
Source:
Tongxin xuebao, Vol 45, Pp 65-72 (2024)
Document type:
article
ISSN:
1000-436X
DOI:
10.11959/j.issn.1000-436x.2024066
Description:
To enhance the performance of neural machine translation (NMT) and reduce the detrimental impact of high uncertainty in monolingual data during self-training, a self-training NMT model based on priority sampling was proposed. First, syntactic dependency trees were constructed and the importance of monolingual tokens was assessed using grammar dependency analysis. Next, a monolingual lexicon was built, and priority was defined in terms of token importance and uncertainty. Finally, sentence-level priorities were computed and sampling was carried out according to these priorities, generating a synthetic parallel dataset for training the student NMT model. Experimental results on a large-scale subset of the WMT English-to-German dataset demonstrate that the proposed model effectively improves translation performance and mitigates the impact of high uncertainty on the model.
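The priority-sampling step described above can be sketched as follows. This is a minimal illustrative assumption, not the paper's exact formulation: importance scores are assumed to come from dependency analysis and uncertainty scores from the teacher model, and the combination rule (importance minus weighted uncertainty, normalized with a softmax) is a plausible stand-in for the authors' priority definition.

```python
import math
import random

# Hypothetical monolingual sentences with (importance, uncertainty) scores.
# In the paper, importance derives from grammar dependency analysis of a
# syntactic dependency tree and uncertainty from the teacher NMT model;
# the numeric values here are made up for illustration.
sentences = [
    ("the cat sat on the mat", 0.9, 0.2),
    ("colorless green ideas sleep furiously", 0.4, 0.8),
    ("stock prices rose sharply on Monday", 0.8, 0.3),
    ("asdf qwer zxcv", 0.1, 0.9),
]

def priority(importance: float, uncertainty: float, lam: float = 1.0) -> float:
    """Higher importance and lower uncertainty yield higher priority.
    The linear combination with weight `lam` is an assumed form."""
    return importance - lam * uncertainty

def sample_monolingual(data, k: int, seed: int = 0):
    """Sample k sentences with probability proportional to softmax(priority).

    The sampled sentences would then be translated by the teacher model
    to build the synthetic parallel dataset for the student NMT model.
    """
    rng = random.Random(seed)
    prios = [priority(imp, unc) for _, imp, unc in data]
    z = sum(math.exp(p) for p in prios)
    weights = [math.exp(p) / z for p in prios]
    return rng.choices([s for s, _, _ in data], weights=weights, k=k)

selected = sample_monolingual(sentences, k=2)
print(selected)
```

High-importance, low-uncertainty sentences dominate the sampling weights, so noisy monolingual text is drawn less often, which is the intended mitigation of high uncertainty during self-training.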
Database:
Directory of Open Access Journals