Author: |
Ming, Lingfeng, Zeng, Bo, Lyu, Chenyang, Shi, Tianqi, Zhao, Yu, Yang, Xue, Liu, Yefeng, Wang, Yiyu, Xu, Linlong, Liu, Yangyang, Zhao, Xiaohu, Wang, Hao, Liu, Heng, Zhou, Hao, Yin, Huifeng, Shang, Zifu, Li, Haijun, Wang, Longyue, Luo, Weihua, Zhang, Kaifu |
Year of publication: |
2024 |
Subject: |
|
Document type: |
Working Paper |
Description: |
Large Language Models (LLMs) have achieved remarkable progress in recent years; however, their excellent performance remains largely limited to major world languages, primarily English. Many LLMs continue to struggle with multilingual tasks, especially for low-resource languages. To address this issue, we introduce Marco-LLM: Massive multilingual training for cross-lingual enhancement LLM. We have collected a substantial amount of multilingual data for several low-resource languages and conducted extensive continual pre-training using the Qwen2 models, resulting in a multilingual LLM named Marco-LLM. Through comprehensive evaluations on various multilingual benchmarks, including MMMLU, AGIEval, Belebele, Flores-200, XCOPA, and many others, Marco-LLM has demonstrated substantial improvements over state-of-the-art LLMs. Furthermore, Marco-LLM achieves substantial gains on any-to-any machine translation tasks, showing the effectiveness of our multilingual training. Marco-LLM is a pioneering multilingual LLM designed not only to perform exceptionally well on multilingual tasks, including low-resource languages, but also to maintain strong performance in English and other major languages, closing the performance gap between high- and low-resource language capabilities. By bridging languages, this effort demonstrates our dedication to ensuring LLMs work accurately across a wide range of languages. |
Database: |
arXiv |
External link: |
|