Showing 1 - 10 of 806 results for the query: "BAI, Xuefeng"
With the great advances in large language models (LLMs), adversarial attacks against LLMs have recently attracted increasing attention. We found that existing adversarial attack methodologies exhibit limited transferability and are notably …
External link:
http://arxiv.org/abs/2408.13985
Author:
Chen, Andong, Lou, Lianzhang, Chen, Kehai, Bai, Xuefeng, Xiang, Yang, Yang, Muyun, Zhao, Tiejun, Zhang, Min
Large language models (LLMs) have shown remarkable performance on general translation tasks. However, there is increasing demand for high-quality translations that are not only adequate but also fluent and elegant. To assess the extent to which current LLMs …
External link:
http://arxiv.org/abs/2408.09945
Author:
Chen, Yulong, Liu, Yang, Yan, Jianhao, Bai, Xuefeng, Zhong, Ming, Yang, Yinghao, Yang, Ziyi, Zhu, Chenguang, Zhang, Yue
The impressive performance of Large Language Models (LLMs) has consistently surpassed numerous human-designed benchmarks, presenting new challenges in assessing the shortcomings of LLMs. Designing tasks and finding LLMs' limitations are becoming increasingly …
External link:
http://arxiv.org/abs/2408.08978
Author:
Jiang, Ruili, Chen, Kehai, Bai, Xuefeng, He, Zhixuan, Li, Juntao, Yang, Muyun, Zhao, Tiejun, Nie, Liqiang, Zhang, Min
The recent surge of versatile large language models (LLMs) largely depends on aligning increasingly capable foundation models with human intentions through preference learning, enhancing LLMs with excellent applicability and effectiveness in a wide range …
External link:
http://arxiv.org/abs/2406.11191
Author:
Chen, Andong, Lou, Lianzhang, Chen, Kehai, Bai, Xuefeng, Xiang, Yang, Yang, Muyun, Zhao, Tiejun, Zhang, Min
Recently, large language models (LLMs) enhanced by self-reflection have achieved promising performance on machine translation. The key idea is to guide LLMs to generate translations with human-like feedback. However, existing self-reflection methods lack …
External link:
http://arxiv.org/abs/2406.07232
Large language models (LLMs) have showcased impressive multilingual machine translation ability. However, unlike encoder-decoder-style models, decoder-only LLMs lack an explicit alignment between source and target contexts. Analyzing contribution scores … (see the sketch after this record)
External link:
http://arxiv.org/abs/2406.07036
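The record above is cut off at "contribution scores", so the paper's actual method is not recoverable from this snippet. As a rough, hypothetical illustration only, the sketch below computes gradient-based contribution scores for a decoder-only LM, assuming a HuggingFace-style interface; the function name token_contributions and the whole setup are assumptions, not the authors' code.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def token_contributions(model, tokenizer, prompt):
    # Hypothetical sketch: plain gradient saliency, not necessarily the
    # paper's contribution measure. Returns one score per prompt token.
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    # Take gradients w.r.t. input embeddings rather than token ids.
    embeds = model.get_input_embeddings()(ids).detach().requires_grad_(True)
    # Next-token logits after the full prompt.
    logits = model(inputs_embeds=embeds).logits[0, -1]
    # Backpropagate the top candidate's logit to the embeddings.
    logits[logits.argmax()].backward()
    # L2 norm of each token's embedding gradient = its contribution score.
    return embeds.grad.norm(dim=-1).squeeze(0)

Comparing how much of this score mass falls on source-side versus target-side tokens would be one way to probe the missing source-target alignment the abstract mentions.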
Constituency parsing is a fundamental yet unsolved natural language processing task. In this paper, we explore the potential of recent large language models (LLMs), which have exhibited remarkable performance across various domains and tasks, to tackle …
External link:
http://arxiv.org/abs/2310.19462
Author:
Chen, Yulong, Zhang, Huajian, Zhou, Yijie, Bai, Xuefeng, Wang, Yueguan, Zhong, Ming, Yan, Jianhao, Li, Yafu, Li, Judy, Zhu, Michael, Zhang, Yue
Most existing cross-lingual summarization (CLS) work constructs CLS corpora by simply and directly translating pre-annotated summaries from one language to another, which can contain errors from both the summarization and translation processes. To address …
External link:
http://arxiv.org/abs/2307.04018
Author:
Wang, Cunxiang, Xu, Zhikun, Guo, Qipeng, Hu, Xiangkun, Bai, Xuefeng, Zhang, Zheng, Zhang, Yue
The Open-Domain Question Answering (ODQA) task involves retrieving and subsequently generating answers from fine-grained relevant passages within a database. Current systems leverage Pretrained Language Models (PLMs) to model the relationship between …
External link:
http://arxiv.org/abs/2305.17050
Representation forgetting refers to the drift of contextualized representations during continual training. Intuitively, representation forgetting can influence the general knowledge stored in pre-trained language models (LMs), but the concrete effects … (see the sketch after this record)
External link:
http://arxiv.org/abs/2305.05968
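The last record defines representation forgetting as drift of contextualized representations during continual training. As an illustration only, here is a minimal sketch of one way such drift could be quantified, assuming two HuggingFace-style checkpoints of the same LM (before and after continual training); the function names and the probe-sentence setup are assumptions, not the paper's metric.

import torch
from transformers import AutoModel, AutoTokenizer

def sentence_embeddings(model, tokenizer, sentences):
    # Mean-pool the final hidden states into one vector per sentence.
    batch = tokenizer(sentences, padding=True, truncation=True,
                      return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state         # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1).float()  # (B, T, 1)
    return (hidden * mask).sum(1) / mask.sum(1)

def representation_drift(model_before, model_after, tokenizer, sentences):
    # 1 - cosine similarity over a fixed probe set: 0 means no drift,
    # larger values mean stronger representation forgetting.
    e_before = sentence_embeddings(model_before, tokenizer, sentences)
    e_after = sentence_embeddings(model_after, tokenizer, sentences)
    cos = torch.nn.functional.cosine_similarity(e_before, e_after, dim=-1)
    return (1.0 - cos).mean().item()

Holding the probe sentences fixed and comparing checkpoints over time would give a simple drift curve for the continual-training run.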