Showing 1 - 10 of 1,521 for search: '"Liu Xuebo"'
Author:
Luo Changli, Li Wuyuan, Yang Bo, Su Youwu, Li Yang, Khasanova Shakhboz, Mao Wang, Liu Xuebo, Yan Weiwei, Li Zongqiang
Published in:
Nuclear Technology and Radiation Protection, Vol 38, Iss 1, Pp 39-47 (2023)
Heavy-ion radiotherapy is currently recognized as the most advanced particle therapy method and is being vigorously promoted and applied worldwide. This method can rapidly generate radiation and induce radioactivity during treatment. However, the i…
External link:
https://doaj.org/article/43585450f040440d80eef84c91c2a961
Large language models (LLMs) have achieved reasonable quality improvements in machine translation (MT). However, most current research on MT-LLMs still faces significant challenges in maintaining translation consistency and accuracy when processing e…
External link:
http://arxiv.org/abs/2410.08143
Despite its outstanding performance, Neural Architecture Search (NAS) is criticized for massive computation. Recently, Zero-shot NAS has emerged as a promising approach by exploiting Zero-cost (ZC) proxies, which markedly reduce computational dem… (an illustrative ZC-proxy sketch follows the link below)
External link:
http://arxiv.org/abs/2410.04808
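To make the ZC-proxy idea in the entry above concrete, here is a minimal sketch of one simple proxy, a gradient-norm score, in PyTorch. This illustrates the general class of zero-cost proxies, not the specific proxies studied in the linked paper; the model choice, batch shapes, and scoring rule are all assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def grad_norm_score(model: nn.Module, inputs: torch.Tensor,
                    targets: torch.Tensor) -> float:
    # Score an *untrained* architecture by the gradient norm of its loss
    # on a single minibatch; higher scores loosely track trainability,
    # so candidates can be ranked without training any of them.
    model.zero_grad()
    loss = F.cross_entropy(model(inputs), targets)
    loss.backward()
    return sum(p.grad.norm().item() for p in model.parameters()
               if p.grad is not None)

# Usage: rank candidate architectures in seconds instead of training each.
x = torch.randn(8, 3 * 32 * 32)          # dummy flattened image batch
y = torch.randint(0, 10, (8,))           # dummy 10-class labels
candidates = [
    nn.Linear(3 * 32 * 32, 10),
    nn.Sequential(nn.Linear(3 * 32 * 32, 64), nn.ReLU(), nn.Linear(64, 10)),
]
best = max(candidates, key=lambda m: grad_norm_score(m, x, y))
```

The appeal of such proxies is that a single forward/backward pass replaces full training, which is what makes exhaustive scoring of large search spaces tractable.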
Large language models (LLMs) exhibit remarkable performance across diverse tasks, indicating their potential for expansion into large speech-text models (LSMs) by integrating speech capabilities. Although unified speech-text pre-training and multimod…
External link:
http://arxiv.org/abs/2410.03798
With instruction tuning, Large Language Models (LLMs) can enhance their ability to adhere to commands. Diverging from most works focusing on data mixing, our study concentrates on enhancing the model's capabilities from the perspective of data sampli…
External link:
http://arxiv.org/abs/2410.03077
Knowledge distillation (KD) is a technique that compresses large teacher models by training smaller student models to mimic them. The success of KD in auto-regressive language models mainly relies on Reverse KL for mode-seeking and student-generated… (an illustrative Reverse-KL loss sketch follows the link below)
External link:
http://arxiv.org/abs/2409.12512
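To ground the Reverse-KL mention in the entry above, here is a minimal sketch of a mode-seeking distillation loss in PyTorch. The loss form KL(student || teacher) is standard; the tensor shapes and the omission of masking and temperature scaling are simplifying assumptions, not details taken from the linked paper.

```python
import torch
import torch.nn.functional as F

def reverse_kl_loss(student_logits: torch.Tensor,
                    teacher_logits: torch.Tensor) -> torch.Tensor:
    # KL(student || teacher), averaged over token positions.
    # Reverse KL is mode-seeking: the student is penalized for putting
    # mass where the teacher assigns low probability, so it concentrates
    # on the teacher's dominant modes rather than covering all of them.
    s_logp = F.log_softmax(student_logits, dim=-1)
    t_logp = F.log_softmax(teacher_logits, dim=-1)
    s_p = s_logp.exp()
    # sum_v p_s(v) * (log p_s(v) - log p_t(v)) at each position
    return (s_p * (s_logp - t_logp)).sum(-1).mean()

# Usage with dummy logits of shape [batch, seq_len, vocab]:
student = torch.randn(2, 5, 100, requires_grad=True)
teacher = torch.randn(2, 5, 100)
loss = reverse_kl_loss(student, teacher)
loss.backward()
```

Forward KL (the usual cross-entropy-style objective) would instead be mean-seeking, spreading the student's mass across all teacher modes; the choice between the two is exactly what the mode-seeking framing refers to.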
Large language models (LLMs) have exhibited remarkable performance in various natural language processing tasks. Techniques like instruction tuning have effectively enhanced the proficiency of LLMs in the downstream task of machine translation. Howev…
External link:
http://arxiv.org/abs/2406.08434
The efficacy of a large language model (LLM) generated text detector depends substantially on the availability of sizable training data. White-box zero-shot detectors, which require no such data, are nonetheless limited by the accessibility of the s… (an illustrative zero-shot detection score follows the link below)
External link:
http://arxiv.org/abs/2405.04286
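For orientation, below is a minimal sketch of the kind of white-box zero-shot detector the entry above refers to: score a text by its average per-token log-likelihood under an accessible source model and threshold the score. The choice of GPT-2, the statistic, and the calibration step are illustrative assumptions, not the linked paper's method.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

@torch.no_grad()
def avg_log_likelihood(text: str) -> float:
    # Mean per-token log-probability of the text under the scoring model.
    # Machine-generated text tends to look more "expected" to the model
    # that (or a model similar to the one that) produced it, so a simple
    # threshold on this score can separate the two classes.
    ids = tok(text, return_tensors="pt").input_ids
    logits = lm(ids).logits[:, :-1]                 # predict next token
    logp = logits.log_softmax(-1)
    token_logp = logp.gather(-1, ids[:, 1:, None]).squeeze(-1)
    return token_logp.mean().item()

# Usage: flag text as machine-generated if its score exceeds a threshold
# calibrated on a small held-out sample of known human/machine text.
score = avg_log_likelihood("The quick brown fox jumps over the lazy dog.")
```

The "white-box" qualifier matters here: the detector needs the source model's logits, which is exactly the accessibility limitation the snippet alludes to.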
Author:
Ma, Xinyu, Liu, Xuebo, Wong, Derek F., Rao, Jun, Li, Bei, Ding, Liang, Chao, Lidia S., Tao, Dacheng, Zhang, Min
Multimodal machine translation (MMT) is a challenging task that seeks to improve translation quality by incorporating visual information. However, recent studies have indicated that the visual information provided by existing MMT datasets is insuffic…
External link:
http://arxiv.org/abs/2404.18413
Large language models have been widely adopted in natural language processing, yet they face the challenge of generating unreliable content. Recent works aim to reduce misinformation and hallucinations by resorting to attribution as a means to provid…
External link:
http://arxiv.org/abs/2403.18381