Showing 1 - 10 of 292 for the search '"Wang Minghan"'
Author:
Iqbal, Hasan, Wang, Yuxia, Wang, Minghan, Georgiev, Georgi, Geng, Jiahui, Gurevych, Iryna, Nakov, Preslav
The increased use of large language models (LLMs) across a variety of real-world applications calls for automatic tools to check the factual accuracy of their outputs, as LLMs often hallucinate. This is difficult as it requires assessing the factuality…
External link:
http://arxiv.org/abs/2408.11832
Empathy plays a pivotal role in fostering prosocial behavior, often triggered by the sharing of personal experiences through narratives. However, modeling empathy using NLP approaches remains challenging due to its deep interconnection with human int…
External link:
http://arxiv.org/abs/2406.11250
Recent advancements in multimodal large language models (MLLMs) have made significant progress in integrating information across various modalities, yet real-world applications in educational and scientific domains remain challenging. This paper introduces…
External link:
http://arxiv.org/abs/2406.10880
The increased use of large language models (LLMs) across a variety of real-world applications calls for mechanisms to verify the factual accuracy of their outputs. Difficulties lie in assessing the factuality of free-form responses in open domains. A…
External link:
http://arxiv.org/abs/2405.05583
Author:
Lin, Lizhi, Mu, Honglin, Zhai, Zenan, Wang, Minghan, Wang, Yuxia, Wang, Renxi, Gao, Junjie, Zhang, Yixuan, Che, Wanxiang, Baldwin, Timothy, Han, Xudong, Li, Haonan
Generative models are rapidly gaining popularity and being integrated into everyday applications, raising concerns over their safety issues as various vulnerabilities are exposed. Faced with the problem, the field of red teaming is experiencing fast-…
External link:
http://arxiv.org/abs/2404.00629
Simultaneous machine translation (SimulMT) presents a challenging trade-off between translation quality and latency. Recent studies have shown that LLMs can achieve good performance in SimulMT tasks. However, this often comes at the expense of high i…
External link:
http://arxiv.org/abs/2402.10552
Author:
Wang, Yuxia, Wang, Minghan, Manzoor, Muhammad Arslan, Liu, Fei, Georgiev, Georgi, Das, Rocktim Jyoti, Nakov, Preslav
Large language models (LLMs), especially when instruction-tuned for chat, have become part of our daily lives, freeing people from the process of searching, extracting, and integrating information from multiple sources by offering a straightforward a…
External link:
http://arxiv.org/abs/2402.02420
Author:
Guo, Jiaxin, Wang, Minghan, Qiao, Xiaosong, Wei, Daimeng, Shang, Hengchao, Li, Zongyao, Yu, Zhengzhe, Li, Yinglu, Su, Chang, Zhang, Min, Tao, Shimin, Yang, Hao
Error correction techniques have been used to refine the output sentences from automatic speech recognition (ASR) models and achieve a lower word error rate (WER). Previous works usually adopt end-to-end models and have a strong dependency on Pseudo Pai…
External link:
http://arxiv.org/abs/2401.05689
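The abstract above measures ASR quality by word error rate (WER). As a reference point for that metric (not code from the paper), WER is the word-level Levenshtein edit distance between hypothesis and reference, normalized by the reference length; the example strings below are hypothetical:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[-1][-1] / len(ref)

# 2 deleted words over a 6-word reference -> WER of 2/6
print(wer("the cat sat on the mat", "the cat sat mat"))
```

An error-correction model lowers WER when its corrected hypothesis has a smaller edit distance to the reference than the raw ASR output.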
Recent years have seen the rise of large language models (LLMs), where practitioners use task-specific prompts; this was shown to be effective for a variety of tasks. However, when applied to semantic textual similarity (STS) and natural language inference…
External link:
http://arxiv.org/abs/2309.08969
Author:
Wang, Minghan, Zhao, Jinming, Vu, Thuy-Trang, Shiri, Fatemeh, Shareghi, Ehsan, Haffari, Gholamreza
Real-world simultaneous machine translation (SimulMT) systems face more challenges than just the quality-latency trade-off. They also need to address issues related to robustness with noisy input, processing long contexts, and flexibility for knowledge…
External link:
http://arxiv.org/abs/2309.06706