Showing 1 - 10 of 197 results for search: "Cao, Bowen"
Author:
Cai, Deng, Li, Huayang, Fu, Tingchen, Li, Siheng, Xu, Weiwen, Li, Shuaiyi, Cao, Bowen, Zhang, Zhisong, Huang, Xinting, Cui, Leyang, Wang, Yan, Liu, Lemao, Watanabe, Taro, Shi, Shuming
Despite the general capabilities of pre-trained large language models (LLMs), they still need further adaptation to better serve practical applications. In this paper, we demonstrate the interchangeability of three popular and distinct adaptation tools…
External link:
http://arxiv.org/abs/2406.16377
The performance of large language models (LLMs) is acutely sensitive to the phrasing of prompts, which raises significant concerns about their reliability in real-world scenarios. Existing studies often divide prompts into task-level instructions and…
External link:
http://arxiv.org/abs/2406.10248
Multi-modal Large Language Models (MLLMs) demonstrate remarkable success across various vision-language tasks. However, they suffer from visual hallucination, where the generated responses diverge from the provided image. Are MLLMs oblivious to the a…
External link:
http://arxiv.org/abs/2403.14401
Standard language models generate text by selecting tokens from a fixed, finite, and standalone vocabulary. We introduce a novel method that selects context-aware phrases from a collection of supporting documents. One of the most significant challenges…
External link:
http://arxiv.org/abs/2402.17532
Spoken language understanding (SLU) is a fundamental task in task-oriented dialogue systems. However, the inevitable errors from automatic speech recognition (ASR) usually impair the understanding performance and lead to error propagation. Although…
External link:
http://arxiv.org/abs/2311.11375
The goal of speech enhancement (SE) is to eliminate the background interference from the noisy speech signal. Generative models such as diffusion models (DM) have been applied to the task of SE because of better generalization in unseen noisy scenes.
External link:
http://arxiv.org/abs/2309.01212
Author:
Chen, Nuo, Shou, Linjun, Gong, Ming, Pei, Jian, Cao, Bowen, Chang, Jianhui, Jiang, Daxin, Li, Jia
Published in:
ACL 2023
Currently, learning better unsupervised sentence representations is the pursuit of many natural language processing communities. Lots of approaches based on pre-trained language models (PLMs) and contrastive learning have achieved promising results on…
External link:
http://arxiv.org/abs/2305.06154
Learning representations for graph-structured data is essential for graph analytical tasks. While remarkable progress has been made on static graphs, research on temporal graphs is still in its early stages. The bottleneck of the temporal graph…
External link:
http://arxiv.org/abs/2302.11814
Knowledge-aware question answering (KAQA) requires the model to answer questions over a knowledge base, which is essential for both open-domain QA and domain-specific QA, especially when language models alone cannot provide all the knowledge needed.
External link:
http://arxiv.org/abs/2302.11799
Published in:
In Thermochimica Acta, September 2024, 739