Showing 1 - 10 of 3,231 results for search: '"WANG, YIWEI"'
Author: Bi, Baolong, Huang, Shaohan, Wang, Yiwei, Yang, Tianchi, Zhang, Zihan, Huang, Haizhen, Mei, Lingrui, Fang, Junfeng, Li, Zehao, Wei, Furu, Deng, Weiwei, Sun, Feng, Zhang, Qi, Liu, Shenghua
Reliable responses from large language models (LLMs) require adherence to user instructions and retrieved information. While alignment techniques help LLMs align with human intentions and values, improving context-faithfulness through alignment remains…
External link: http://arxiv.org/abs/2412.15280
Question answering represents a core capability of large language models (LLMs). However, when individuals encounter unfamiliar knowledge in texts, they often formulate questions that the text itself cannot answer due to insufficient understanding of…
External link: http://arxiv.org/abs/2411.17993
Extracting governing physical laws from computational or experimental data is crucial across various fields such as fluid dynamics and plasma physics. Many of those physical laws are dissipative due to fluid viscosity or plasma collisions. For such a…
External link: http://arxiv.org/abs/2412.04480
This paper presents a data-driven electrical machine design (EMD) framework using a wound-rotor synchronous generator (WRSG) as a design example. Unlike traditional preliminary EMD processes that heavily rely on expertise, this framework leverages an a…
External link: http://arxiv.org/abs/2411.11221
Author: Li, Zhecheng, Wang, Yiwei, Hooi, Bryan, Cai, Yujun, Cheung, Naifan, Peng, Nanyun, Chang, Kai-wei
Cross-lingual summarization (CLS) aims to generate a summary for the source text in a different target language. Currently, instruction-tuned large language models (LLMs) excel at various English tasks. However, unlike languages such as English, Chinese…
External link: http://arxiv.org/abs/2410.20021
Author: Li, Zhecheng, Wang, Yiwei, Hooi, Bryan, Cai, Yujun, Xiong, Zhen, Peng, Nanyun, Chang, Kai-wei
Text classification involves categorizing a given text, such as determining its sentiment or identifying harmful content. With the advancement of large language models (LLMs), these models have become highly effective at performing text classification…
External link: http://arxiv.org/abs/2410.20016
Author: Luo, Yihong, Chen, Yuhan, Qiu, Siya, Wang, Yiwei, Zhang, Chen, Zhou, Yan, Cao, Xiaochun, Tang, Jing
Graph Neural Networks (GNNs) have shown superior performance in node classification. However, GNNs perform poorly in the Few-Shot Node Classification (FSNC) task, which requires robust generalization to make accurate predictions for unseen classes with…
External link: http://arxiv.org/abs/2410.16845
This paper investigates controllable generation for large language models (LLMs) with prompt-based control, focusing on Lexically Constrained Generation (LCG). We systematically evaluate the performance of LLMs on satisfying lexical constraints with…
External link: http://arxiv.org/abs/2410.04628
As Large Language Models (LLMs) grow increasingly powerful, ensuring their safety and alignment with human values remains a critical challenge. Ideally, LLMs should provide informative responses while avoiding the disclosure of harmful or sensitive information…
External link: http://arxiv.org/abs/2410.02684
Surgical procedures are inherently complex and dynamic, with intricate dependencies and various execution paths. Accurate identification of the intentions behind critical actions, referred to as Primary Intentions (PIs), is crucial to understanding a…
External link: http://arxiv.org/abs/2409.19579