Showing 1 - 10 of 2,708 for search: '"Shu, Kai"'
Author:
Wang, Haoran, Rangapur, Aman, Xu, Xiongxiao, Liang, Yueqing, Gharwi, Haroon, Yang, Carl, Shu, Kai
Existing claim verification datasets often do not require systems to perform complex reasoning or effectively interpret multimodal evidence. To address this, we introduce a new task: multi-hop multimodal claim verification. This task challenges models …
External link:
http://arxiv.org/abs/2411.09547
Author:
Chen, Canyu, Yu, Jian, Chen, Shan, Liu, Che, Wan, Zhongwei, Bitterman, Danielle, Wang, Fei, Shu, Kai
Large Language Models (LLMs) hold great promise to revolutionize current clinical systems for their superior capacities on medical text processing tasks and medical licensing exams. Meanwhile, traditional ML models such as SVM and XGBoost have still …
External link:
http://arxiv.org/abs/2411.06469
Large Language Models (LLMs) suffer from hallucinations, referring to the non-factual information in generated content, despite their superior capacities across tasks. Meanwhile, knowledge editing has been developed as a new popular paradigm to correct …
External link:
http://arxiv.org/abs/2410.16251
Energy consumption has become a critical design metric and a limiting factor in the development of future computing architectures, from small wearable devices to large-scale leadership computing facilities. The predominant methods in energy management …
External link:
http://arxiv.org/abs/2410.11855
Automatic Medical Imaging Narrative generation aims to alleviate the workload of radiologists by producing accurate clinical descriptions directly from radiological images. However, the subtle visual nuances and domain-specific terminology in medical …
External link:
http://arxiv.org/abs/2409.03947
Accurate attribution of authorship is crucial for maintaining the integrity of digital content, improving forensic investigations, and mitigating the risks of misinformation and plagiarism. Addressing the imperative need for proper authorship attribution …
External link:
http://arxiv.org/abs/2408.08946
Model attribution for LLM-generated disinformation poses a significant challenge in understanding its origins and mitigating its spread. This task is especially challenging because modern large language models (LLMs) produce disinformation with human …
External link:
http://arxiv.org/abs/2407.21264
Author:
Chen, Canyu, Huang, Baixiang, Li, Zekun, Chen, Zhaorun, Lai, Shiyang, Xu, Xiongxiao, Gu, Jia-Chen, Gu, Jindong, Yao, Huaxiu, Xiao, Chaowei, Yan, Xifeng, Wang, William Yang, Torr, Philip, Song, Dawn, Shu, Kai
Knowledge editing has been increasingly adopted to correct the false or outdated knowledge in Large Language Models (LLMs). Meanwhile, one critical but under-explored question is: can knowledge editing be used to inject harm into LLMs? In this paper, …
External link:
http://arxiv.org/abs/2407.20224
With the emergence of large language models (LLMs) and their ability to perform a variety of tasks, their application in recommender systems (RecSys) has shown promise. However, we are facing significant challenges when deploying LLMs into RecSys, such …
External link:
http://arxiv.org/abs/2406.14043
Author:
Yang, Qin, Mohammad, Meisam, Wang, Han, Payani, Ali, Kundu, Ashish, Shu, Kai, Yan, Yan, Hong, Yuan
Differentially Private Stochastic Gradient Descent (DP-SGD) and its variants have been proposed to ensure rigorous privacy for fine-tuning large-scale pre-trained language models. However, they rely heavily on the Gaussian mechanism, which may overly …
External link:
http://arxiv.org/abs/2405.18776