Showing 1 - 10 of 2,171 results for search: '"HU, Xia"'
Author:
Wu, Yifan, Yang, Yuntao, Liu, Zirui, Li, Zhao, Pahwa, Khushbu, Li, Rongbin, Zheng, Wenjin, Hu, Xia, Xu, Zhaozhuo
Gene-gene interactions play a crucial role in the manifestation of complex human diseases. Uncovering significant gene-gene interactions is a challenging task. Here, we present an innovative approach utilizing data-driven computational tools, …
External link:
http://arxiv.org/abs/2410.15616
Author:
Jiang, Zhimeng, Liu, Zirui, Han, Xiaotian, Feng, Qizhang, Jin, Hongye, Tan, Qiaoyu, Zhou, Kaixiong, Zou, Na, Hu, Xia
Deep neural networks are ubiquitously adopted in many applications, such as computer vision, natural language processing, and graph analytics. However, well-trained neural networks can make prediction errors after deployment as the world changes. …
External link:
http://arxiv.org/abs/2410.15556
Author:
Chuang, Yu-Neng, Zhou, Helen, Sarma, Prathusha Kameswara, Gopalan, Parikshit, Boccio, John, Bolouki, Sara, Hu, Xia
Large language models (LLMs) have demonstrated impressive performance on several tasks and are increasingly deployed in real-world applications. However, especially in high-stakes settings, it becomes vital to know when the output of an LLM may be …
External link:
http://arxiv.org/abs/2410.13284
Author:
Wang, Guanchu, Chuang, Yu-Neng, Tang, Ruixiang, Zhong, Shaochen, Yuan, Jiayi, Jin, Hongye, Liu, Zirui, Chaudhary, Vipin, Xu, Shuai, Caverlee, James, Hu, Xia
Ensuring the security of released large language models (LLMs) poses a significant dilemma, as existing mechanisms either compromise ownership rights or raise data privacy concerns. To address this dilemma, we introduce TaylorMLP to protect the owner…
External link:
http://arxiv.org/abs/2410.05331
Author:
Gilson, Aidan, Ai, Xuguang, Arunachalam, Thilaka, Chen, Ziyou, Cheong, Ki Xiong, Dave, Amisha, Duic, Cameron, Kibe, Mercy, Kaminaka, Annette, Prasad, Minali, Siddig, Fares, Singer, Maxwell, Wong, Wendy, Jin, Qiao, Keenan, Tiarnan D. L., Hu, Xia, Chew, Emily Y., Lu, Zhiyong, Xu, Hua, Adelman, Ron A., Tham, Yih-Chung, Chen, Qingyu
Despite the potential of Large Language Models (LLMs) in medicine, they may generate responses lacking supporting evidence or based on hallucinated evidence. While Retrieval-Augmented Generation (RAG) is a popular way to address this issue, few studies …
External link:
http://arxiv.org/abs/2409.13902
Author:
Wang, Yicheng, Yuan, Jiayi, Chuang, Yu-Neng, Wang, Zhuoer, Liu, Yingchi, Cusick, Mark, Kulkarni, Param, Ji, Zhengping, Ibrahim, Yasser, Hu, Xia
Large Language Models (LLMs) are increasingly serving as evaluators in Natural Language Generation (NLG) tasks. However, the capabilities of LLMs in scoring NLG quality remain inadequately explored. Current studies depend on human assessments and …
External link:
http://arxiv.org/abs/2408.13704
Author:
Wang, Guanchu, Ran, Junhao, Tang, Ruixiang, Chang, Chia-Yuan, Chuang, Yu-Neng, Liu, Zirui, Braverman, Vladimir, Liu, Zhandong, Hu, Xia
Despite the impressive capabilities of Large Language Models (LLMs) in general medical domains, questions remain about their performance in diagnosing rare diseases. To answer this question, we aim to assess the diagnostic performance of LLMs in rare …
External link:
http://arxiv.org/abs/2408.08422
Author:
Yuan, Jiayi, Liu, Hongyi, Zhong, Shaochen, Chuang, Yu-Neng, Li, Songchen, Wang, Guanchu, Le, Duy, Jin, Hongye, Chaudhary, Vipin, Xu, Zhaozhuo, Liu, Zirui, Hu, Xia
Long context capability is a crucial competency for large language models (LLMs) as it mitigates the human struggle to digest long-form texts. This capability enables complex task-solving scenarios such as book summarization, code assistance, and …
External link:
http://arxiv.org/abs/2407.01527
Author:
Chuang, Yu-Neng, Li, Songchen, Yuan, Jiayi, Wang, Guanchu, Lai, Kwei-Herng, Yu, Leisheng, Ding, Sirui, Chang, Chia-Yuan, Tan, Qiaoyu, Zha, Daochen, Hu, Xia
Inspired by Large Language Models (LLMs), Time Series Forecasting (TSF), a long-standing task in time series analysis, is undergoing a transition towards Large Time Series Models (LTSMs), aiming to train universal transformer-based models for TSF. …
External link:
http://arxiv.org/abs/2406.14045
In the field of crisis/disaster informatics, social media is increasingly being used for improving situational awareness to inform response and relief efforts. Efficient and accurate text classification tools have been a focal area of investigation …
External link:
http://arxiv.org/abs/2406.15477