Showing 1 - 10 of 252 for search: "Lin, Zihao"
Author:
Beigi, Mohammad, Wang, Sijia, Shen, Ying, Lin, Zihao, Kulkarni, Adithya, He, Jianfeng, Chen, Feng, Jin, Ming, Cho, Jin-Hee, Zhou, Dawei, Lu, Chang-Tien, Huang, Lifu
In recent years, Large Language Models (LLMs) have become fundamental to a broad spectrum of artificial intelligence applications. As the use of LLMs expands, precisely estimating the uncertainty in their predictions has become crucial. Current methods …
External link: http://arxiv.org/abs/2410.20199
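As a purely illustrative baseline for the kind of uncertainty estimation this survey covers (not a method from the paper), here is a minimal sketch of mean token-level predictive entropy, assuming a HuggingFace-style causal LM; the model name is an arbitrary example.

# Minimal sketch: token-level predictive entropy as an LLM uncertainty
# proxy. Illustrative only; model choice and helper name are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def mean_token_entropy(model, tokenizer, text: str) -> float:
    """Average entropy of the next-token distribution over a text."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits          # (1, seq_len, vocab)
    probs = torch.softmax(logits, dim=-1)
    entropy = -(probs * torch.log(probs + 1e-12)).sum(dim=-1)
    return entropy.mean().item()            # higher = more uncertain

model = AutoModelForCausalLM.from_pretrained("gpt2")   # assumed example model
tokenizer = AutoTokenizer.from_pretrained("gpt2")
print(mean_token_entropy(model, tokenizer, "Paris is the capital of France."))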
We present a brief report (at the Nufact-2024 conference) summarizing a global extraction of the ${\rm ^{12}C}$ longitudinal (${\cal R}_L$) and transverse (${\cal R}_T$) nuclear electromagnetic response functions from an analysis of all available electron scattering data on carbon. …
External link: http://arxiv.org/abs/2410.15991
Author:
Bodek, Arie, Christy, M. E., Lin, Zihao, Bulugean, Giulia-Maria, Delgado, Amii Matamoros, Ankowski, Artur M., Vidal, Julia Tena
We have performed a global extraction of the ${\rm ^{12}C}$ longitudinal (${\cal R}_L$) and transverse (${\cal R}_T$) nuclear electromagnetic response functions from an analysis of all available electron scattering data on carbon. The response functions …
External link: http://arxiv.org/abs/2409.10637
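For readers unfamiliar with the notation in the two entries above: in the standard one-photon-exchange picture (textbook background, not a result of this analysis), the inclusive electron scattering cross section separates into the two response functions as

\[
\frac{d^2\sigma}{d\Omega\,d\omega} = \sigma_{\rm Mott}\left[\frac{Q^4}{|\mathbf{q}|^4}\,{\cal R}_L(|\mathbf{q}|,\omega) + \left(\frac{Q^2}{2|\mathbf{q}|^2} + \tan^2\frac{\theta}{2}\right){\cal R}_T(|\mathbf{q}|,\omega)\right],
\]

where $\omega$ is the energy transfer, $\mathbf{q}$ the three-momentum transfer, $Q^2 = |\mathbf{q}|^2 - \omega^2$ the four-momentum transfer squared, and $\theta$ the scattering angle. Measuring the cross section at fixed $(|\mathbf{q}|,\omega)$ but several angles therefore isolates ${\cal R}_L$ and ${\cal R}_T$ (a Rosenbluth-type separation).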
Author:
Li, Binxu, Yan, Tiankai, Pan, Yuanting, Luo, Jie, Ji, Ruiyang, Ding, Jiayuan, Xu, Zhe, Liu, Shilong, Dong, Haoyu, Lin, Zihao, Wang, Yixin
Multi-Modal Large Language Models (MLLMs), despite being successful, exhibit limited generality and often fall short when compared to specialized models. Recently, LLM-based agents have been developed to address these challenges by selecting appropriate …
External link: http://arxiv.org/abs/2407.02483
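As a generic illustration of the agent pattern this abstract refers to (routing a query to a specialized model used as a tool), and not the paper's actual system, a minimal sketch; all tool names and the keyword router are stand-ins for an LLM routing prompt.

# Minimal sketch of tool selection by an LLM-based agent: a router
# names one tool from a registry, then the agent dispatches to it.
from typing import Callable, Dict

TOOLS: Dict[str, Callable[[str], str]] = {
    "ocr": lambda q: f"[ocr output for {q!r}]",
    "detector": lambda q: f"[detected objects in {q!r}]",
    "captioner": lambda q: f"[caption for {q!r}]",
}

def route(query: str) -> str:
    """Stand-in for an LLM routing prompt: pick a tool by keyword."""
    if "read" in query or "text" in query:
        return "ocr"
    if "find" in query or "locate" in query:
        return "detector"
    return "captioner"

def agent(query: str) -> str:
    return TOOLS[route(query)](query)

print(agent("read the text on this sign"))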
Author:
Zhang, Yuxiang, Chen, Jing, Wang, Junjie, Liu, Yaxin, Yang, Cheng, Shi, Chufan, Zhu, Xinyu, Lin, Zihao, Wan, Hanwen, Yang, Yujiu, Sakai, Tetsuya, Feng, Tian, Yamana, Hayato
Tool-augmented large language models (LLMs) are rapidly being integrated into real-world applications. Due to the lack of benchmarks, the community has yet to fully understand the hallucination issues within these models. To address this challenge, we …
External link: http://arxiv.org/abs/2406.20015
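One concrete failure mode a benchmark like this can probe is a model emitting a call to a nonexistent tool or omitting required arguments. A minimal, illustrative validity check (not the benchmark's code; the tool registry and example calls are invented):

# Minimal sketch: validate a model-emitted tool call against a
# registry of known tools and their required parameters.
import json

REGISTRY = {
    "get_weather": {"required": {"city"}},
    "search_web": {"required": {"query"}},
}

def is_hallucinated(call_json: str) -> bool:
    """True if the call names an unknown tool or omits required args."""
    try:
        call = json.loads(call_json)
    except json.JSONDecodeError:
        return True
    spec = REGISTRY.get(call.get("name"))
    if spec is None:
        return True  # tool does not exist
    return not spec["required"] <= set(call.get("arguments", {}))

print(is_hallucinated('{"name": "get_stock", "arguments": {}}'))                 # True
print(is_hallucinated('{"name": "get_weather", "arguments": {"city": "Oslo"}}')) # False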
Author:
Liu, Minqian, Xu, Zhiyang, Lin, Zihao, Ashby, Trevor, Rimchala, Joy, Zhang, Jiaxin, Huang, Lifu
Interleaved text-and-image generation has been an intriguing research direction, where models are required to generate both images and text pieces in an arbitrary order. Despite the emerging advancements in interleaved generation, the progress in …
External link: http://arxiv.org/abs/2406.14643
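To make "arbitrary order" concrete, one simple way to represent an interleaved output is as an ordered list of typed segments; this is a toy sketch, not the paper's representation.

# Toy sketch: an interleaved text-and-image output as an ordered
# list of typed segments. Purely illustrative types and sample.
from dataclasses import dataclass
from typing import List, Union

@dataclass
class TextSeg:
    text: str

@dataclass
class ImageSeg:
    caption: str  # stand-in for pixel data or an image token block

Interleaved = List[Union[TextSeg, ImageSeg]]

sample: Interleaved = [
    TextSeg("Step 1: whisk the eggs."),
    ImageSeg("photo of whisked eggs"),
    TextSeg("Step 2: heat the pan."),
]
print([type(s).__name__ for s in sample])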
Author:
Beigi, Mohammad, Shen, Ying, Yang, Runing, Lin, Zihao, Wang, Qifan, Mohan, Ankith, He, Jianfeng, Jin, Ming, Lu, Chang-Tien, Huang, Lifu
Despite their vast capabilities, Large Language Models (LLMs) often struggle with generating reliable outputs, frequently producing high-confidence inaccuracies known as hallucinations. Addressing this challenge, our research introduces InternalInspector …
External link: http://arxiv.org/abs/2406.12053
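The abstract points at estimating confidence from a model's internal states. As a generic illustration of that family of probes (a small classifier on hidden activations, with assumed dimensions and stand-in data; not InternalInspector itself):

# Minimal sketch: a linear probe mapping a hidden-state vector to a
# correctness probability. Dimensions and training data are assumed.
import torch
import torch.nn as nn

class ConfidenceProbe(nn.Module):
    def __init__(self, hidden_dim: int = 768):
        super().__init__()
        self.linear = nn.Linear(hidden_dim, 1)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, hidden_dim) activations from some layer of the LLM
        return torch.sigmoid(self.linear(h)).squeeze(-1)

probe = ConfidenceProbe()
h = torch.randn(4, 768)                   # stand-in for real activations
labels = torch.tensor([1., 0., 1., 1.])   # 1 = the answer was correct
loss = nn.functional.binary_cross_entropy(probe(h), labels)
loss.backward()                           # one illustrative training step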
Author:
Lin, Zihao, Beigi, Mohammad, Li, Hongxuan, Zhou, Yufan, Zhang, Yuxiang, Wang, Qifan, Yin, Wenpeng, Huang, Lifu
Memory Editing (ME) has emerged as an efficient method to modify erroneous facts or inject new facts into Large Language Models (LLMs). Two mainstream ME methods exist: parameter-modifying ME and parameter-preserving ME (which integrates extra modules without touching the original weights). …
External link: http://arxiv.org/abs/2402.11122
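To make the parameter-preserving flavor concrete: edits can live in an external store consulted before a frozen base model. This is an illustrative pattern only, not either method evaluated in the paper; the base model here is a stub.

# Minimal sketch of parameter-preserving memory editing: edited facts
# live in an external memory; the base model's weights are untouched.
class EditedModel:
    def __init__(self, base_answer_fn):
        self.base = base_answer_fn     # frozen model, never modified
        self.edits = {}                # external memory of corrections

    def edit(self, question: str, new_answer: str) -> None:
        self.edits[question] = new_answer

    def answer(self, question: str) -> str:
        return self.edits.get(question) or self.base(question)

m = EditedModel(lambda q: "Rome")      # stub base model
m.edit("Capital of France?", "Paris")  # inject a corrected fact
print(m.answer("Capital of France?"))  # Paris (from the edit memory)
print(m.answer("Capital of Italy?"))   # Rome  (from the frozen base)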
With the rapid development of pre-trained models (PTMs), efficiently tuning these models for diverse downstream applications has emerged as a pivotal research concern. Although recent investigations into prompt tuning have provided promising …
External link: http://arxiv.org/abs/2310.03123
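For context, the core idea of prompt tuning is to learn a short "soft prompt" of embeddings prepended to the input while the PTM stays frozen. A minimal sketch under assumed, generic dimensions (not this paper's specific method):

# Minimal sketch of soft prompt tuning: only the prompt embeddings
# receive gradients; the pre-trained model's weights stay frozen.
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    def __init__(self, prompt_len: int = 20, embed_dim: int = 768):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(prompt_len, embed_dim) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # Prepend the learned prompt to every sequence in the batch.
        batch = input_embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prompt, input_embeds], dim=1)

soft_prompt = SoftPrompt()
frozen_embeds = torch.randn(2, 16, 768)  # stand-in PTM token embeddings
extended = soft_prompt(frozen_embeds)    # (2, 36, 768), fed to the frozen PTM
print(extended.shape)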
A Knowledge Graph (KG) plays a crucial role in Medical Report Generation (MRG) because it reveals the relations among diseases and can thus be used to guide the generation process. However, constructing a comprehensive KG is labor-intensive and its …
External link: http://arxiv.org/abs/2307.12526
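As a toy illustration of the kind of structure such a KG supplies to guide generation (the relations below are illustrative placeholders, not the paper's graph):

# Toy sketch: a disease-finding graph as an adjacency dict, used to
# surface related findings a report generator should consider.
KG = {
    "pneumonia": ["consolidation", "pleural effusion"],
    "pleural effusion": ["blunted costophrenic angle"],
}

def related_findings(disease: str, depth: int = 2) -> set:
    """Collect findings reachable from a disease within `depth` hops."""
    seen, frontier = set(), {disease}
    for _ in range(depth):
        frontier = {n for f in frontier for n in KG.get(f, [])} - seen
        seen |= frontier
    return seen

print(related_findings("pneumonia"))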