Showing 1 - 10 of 959 for search: '"Wang, Zifeng"'
Author:
Wang, Zilong, Wang, Zifeng, Le, Long, Zheng, Huaixiu Steven, Mishra, Swaroop, Perot, Vincent, Zhang, Yuwei, Mattapalli, Anush, Taly, Ankur, Shang, Jingbo, Lee, Chen-Yu, Pfister, Tomas
Retrieval augmented generation (RAG) combines the generative abilities of large language models (LLMs) with external knowledge sources to provide more accurate and up-to-date responses. Recent RAG advancements focus on improving retrieval outcomes th…
External link:
http://arxiv.org/abs/2407.08223
Author:
Wang, Zifeng, Wang, Xiangyu, Xu, Shenghang, Zhou, Renwu, Zhang, Mingyan, Li, Wanchun, Zhang, Zizhu, Wang, Luge, Chen, Jinkun, Zhang, Jishen, Guo, Li, Pei, Dandan, Liu, Dingxin, Rong, Mingzhe
Efficient sterilization of pathogens with cleaner methods is a critical concern for environmental disinfection and clinical anti-infective treatment. Plasma-activated water (PAW) is a promising alternative to chemical disinfectants and antibiotics fo…
External link:
http://arxiv.org/abs/2407.01035
Clinical trials are fundamental in developing new drugs, medical devices, and treatments. However, they are often time-consuming and have low success rates. Although there have been initial attempts to create large language models (LLMs) for clinical…
External link:
http://arxiv.org/abs/2407.11007
Automatic medical discovery by AI is a dream of many. One step toward that goal is to create an AI model to understand clinical studies and synthesize clinical evidence from the literature. Clinical evidence synthesis currently relies on systematic r…
External link:
http://arxiv.org/abs/2406.17755
Author:
Hsieh, Cheng-Yu, Chuang, Yung-Sung, Li, Chun-Liang, Wang, Zifeng, Le, Long T., Kumar, Abhishek, Glass, James, Ratner, Alexander, Lee, Chen-Yu, Krishna, Ranjay, Pfister, Tomas
Large language models (LLMs), even when specifically trained to process long input contexts, struggle to capture relevant information located in the middle of their input. This phenomenon has been known as the lost-in-the-middle problem. In this work…
External link:
http://arxiv.org/abs/2406.16008
Author:
Hsu, I-Hung, Wang, Zifeng, Le, Long T., Miculicich, Lesly, Peng, Nanyun, Lee, Chen-Yu, Pfister, Tomas
Grounded generation aims to equip language models (LMs) with the ability to produce more credible and accountable responses by accurately citing verifiable sources. However, existing methods, by either feeding LMs with raw or preprocessed materials,…
External link:
http://arxiv.org/abs/2406.05365
Author:
Shi, Haizhou, Xu, Zihao, Wang, Hengyi, Qin, Weiyi, Wang, Wenyuan, Wang, Yibin, Wang, Zifeng, Ebrahimi, Sayna, Wang, Hao
The recent success of large language models (LLMs) trained on static, pre-collected, general datasets has sparked numerous research directions and applications. One such direction addresses the non-trivial challenge of integrating pre-trained LLMs in…
External link:
http://arxiv.org/abs/2404.16789
Author:
Wang, Zifeng, Li, Chun-Liang, Perot, Vincent, Le, Long T., Miao, Jin, Zhang, Zizhao, Lee, Chen-Yu, Pfister, Tomas
Instruction tuning has emerged as the key in aligning large language models (LLMs) with specific task instructions, thereby mitigating the discrepancy between the next-token prediction objective and users' actual goals. To reduce the labor and time c…
External link:
http://arxiv.org/abs/2404.05875
The performance of deep models, including Vision Transformers, is known to be vulnerable to adversarial attacks. Many existing defenses against these attacks, such as adversarial training, rely on full-model fine-tuning to induce robustness in the mo…
External link:
http://arxiv.org/abs/2403.13196
The advent of large language models (LLMs) has significantly advanced natural language processing tasks like text summarization. However, their large size and computational demands, coupled with privacy concerns in data transmission, limit their use…
External link:
http://arxiv.org/abs/2403.10351