Showing 1 - 10 of 10,865 for search: '"Gerstein A"'
Author:
Zhao, Haochen, Tang, Xiangru, Yang, Ziran, Han, Xiao, Feng, Xuanzhi, Fan, Yueqing, Cheng, Senhao, Jin, Di, Zhao, Yilun, Cohan, Arman, Gerstein, Mark
The advancement and extensive application of large language models (LLMs) have been remarkable, including their use in scientific research assistance. However, these models often generate scientifically incorrect or unsafe responses, and in some cases…
External link:
http://arxiv.org/abs/2411.16736
Author:
Kerkmann, David, Korf, Sascha, Nguyen, Khoa, Abele, Daniel, Schengen, Alain, Gerstein, Carlotta, Göbbert, Jens Henrik, Basermann, Achim, Kühn, Martin J., Meyer-Hermann, Michael
Agent-based models have proven to be useful tools in supporting decision-making processes in different application domains. The advent of modern computers and supercomputers has enabled these bottom-up approaches to realistically model human mobility…
External link:
http://arxiv.org/abs/2410.08050
Author:
Tang, Xiangru, Zhang, Xingyao, Shao, Yanjun, Wu, Jie, Zhao, Yilun, Cohan, Arman, Gong, Ming, Zhang, Dongmei, Gerstein, Mark
Large language models (LLMs) excel at a variety of natural language processing tasks, yet they struggle to generate personalized content for individuals, particularly in real-world scenarios like scientific writing. Addressing this challenge, we introduce…
External link:
http://arxiv.org/abs/2406.14275
Author:
Deng, Chunyuan, Tang, Xiangru, Zhao, Yilun, Wang, Hanming, Wang, Haoran, Zhou, Wangchunshu, Cohan, Arman, Gerstein, Mark
Recently, large language models (LLMs) have evolved into interactive agents, proficient in planning, tool use, and task execution across a wide variety of tasks. However, without specific agent tuning, open-source models like LLaMA currently struggle…
External link:
http://arxiv.org/abs/2404.04285
A major challenge in near-term quantum computing is its application to large real-world datasets due to scarce quantum hardware resources. One approach to enabling tractable quantum models for such datasets involves compressing the original data to m…
External link:
http://arxiv.org/abs/2402.17749
Author:
Tang, Xiangru, Dai, Howard, Knight, Elizabeth, Wu, Fang, Li, Yunyang, Li, Tianxiao, Gerstein, Mark
Artificial intelligence (AI)-driven methods can vastly improve the historically costly drug design process, with various generative models already in widespread use. Generative models for de novo drug design, in particular, focus on the creation of n…
External link:
http://arxiv.org/abs/2402.08703
Author:
Fang, Yin, Liu, Kangwei, Zhang, Ningyu, Deng, Xinle, Yang, Penghui, Chen, Zhuo, Tang, Xiangru, Gerstein, Mark, Fan, Xiaohui, Chen, Huajun
As Large Language Models (LLMs) rapidly evolve, their influence in science is becoming increasingly prominent. The emerging capabilities of LLMs in task generalization and free-form dialogue can significantly advance fields like chemistry and biology…
External link:
http://arxiv.org/abs/2402.08303
Author:
Tang, Xiangru, Jin, Qiao, Zhu, Kunlun, Yuan, Tongxin, Zhang, Yichi, Zhou, Wangchunshu, Qu, Meng, Zhao, Yilun, Tang, Jian, Zhang, Zhuosheng, Cohan, Arman, Lu, Zhiyong, Gerstein, Mark
Intelligent agents powered by large language models (LLMs) have demonstrated substantial promise in autonomously conducting experiments and facilitating scientific discoveries across various disciplines. While their capabilities are promising, these…
External link:
http://arxiv.org/abs/2402.04247
Author:
Zhang, Zhuosheng, Yao, Yao, Zhang, Aston, Tang, Xiangru, Ma, Xinbei, He, Zhiwei, Wang, Yiming, Gerstein, Mark, Wang, Rui, Liu, Gongshen, Zhao, Hai
Large language models (LLMs) have dramatically enhanced the field of language intelligence, as demonstrably evidenced by their formidable empirical performance across a spectrum of complex reasoning tasks. Additionally, theoretical proofs have illuminated…
External link:
http://arxiv.org/abs/2311.11797
Author:
Tang, Xiangru, Liu, Yuliang, Cai, Zefan, Shao, Yanjun, Lu, Junjie, Zhang, Yichi, Deng, Zexuan, Hu, Helan, An, Kaikai, Huang, Ruijun, Si, Shuzheng, Chen, Sheng, Zhao, Haozhe, Chen, Liang, Wang, Yan, Liu, Tianyu, Jiang, Zhiwei, Chang, Baobao, Fang, Yin, Qin, Yujia, Zhou, Wangchunshu, Zhao, Yilun, Cohan, Arman, Gerstein, Mark
Despite Large Language Models (LLMs) like GPT-4 achieving impressive results in function-level code generation, they struggle with repository-scale code understanding (e.g., coming up with the right arguments for calling routines), requiring a deeper…
External link:
http://arxiv.org/abs/2311.09835