Showing 11 - 20
of 52,030
for the search: '"Zhang, Qi"'
Author:
Jiang, Yu-Xiao, Shao, Sen, Xia, Wei, Denner, M. Michael, Ingham, Julian, Hossain, Md Shafayat, Qiu, Qingzheng, Zheng, Xiquan, Chen, Hongyu, Cheng, Zi-Jia, Yang, Xian P., Kim, Byunghoon, Yin, Jia-Xin, Zhang, Songbo, Litskevich, Maksim, Zhang, Qi, Cochran, Tyler A., Peng, Yingying, Chang, Guoqing, Guo, Yanfeng, Thomale, Ronny, Neupert, Titus, Hasan, M. Zahid
Novel states of matter arise in quantum materials due to strong interactions among electrons. A nematic phase breaks the point group symmetry of the crystal lattice and is known to emerge in correlated materials. Here we report the observation of an…
External link:
http://arxiv.org/abs/2406.13702
This paper focuses on extending the success of large language models (LLMs) to sequential decision making. Existing efforts either (i) re-train or finetune LLMs for decision making, or (ii) design prompts for pretrained LLMs. The former approach…
External link:
http://arxiv.org/abs/2406.12125
Author:
Yang, Yuming, Zhao, Wantong, Huang, Caishuang, Ye, Junjie, Wang, Xiao, Zheng, Huiyuan, Nan, Yang, Wang, Yuran, Xu, Xueying, Huang, Kaixin, Zhang, Yunke, Gui, Tao, Zhang, Qi, Huang, Xuanjing
Open Named Entity Recognition (NER), which involves identifying arbitrary types of entities from arbitrary domains, remains challenging for Large Language Models (LLMs). Recent studies suggest that fine-tuning LLMs on extensive NER data can boost the…
External link:
http://arxiv.org/abs/2406.11192
Author:
Bao, Rong, Zheng, Rui, Dou, Shihan, Wang, Xiao, Zhou, Enyu, Wang, Bo, Zhang, Qi, Ding, Liang, Tao, Dacheng
In aligning large language models (LLMs), utilizing feedback from existing advanced AI rather than humans is an important method to scale supervisory signals. However, it is highly challenging for AI to understand human intentions and societal values…
External link:
http://arxiv.org/abs/2406.11190
Author:
Zheng, Rui, Guo, Hongyi, Liu, Zhihan, Zhang, Xiaoying, Yao, Yuanshun, Xu, Xiaojun, Wang, Zhaoran, Xi, Zhiheng, Gui, Tao, Zhang, Qi, Huang, Xuanjing, Li, Hang, Liu, Yang
The standard Reinforcement Learning from Human Feedback (RLHF) framework primarily focuses on optimizing the performance of large language models using pre-collected prompts. However, collecting prompts that provide comprehensive coverage is both…
External link:
http://arxiv.org/abs/2406.10977
Author:
Wang, Meng, Lin, Tian, Lin, Aidi, Yu, Kai, Peng, Yuanyuan, Wang, Lianyu, Chen, Cheng, Zou, Ke, Liang, Huiyu, Chen, Man, Yao, Xue, Zhang, Meiqin, Huang, Binwei, Zheng, Chaoxin, Zhang, Peixin, Chen, Wei, Luo, Yilong, Chen, Yifan, Xia, Honghe, Shi, Tingkun, Zhang, Qi, Guo, Jinming, Chen, Xiaolin, Wang, Jingcheng, Tham, Yih Chung, Liu, Dianbo, Wong, Wendy, Thakur, Sahil, Fenner, Beau, Fang, Danqi, Liu, Siying, Liu, Qingyun, Huang, Yuqiang, Zeng, Hongqiang, Meng, Yanda, Zhou, Yukun, Jiang, Zehua, Qiu, Minghui, Zhang, Changqing, Chen, Xinjian, Wang, Sophia Y, Lee, Cecilia S, Sobrin, Lucia, Cheung, Carol Y, Pang, Chi Pui, Keane, Pearse A, Cheng, Ching-Yu, Chen, Haoyu, Fu, Huazhu
Previous foundation models for retinal images were pre-trained with limited disease categories and knowledge base. Here we introduce RetiZero, a vision-language foundation model that leverages knowledge from over 400 fundus diseases. To RetiZero's…
External link:
http://arxiv.org/abs/2406.09317
Mixed-integer optimization is at the core of many online decision-making systems that demand frequent updates of decisions in real time. However, due to their combinatorial nature, mixed-integer linear programs (MILPs) can be difficult to solve…
External link:
http://arxiv.org/abs/2406.05697
Stochastic programming provides a natural framework for modeling sequential optimization problems under uncertainty; however, the efficient solution of large-scale multistage stochastic programs remains a challenge, especially in the presence of…
External link:
http://arxiv.org/abs/2406.05052
As instruction-tuned large language models (LLMs) evolve, aligning pretrained foundation models presents increasing challenges. Existing alignment strategies, which typically leverage diverse and high-quality data sources, often overlook the…
External link:
http://arxiv.org/abs/2406.04854
Author:
Xi, Zhiheng, Ding, Yiwen, Chen, Wenxiang, Hong, Boyang, Guo, Honglin, Wang, Junzhe, Yang, Dingwen, Liao, Chenyang, Guo, Xin, He, Wei, Gao, Songyang, Chen, Lu, Zheng, Rui, Zou, Yicheng, Gui, Tao, Zhang, Qi, Qiu, Xipeng, Huang, Xuanjing, Wu, Zuxuan, Jiang, Yu-Gang
Building generalist agents that can handle diverse tasks and evolve themselves across different environments is a long-term goal in the AI community. Large language models (LLMs) are considered a promising foundation to build such agents due to their…
External link:
http://arxiv.org/abs/2406.04151