Showing 1 - 10 of 2,487
for search: '"LI Junyi"'
Author:
Li, Junyi, Huang, Heng
Bilevel optimization has experienced significant advancements recently with the introduction of new efficient algorithms. Mirroring the success in single-level optimization, stochastic gradient-based algorithms are widely used in bilevel optimization…
External link:
http://arxiv.org/abs/2411.05868
We conducted a series of pore-scale numerical simulations on convective flow in porous media, with a fixed Schmidt number of 400 and a wide range of Rayleigh numbers. The porous media are modeled using regularly arranged square obstacles in a Rayleig…
External link:
http://arxiv.org/abs/2409.19652
We present PartGLEE, a part-level foundation model for locating and identifying both objects and parts in images. Through a unified framework, PartGLEE accomplishes detection, segmentation, and grounding of instances at any granularity in the open wo…
External link:
http://arxiv.org/abs/2407.16696
Adapting general large language models (LLMs) to specialized domains presents great challenges due to varied data distributions. This adaptation typically requires continual pre-training on massive domain-specific corpora to facilitate knowledge memo…
External link:
http://arxiv.org/abs/2407.10804
Drug-target relationships may now be predicted computationally using bioinformatics data, which is a valuable tool for understanding pharmacological effects, enhancing drug development efficiency, and advancing related research. A number of structure…
External link:
http://arxiv.org/abs/2407.10055
Author:
Tang, Tianyi, Hu, Yiwen, Li, Bingqian, Luo, Wenyang, Qin, Zijing, Sun, Haoxiang, Wang, Jiapeng, Xu, Shiyi, Cheng, Xiaoxue, Guo, Geyang, Peng, Han, Zheng, Bowen, Tang, Yiru, Min, Yingqian, Chen, Yushuo, Chen, Jie, Zhao, Yuanqian, Ding, Luran, Wang, Yuhao, Dong, Zican, Xia, Chunxuan, Li, Junyi, Zhou, Kun, Zhao, Wayne Xin, Wen, Ji-Rong
To facilitate research on large language models (LLMs), this paper presents a comprehensive and unified library, LLMBox, to ease the development, use, and evaluation of LLMs. This library is featured with three main merits: (1) a unified data int…
External link:
http://arxiv.org/abs/2407.05563
Recent work has explored the capability of large language models (LLMs) to identify and correct errors in LLM-generated responses. These refinement approaches frequently evaluate what sizes of models are able to do refinement for what problems, but l…
External link:
http://arxiv.org/abs/2407.02397
Recent work on evaluating the diversity of text generated by LLMs has focused on word-level features. Here we offer an analysis of syntactic features to characterize general repetition in models, beyond frequent n-grams. Specifically, we define synta…
External link:
http://arxiv.org/abs/2407.00211
Author:
Zhu, Yutao, Zhou, Kun, Mao, Kelong, Chen, Wentong, Sun, Yiding, Chen, Zhipeng, Cao, Qian, Wu, Yihan, Chen, Yushuo, Wang, Feng, Zhang, Lei, Li, Junyi, Wang, Xiaolei, Wang, Lei, Zhang, Beichen, Dong, Zican, Cheng, Xiaoxue, Chen, Yuhan, Tang, Xinyu, Hou, Yupeng, Ren, Qiangqiang, Pang, Xincheng, Xie, Shufang, Zhao, Wayne Xin, Dou, Zhicheng, Mao, Jiaxin, Lin, Yankai, Song, Ruihua, Xu, Jun, Chen, Xu, Yan, Rui, Wei, Zhewei, Hu, Di, Huang, Wenbing, Gao, Ze-Feng, Chen, Yueguo, Lu, Weizheng, Wen, Ji-Rong
Large language models (LLMs) have become the foundation of many applications, leveraging their extensive capabilities in processing and understanding natural language. While many open-source LLMs have been released with technical reports, the lack of…
External link:
http://arxiv.org/abs/2406.19853
The variations between in-group and out-group speech (intergroup bias) are subtle and could underlie many social phenomena, such as stereotype perpetuation and implicit bias. In this paper, we model intergroup bias as a tagging task on English sports…
External link:
http://arxiv.org/abs/2406.17947