Showing 1 - 10 of 853
for search: '"Wang Hanbin"'
Author:
Zuo, Yuxin, Jiang, Wenxuan, Liu, Wenxuan, Li, Zixuan, Bai, Long, Wang, Hanbin, Zeng, Yutao, Jin, Xiaolong, Guo, Jiafeng, Cheng, Xueqi
Empirical evidence suggests that LLMs exhibit spontaneous cross-lingual alignment. Our findings show that although LLMs also demonstrate promising cross-lingual alignment in Information Extraction, there remains significant imbalance across languages…
External link:
http://arxiv.org/abs/2411.04794
Author:
Li, Xinze, Wang, Hanbin, Liu, Zhenghao, Yu, Shi, Wang, Shuo, Yan, Yukun, Fu, Yukai, Gu, Yu, Yu, Ge
Pretrained language models have shown strong effectiveness in code-related tasks, such as code retrieval, code generation, code summarization, and code completion. In this paper, we propose COde assistaNt viA retrieval-augmeNted language model…
External link:
http://arxiv.org/abs/2410.16229
Author:
Yang, Weiqing, Wang, Hanbin, Liu, Zhenghao, Li, Xinze, Yan, Yukun, Wang, Shuo, Gu, Yu, Yu, Minghe, Liu, Zhiyuan, Yu, Ge
Debugging is a vital aspect of software development, yet the debugging capabilities of Large Language Models (LLMs) remain largely unexplored. This paper first introduces DEBUGEVAL, a comprehensive benchmark designed to evaluate the debugging capabilities…
External link:
http://arxiv.org/abs/2408.05006
Author:
Yuan, Lifan, Cui, Ganqu, Wang, Hanbin, Ding, Ning, Wang, Xingyao, Deng, Jia, Shan, Boji, Chen, Huimin, Xie, Ruobing, Lin, Yankai, Liu, Zhenghao, Zhou, Bowen, Peng, Hao, Liu, Zhiyuan, Sun, Maosong
We introduce Eurus, a suite of large language models (LLMs) optimized for reasoning. Finetuned from Mistral-7B and CodeLlama-70B, Eurus models achieve state-of-the-art results among open-source models on a diverse set of benchmarks covering mathematics…
External link:
http://arxiv.org/abs/2404.02078
This paper introduces INTERVENOR (INTERactiVE chaiN Of Repair), a system designed to emulate the interactive code repair processes observed in humans, encompassing both code diagnosis and code repair. INTERVENOR prompts Large Language Models (LLMs) to…
External link:
http://arxiv.org/abs/2311.09868
In this paper, we present a novel second-order generalised rotational discrete gradient scheme for numerically approximating the orthonormal frame gradient flow of biaxial nematic liquid crystals. This scheme relies on reformulating the original gradient…
External link:
http://arxiv.org/abs/2310.10524
Large language model (LLM) based agents have demonstrated their capacity to automate and expedite software development processes. In this paper, we focus on game development and propose a multi-agent collaborative framework, dubbed GameGPT, to automate…
External link:
http://arxiv.org/abs/2310.08067
Published in:
In International Journal of Biological Macromolecules December 2024 283 Part 4
Author:
Li, Jingying, Xu, Kui, Yao, Jia, Yang, Yiyuan, Wu, Ziang, Zhang, Jieqiong, Chen, Xu, Zheng, Junjie, Yang, Yin, Liu, Xingtai, Wang, Xiaofang, Gan, Yi, Hu, Wei, Lv, Lin, Ma, Guokun, Tao, Li, Wang, Hanbin, Zhang, Jun, Wang, Hao, Wan, Houzhao
Published in:
In Energy Storage Materials November 2024 73
Author:
Yang, Weihao, Liu, Qing, Wang, Hanbin, Chen, Yiqin, Yang, Run, Xia, Shuang, Luo, Yi, Deng, Longjiang, Qin, Jun, Duan, Huigao, Bi, Lei
Metamaterials with artificial optical properties have attracted significant research interest. In particular, artificial magnetic resonances in non-unity permeability tensor at optical frequencies in metamaterials have been reported. However, only no…
External link:
http://arxiv.org/abs/2110.05698