Showing 1 - 10 of 2,084
for search: '"Hu, ZhiQiang"'
Author:
Cheng, Zesen, Zhang, Hang, Li, Kehan, Leng, Sicong, Hu, Zhiqiang, Wu, Fei, Zhao, Deli, Li, Xin, Bing, Lidong
Contrastive loss is a powerful approach for representation learning, where larger batch sizes enhance performance by providing more negative samples to better distinguish between similar and dissimilar data. However, scaling batch sizes is constrained…
External link:
http://arxiv.org/abs/2410.17243
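The snippet above describes the role of in-batch negatives at a high level. As a minimal illustration only (not the paper's actual objective or implementation), an InfoNCE-style contrastive loss for a single anchor can be sketched in plain Python, where the `sim_negs` list stands in for similarities to in-batch negatives and grows with batch size:

```python
import math

def info_nce_loss(sim_pos, sim_negs, temperature=0.07):
    """InfoNCE-style loss for one anchor: -log softmax of the positive logit.

    sim_pos:  similarity to the positive sample
    sim_negs: similarities to in-batch negatives (more with larger batches)
    """
    logits = [sim_pos / temperature] + [s / temperature for s in sim_negs]
    m = max(logits)  # stabilize the log-sum-exp
    log_sum_exp = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_sum_exp - logits[0]

# A larger batch supplies more negatives, making the contrastive task harder
# and the loss for the same positive strictly larger:
loss_small = info_nce_loss(0.9, [0.1, 0.2])       # 2 negatives
loss_large = info_nce_loss(0.9, [0.1, 0.2] * 64)  # 128 negatives
```

The temperature value and the toy similarity scores here are illustrative assumptions; real implementations compute logits as a matrix over all pairs in the batch.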
Author:
Bin, Yi, Shi, Wenhao, Ding, Yujuan, Hu, Zhiqiang, Wang, Zheng, Yang, Yang, Ng, See-Kiong, Shen, Heng Tao
Artwork analysis is an important and fundamental skill for art appreciation, which can enrich personal aesthetic sensibility and facilitate critical thinking. Understanding artworks is challenging due to their subjective nature and diverse interpretations…
External link:
http://arxiv.org/abs/2408.00491
SeaLLMs 3: Open Foundation and Chat Multilingual Large Language Models for Southeast Asian Languages
Author:
Zhang, Wenxuan, Chan, Hou Pong, Zhao, Yiran, Aljunied, Mahani, Wang, Jianyu, Liu, Chaoqun, Deng, Yue, Hu, Zhiqiang, Xu, Weiwen, Chia, Yew Ken, Li, Xin, Bing, Lidong
Large Language Models (LLMs) have shown remarkable abilities across various tasks, yet their development has predominantly centered on high-resource languages like English and Chinese, leaving low-resource languages underserved. To address this disparity…
External link:
http://arxiv.org/abs/2407.19672
Large Language Models (LLMs) have demonstrated remarkable proficiency in a wide range of NLP tasks. However, when it comes to authorship verification (AV) tasks, which involve determining whether two given texts share the same authorship, even advanced…
External link:
http://arxiv.org/abs/2407.12882
Author:
Yu, Jiawei, Jia, Guihao, Li, Qian, Wang, Yuyang, Xiao, Kebin, Ju, Yongkang, Zhang, Hongyun, Hu, Zhiqiang, Guo, Yunkai, Lian, Biao, Tang, Peizhe, Zhou, Shuyun, Xue, Qi-Kun, Li, Wei
In twisted bilayer graphene (TBG) devices, local strains often coexist and entangle with the twist-angle dependent moiré superlattice, both of which can significantly affect the electronic properties of TBG. Here, using low-temperature scanning tunneling…
External link:
http://arxiv.org/abs/2406.20040
Author:
Shi, Wenhao, Hu, Zhiqiang, Bin, Yi, Liu, Junhua, Yang, Yang, Ng, See-Kiong, Bing, Lidong, Lee, Roy Ka-Wei
Large language models (LLMs) have demonstrated impressive reasoning capabilities, particularly in textual mathematical problem-solving. However, existing open-source image instruction fine-tuning datasets, containing limited question-answer pairs per…
External link:
http://arxiv.org/abs/2406.17294
Author:
Yuan, Ye, Zhang, Chen, Li, Fan, Chen, Jian, Fu, Yanning, Bai, Chunhai, Gao, Xing, Wang, Yong, Zhong, Tuhong, Gao, Yixing, Wang, Liang, Chen, Donghua, Zhang, Yixing, Zhang, Yang, Xie, Wenpeng, Zhang, Shupi, Liu, Ding, Cao, Jun, Yin, Xiangdong, Mo, Xiaojun, Liu, Jing, Han, Xinru, Liu, Tong, Chen, Yuqiang, Gao, Zhendong, Zeng, Xiang, Niu, Guihua, Zheng, Xiansheng, Lin, Yuchen, Ye, Peiyu, Liang, Weitang, Zhu, Chengcheng, Hu, Zhiqiang, He, Jianguo, Zhang, Wei, Chen, Yue, Cheng, Zhuo, Sun, Tianrui, Guo, Chenyang, Lu, Yue, Lin, Jiajun, Tan, Wei, Zhou, Jia, Xu, Jun, He, Jun, Ye, Jiahui, Li, Delai, Zhang, Shuai, Qu, Qingyue
Published in:
A&A 684, L13 (2024)
The atmosphere of Triton was probed directly by observing a ground-based stellar occultation on 6 October 2022. This rare event yielded 23 positive light curves collected from 13 separate observation stations contributing to our campaign. The signifi…
External link:
http://arxiv.org/abs/2403.09464
Author:
Wang, Lei, Xu, Wanyu, Hu, Zhiqiang, Lan, Yihuai, Dong, Shan, Wang, Hao, Lee, Roy Ka-Wei, Lim, Ee-Peng
This paper introduces a new in-context learning (ICL) mechanism called In-Image Learning (I²L) that combines demonstration examples, visual cues, and chain-of-thought reasoning into an aggregated image to enhance the capabilities of Large Multimodal…
External link:
http://arxiv.org/abs/2402.17971
Author:
Nguyen, Xuan-Phi, Zhang, Wenxuan, Li, Xin, Aljunied, Mahani, Hu, Zhiqiang, Shen, Chenhui, Chia, Yew Ken, Li, Xingxuan, Wang, Jianyu, Tan, Qingyu, Cheng, Liying, Chen, Guanzheng, Deng, Yue, Yang, Sen, Liu, Chaoqun, Zhang, Hang, Bing, Lidong
Despite the remarkable achievements of large language models (LLMs) in various tasks, there remains a linguistic bias that favors high-resource languages, such as English, often at the expense of low-resource and regional languages. To address this i…
External link:
http://arxiv.org/abs/2312.00738
Author:
Lan, Yihuai, Hu, Zhiqiang, Wang, Lei, Wang, Yang, Ye, Deheng, Zhao, Peilin, Lim, Ee-Peng, Xiong, Hui, Wang, Hao
This paper explores the open research problem of understanding the social behaviors of LLM-based agents. Using Avalon as a testbed, we employ system prompts to guide LLM agents in gameplay. While previous studies have touched on gameplay with LLM agents…
External link:
http://arxiv.org/abs/2310.14985