Showing 1 - 10 of 416 for search: '"Liu Tianyang"'
Large Language Models (LLMs) are reported to hold undesirable attestation bias on inference tasks: when asked to predict if a premise P entails a hypothesis H, instead of considering H's conditional truthfulness entailed by P, LLMs tend to use the ou…
External link:
http://arxiv.org/abs/2408.14467
Author:
Liu, Tianyang, Zhang, Jiajun, Shi, Yuan, Gu, Junhua, Guo, Quan, Xu, Yidong, Deng, Furen, Wu, Fengquan, Cong, Yanping, Chen, Xuelei
The cosmic 21 cm signal serves as a crucial probe for studying the evolutionary history of the Universe. However, detecting the 21 cm signal poses significant challenges due to its extremely faint nature. To mitigate the interference from the Earth's…
External link:
http://arxiv.org/abs/2406.17000
Author:
Hao, Shibo, Gu, Yi, Luo, Haotian, Liu, Tianyang, Shao, Xiyan, Wang, Xinyuan, Xie, Shuhua, Ma, Haodi, Samavedhi, Adithya, Gao, Qiyue, Wang, Zhen, Hu, Zhiting
Generating accurate step-by-step reasoning is essential for Large Language Models (LLMs) to address complex problems and enhance robustness and interpretability. Despite the flux of research on developing advanced reasoning approaches, systematically…
External link:
http://arxiv.org/abs/2404.05221
Author:
Zhao, Tianhao, Chen, Yongcan, Wu, Yu, Liu, Tianyang, Du, Bo, Xiao, Peilun, Qiu, Shi, Yang, Hongda, Li, Guozhen, Yang, Yi, Lin, Yutian
Semantic segmentation in bird's eye view (BEV) plays a crucial role in autonomous driving. Previous methods usually follow an end-to-end pipeline, directly predicting the BEV segmentation map from monocular RGB inputs. However, the challenge arises w…
External link:
http://arxiv.org/abs/2404.01925
Author:
Lozhkov, Anton, Li, Raymond, Allal, Loubna Ben, Cassano, Federico, Lamy-Poirier, Joel, Tazi, Nouamane, Tang, Ao, Pykhtar, Dmytro, Liu, Jiawei, Wei, Yuxiang, Liu, Tianyang, Tian, Max, Kocetkov, Denis, Zucker, Arthur, Belkada, Younes, Wang, Zijian, Liu, Qian, Abulkhanov, Dmitry, Paul, Indraneil, Li, Zhuang, Li, Wen-Ding, Risdal, Megan, Li, Jia, Zhu, Jian, Zhuo, Terry Yue, Zheltonozhskii, Evgenii, Dade, Nii Osae Osae, Yu, Wenhao, Krauß, Lucas, Jain, Naman, Su, Yixuan, He, Xuanli, Dey, Manan, Abati, Edoardo, Chai, Yekun, Muennighoff, Niklas, Tang, Xiangru, Oblokulov, Muhtasham, Akiki, Christopher, Marone, Marc, Mou, Chenghao, Mishra, Mayank, Gu, Alex, Hui, Binyuan, Dao, Tri, Zebaze, Armel, Dehaene, Olivier, Patry, Nicolas, Xu, Canwen, McAuley, Julian, Hu, Han, Scholak, Torsten, Paquet, Sebastien, Robinson, Jennifer, Anderson, Carolyn Jane, Chapados, Nicolas, Patwary, Mostofa, Tajbakhsh, Nima, Jernite, Yacine, Ferrandis, Carlos Muñoz, Zhang, Lingming, Hughes, Sean, Wolf, Thomas, Guha, Arjun, von Werra, Leandro, de Vries, Harm
The BigCode project, an open scientific collaboration focused on the responsible development of Large Language Models for Code (Code LLMs), introduces StarCoder2. In partnership with Software Heritage (SWH), we build The Stack v2 on top of the digita…
External link:
http://arxiv.org/abs/2402.19173
Large Language Models (LLMs) have been shown to be capable of various tasks, yet their capability in interpreting and reasoning over tabular data remains an underexplored area. In this context, this study investigates from three core perspectives: the rob…
External link:
http://arxiv.org/abs/2312.16702
The increasing availability of image-text pairs has largely fueled the rapid advancement in vision-language foundation models. However, the vast scale of these datasets inevitably introduces significant variability in data quality, which can adversel…
External link:
http://arxiv.org/abs/2312.06726
Detecting the cosmic 21 cm signal from the Epoch of Reionization (EoR) has always been a difficult task. Although the Galactic foreground can be regarded as a smooth power-law spectrum, due to the chromaticity of the antenna, additional structure will be…
External link:
http://arxiv.org/abs/2311.10951
Large Language Models (LLMs) have greatly advanced code auto-completion systems, with a potential for substantial productivity enhancements for developers. However, current benchmarks mainly focus on single-file tasks, leaving an assessment gap for m…
External link:
http://arxiv.org/abs/2306.03091