Showing 1 - 10 of 9,003 for search: '"Junyang An"'
Author:
Zou, Heqing, Luo, Tianze, Xie, Guiyang, Victor, Zhang, Lv, Fengmao, Wang, Guangcong, Chen, Junyang, Wang, Zhuochen, Zhang, Hansheng, Zhang, Huaijian
Multimodal large language models have become a popular topic in deep visual understanding due to many promising real-world applications. However, hour-long video understanding, spanning over one hour and containing tens of thousands of visual frames, …
External link:
http://arxiv.org/abs/2501.01645
Author:
Ding, Yao, Kang, Weijie, Yang, Aitao, Zhang, Zhili, Zhao, Junyang, Feng, Jie, Hong, Danfeng, Zheng, Qinhe
Hyperspectral image (HSI) clustering has been a fundamental but challenging task with zero training labels. Currently, some deep graph clustering methods have been successfully explored for HSI due to their outstanding performance in effective spatia…
External link:
http://arxiv.org/abs/2501.01595
Author:
Quan, Shanghaoran, Yang, Jiaxi, Yu, Bowen, Zheng, Bo, Liu, Dayiheng, Yang, An, Ren, Xuancheng, Gao, Bofei, Miao, Yibo, Feng, Yunlong, Wang, Zekun, Yang, Jian, Cui, Zeyu, Fan, Yang, Zhang, Yichang, Hui, Binyuan, Lin, Junyang
With the increasing code reasoning capabilities of existing large language models (LLMs) and breakthroughs in reasoning models like OpenAI o1 and o3, there is a growing need to develop more challenging and comprehensive benchmarks that effectively te…
External link:
http://arxiv.org/abs/2501.01257
Large Language Models (LLMs) can correct their self-generated responses, but a decline in accuracy after self-correction is also witnessed. To have a deeper understanding of self-correction, we endeavor to decompose, evaluate, and analyze the self-co…
External link:
http://arxiv.org/abs/2412.19513
Author:
Gou, Junyang, Salberg, Arnt-Børre, Shahvandi, Mostafa Kiani, Tourian, Mohammad J., Meyer, Ulrich, Boergens, Eva, Waldeland, Anders U., Velicogna, Isabella, Dahl, Fredrik, Jäggi, Adrian, Schindler, Konrad, Soja, Benedikt
Accurate uncertainty information associated with essential climate variables (ECVs) is crucial for reliable climate modeling and understanding the spatiotemporal evolution of the Earth system. In recent years, geoscience and climate scientists have b…
External link:
http://arxiv.org/abs/2412.17506
Author:
Qwen, Yang, An, Yang, Baosong, Zhang, Beichen, Hui, Binyuan, Zheng, Bo, Yu, Bowen, Li, Chengyuan, Liu, Dayiheng, Huang, Fei, Wei, Haoran, Lin, Huan, Yang, Jian, Tu, Jianhong, Zhang, Jianwei, Yang, Jianxin, Yang, Jiaxi, Zhou, Jingren, Lin, Junyang, Dang, Kai, Lu, Keming, Bao, Keqin, Yang, Kexin, Yu, Le, Li, Mei, Xue, Mingfeng, Zhang, Pei, Zhu, Qin, Men, Rui, Lin, Runji, Li, Tianhao, Tang, Tianyi, Xia, Tingyu, Ren, Xingzhang, Ren, Xuancheng, Fan, Yang, Su, Yang, Zhang, Yichang, Wan, Yu, Liu, Yuqiong, Cui, Zeyu, Zhang, Zhenru, Qiu, Zihan
In this report, we introduce Qwen2.5, a comprehensive series of large language models (LLMs) designed to meet diverse needs. Compared to previous iterations, Qwen2.5 has been significantly improved during both the pre-training and post-training stag…
External link:
http://arxiv.org/abs/2412.15115
When using agent-task datasets to enhance agent capabilities for Large Language Models (LLMs), current methodologies often treat all tokens within a sample equally. However, we argue that tokens serving different roles - specifically, reasoning token…
External link:
http://arxiv.org/abs/2412.14780
Mixed Integer Linear Programs (MILPs) are highly flexible and powerful tools for modeling and solving complex real-world combinatorial optimization problems. Recently, machine learning (ML)-guided approaches have demonstrated significant potential in…
External link:
http://arxiv.org/abs/2412.14409
Mixed-Integer Programming (MIP) is a powerful paradigm for modeling and solving various important combinatorial optimization problems. Recently, learning-based approaches have shown potential to speed up MIP solving via offline training that then gui…
External link:
http://arxiv.org/abs/2412.14382
The ability to autonomously explore and resolve tasks with minimal human guidance is crucial for the self-development of embodied intelligence. Although reinforcement learning methods can largely ease human effort, it is challenging to design reward f…
External link:
http://arxiv.org/abs/2412.13492