Showing 1 - 10 of 7,593 for search: '"ZHANG Ge"'
Author:
Qureshi, Claudio
Published in:
In European Journal of Combinatorics, January 2020, 83
Author:
Qureshi, Claudio
The Golomb-Welch conjecture (1968) states that there are no $e$-perfect Lee codes in $\mathbb{Z}^n$ for $n\geq 3$ and $e\geq 2$. This conjecture remains open even for linear codes. A recent result of Zhang and Ge establishes the non-existence of linear …
External link:
http://arxiv.org/abs/1805.10409
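The abstract above turns on packing $\mathbb{Z}^n$ with Lee spheres. As a minimal illustration (my own sketch, not code from the cited paper), the following computes the size of the Lee ball of radius $e$ in $\mathbb{Z}^n$ via the classical counting formula $|B_e(n)| = \sum_i 2^i \binom{n}{i}\binom{e}{i}$, where $i$ ranges over the number of nonzero coordinates:

```python
from math import comb

def lee_ball_size(n: int, e: int) -> int:
    """Number of points of Z^n within Lee (L1) distance e of the origin.

    |B_e(n)| = sum_i 2^i * C(n, i) * C(e, i), where i counts the
    nonzero coordinates (each gets a sign, hence the factor 2^i).
    """
    return sum(2**i * comb(n, i) * comb(e, i) for i in range(min(n, e) + 1))

# Sanity checks in low dimensions:
assert lee_ball_size(1, 1) == 3    # {-1, 0, 1}
assert lee_ball_size(2, 1) == 5    # the plus-shaped cross in Z^2
assert lee_ball_size(2, 2) == 13   # all (x, y) with |x| + |y| <= 2
```

An $e$-perfect Lee code tiles $\mathbb{Z}^n$ by disjoint translates of this ball; such tilings exist for $n \leq 2$ (every $e$) and for $e = 1$ (every $n$), and the conjecture asserts they fail for all $n \geq 3$, $e \geq 2$.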
Author:
Hu, Yaochen, Zeng, Mai, Zhang, Ge, Rumiantsev, Pavel, Ma, Liheng, Zhang, Yingxue, Coates, Mark
Graph Neural Networks (GNN) exhibit superior performance in graph representation learning, but their inference cost can be high, due to an aggregation operation that can require a memory fetch for a very large number of nodes. This inference cost is …
External link:
http://arxiv.org/abs/2410.19723
Author:
Zhang, Chenhao, Feng, Xi, Bai, Yuelin, Du, Xinrun, Hou, Jinchang, Deng, Kaixin, Han, Guangzeng, Li, Qinrui, Wang, Bingli, Liu, Jiaheng, Qu, Xingwei, Zhang, Yifei, Zhao, Qixuan, Liang, Yiming, Liu, Ziqiang, Fang, Feiteng, Yang, Min, Huang, Wenhao, Lin, Chenghua, Zhang, Ge, Ni, Shiwen
As the capabilities of Multimodal Large Language Models (MLLMs) continue to improve, the need for higher-order capability evaluation of MLLMs is increasing. However, there is a lack of work evaluating MLLMs for higher-order perception and understanding …
External link:
http://arxiv.org/abs/2410.13854
Author:
Wu, Siwei, Peng, Zhongyuan, Du, Xinrun, Zheng, Tuney, Liu, Minghao, Wu, Jialong, Ma, Jiachen, Li, Yizhi, Yang, Jian, Zhou, Wangchunshu, Lin, Qunshu, Zhao, Junbo, Zhang, Zhaoxiang, Huang, Wenhao, Zhang, Ge, Lin, Chenghua, Liu, J. H.
Enabling Large Language Models (LLMs) to handle a wider range of complex tasks (e.g., coding, math) has drawn great attention from many researchers. As LLMs continue to evolve, merely increasing the number of model parameters yields diminishing performance …
External link:
http://arxiv.org/abs/2410.13639
Author:
Wu, Shangda, Wang, Yashan, Yuan, Ruibin, Guo, Zhancheng, Tan, Xu, Zhang, Ge, Zhou, Monan, Chen, Jing, Mu, Xuefeng, Gao, Yuejie, Dong, Yuanliang, Liu, Jiafeng, Li, Xiaobing, Yu, Feng, Sun, Maosong
Current music information retrieval systems face challenges in managing linguistic diversity and integrating various musical modalities. These limitations reduce their effectiveness in a global, multimodal music environment. To address these …
External link:
http://arxiv.org/abs/2410.13267
Author:
Wang, Pei, Wu, Yanan, Wang, Zekun, Liu, Jiaheng, Song, Xiaoshuai, Peng, Zhongyuan, Deng, Ken, Zhang, Chenchen, Wang, Jiakai, Peng, Junran, Zhang, Ge, Guo, Hangyu, Zhang, Zhaoxiang, Su, Wenbo, Zheng, Bo
Large Language Models (LLMs) have displayed massive improvements in reasoning and decision-making skills and can hold natural conversations with users. Recently, many tool-use benchmark datasets have been proposed. However, existing datasets have the …
External link:
http://arxiv.org/abs/2410.11710
Author:
Gao, Bofei, Song, Feifan, Yang, Zhe, Cai, Zefan, Miao, Yibo, Dong, Qingxiu, Li, Lei, Ma, Chenghao, Chen, Liang, Xu, Runxin, Tang, Zhengyang, Wang, Benyou, Zan, Daoguang, Quan, Shanghaoran, Zhang, Ge, Sha, Lei, Zhang, Yichang, Ren, Xuancheng, Liu, Tianyu, Chang, Baobao
Recent advancements in large language models (LLMs) have led to significant breakthroughs in mathematical reasoning capabilities. However, existing benchmarks like GSM8K or MATH are now being solved with high accuracy (e.g., OpenAI o1 achieves 94.8% …
External link:
http://arxiv.org/abs/2410.07985
As multimodal large language models (MLLMs) continue to demonstrate increasingly competitive performance across a broad spectrum of tasks, more intricate and comprehensive benchmarks have been developed to assess these cutting-edge models. These benchmarks …
External link:
http://arxiv.org/abs/2410.06555