Showing 1 - 10 of 419
for search: '"Cao, Yixin"'
Author:
Zhou, Chijin, Zhang, Shuyang, Dai, Xueliang, Cao, Yixin, Yuan, Ye, Xia, Chengjie, Zeng, Zhikun, Wang, Yujie
Using high-resolution x-ray tomography, we experimentally investigate the bridge structures in tapped granular packings composed of particles with varying friction coefficients. We find that gravity can induce subtle structural changes on the load-bearing…
External link:
http://arxiv.org/abs/2409.18093
This study presents a novel evaluation framework for the Vision-Language Navigation (VLN) task. It aims to diagnose current models for various instruction categories at a finer-grained level. The framework is structured around the context-free grammar… (a toy grammar sketch follows the link below).
External link:
http://arxiv.org/abs/2409.17313
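Since the snippet above describes an evaluation framework built around a context-free grammar, here is a minimal Python sketch of that general idea: sampling navigation instructions from a toy grammar so each sample is tagged by the production it came from. The nonterminals, categories, and phrasings are invented for illustration and are not the paper's grammar.

import random

# Hypothetical toy grammar for navigation instructions; the nonterminals
# and terminals below are illustrative, not taken from the paper.
GRAMMAR = {
    "INSTR": [["ACTION"], ["ACTION", "then", "ACTION"]],
    "ACTION": [["MOVE", "DIRECTION"], ["STOP_AT", "LANDMARK"]],
    "MOVE": [["walk"], ["turn"]],
    "DIRECTION": [["left"], ["right"], ["straight"]],
    "STOP_AT": [["stop at"], ["wait by"]],
    "LANDMARK": [["the door"], ["the stairs"], ["the sofa"]],
}

def expand(symbol: str, rng: random.Random) -> str:
    """Recursively expand a nonterminal into a flat instruction string."""
    if symbol not in GRAMMAR:  # terminal token
        return symbol
    production = rng.choice(GRAMMAR[symbol])
    return " ".join(expand(s, rng) for s in production)

if __name__ == "__main__":
    rng = random.Random(0)
    # Sample a few instructions to build a small category-tagged test set.
    for _ in range(3):
        print(expand("INSTR", rng))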
Author:
Jiang, Jin, Yan, Yuchen, Liu, Yang, Jin, Yonggang, Peng, Shuai, Zhang, Mengdi, Cai, Xunliang, Cao, Yixin, Gao, Liangcai, Tang, Zhi
In this paper, we present a novel approach, called LogicPro, to enhance the complex logical reasoning of Large Language Models (LLMs) through program examples. We do this effectively by simply utilizing widely available algorithmic problems and their code solutions… (a toy sketch of this idea follows the link below).
External link:
http://arxiv.org/abs/2409.12929
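The snippet describes building reasoning data from algorithmic problems and their code solutions. Below is a hedged Python sketch of one plausible reading of that idea: execute a reference solution while logging intermediate variables, so the trace can double as a step-by-step reasoning chain for training. The problem, trace format, and function names are illustrative assumptions, not the paper's pipeline.

# Derive a step-by-step reasoning trace from an algorithmic problem's code
# solution by recording intermediate variables at each step.

def two_sum(nums, target):
    """Reference solution whose intermediate states we record."""
    trace = []
    seen = {}  # value -> index
    for i, x in enumerate(nums):
        need = target - x
        trace.append(f"step {i}: x={x}, need={need}, seen={seen}")
        if need in seen:
            trace.append(f"found pair ({seen[need]}, {i})")
            return (seen[need], i), trace
        seen[x] = i
    return None, trace

if __name__ == "__main__":
    answer, steps = two_sum([2, 7, 11, 15], 9)
    # The trace serves as a gold reasoning chain for an LLM training example.
    print("\n".join(steps))
    print("answer:", answer)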
Author:
Cao, Yixin, Krawczyk, Tomasz
We identify all minimal chordal graphs that are not circular-arc graphs, thereby resolving one of "the main open problems" concerning the structures of circular-arc graphs as posed by Durán, Grippo, and Safe in 2011. The problem had been attempted… (a brute-force chordality sketch follows the link below).
External link:
http://arxiv.org/abs/2409.02733
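Since the result above concerns minimal chordal graphs that are not circular-arc graphs, the following Python sketch illustrates only the chordality side: a graph is chordal iff repeatedly deleting a simplicial vertex (one whose neighborhood is a clique) empties it. This is a textbook brute-force check, not the paper's technique; recognizing circular-arc graphs is considerably harder.

# Chordality test via greedy simplicial elimination.

def is_chordal(adj):
    """adj: dict mapping vertex -> set of neighbors (undirected graph)."""
    adj = {v: set(ns) for v, ns in adj.items()}  # work on a copy
    remaining = set(adj)
    while remaining:
        # Find a simplicial vertex: its neighbors are pairwise adjacent.
        simplicial = next(
            (v for v in remaining
             if all(u in adj[w] for u in adj[v] for w in adj[v] if u < w)),
            None)
        if simplicial is None:
            return False  # no simplicial vertex left: not chordal
        for u in adj[simplicial]:
            adj[u].discard(simplicial)
        remaining.remove(simplicial)
        del adj[simplicial]
    return True

if __name__ == "__main__":
    c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}  # 4-cycle: not chordal
    k4 = {i: {j for j in range(4) if j != i} for i in range(4)}
    print(is_chordal(c4), is_chordal(k4))  # False True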
Author:
Yan, Yuchen, Jiang, Jin, Liu, Yang, Cao, Yixin, Xu, Xin, Zhang, Mengdi, Cai, Xunliang, Shao, Jian
Self-correction is a novel method that can stimulate the potential reasoning abilities of large language models (LLMs). It involves detecting and correcting errors during the inference process when LLMs solve reasoning problems. However, recent works… (a sketch of such a loop follows the link below).
External link:
http://arxiv.org/abs/2409.01524
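The snippet describes detecting and correcting errors during inference. Here is a minimal, hypothetical Python sketch of such a loop; generate and critique are placeholder callables standing in for model calls, not an API from the paper.

from typing import Callable, Optional

def self_correct(question: str,
                 generate: Callable[[str], str],
                 critique: Callable[[str, str], Optional[str]],
                 max_rounds: int = 3) -> str:
    """Answer, then iteratively critique and revise until no error is found."""
    answer = generate(question)
    for _ in range(max_rounds):
        feedback = critique(question, answer)  # None means no error detected
        if feedback is None:
            break
        # Re-prompt with the detected error so the model can revise its answer.
        answer = generate(f"{question}\nPrevious answer: {answer}\n"
                          f"Feedback: {feedback}\nRevised answer:")
    return answer

if __name__ == "__main__":
    # Toy stand-ins: the model always answers "4"; the critic accepts it.
    print(self_correct("2+2?", lambda p: "4", lambda q, a: None))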
Large Language Models (LLMs) are versatile and demonstrate impressive generalization ability by mining and learning information from extensive unlabeled text. However, they still exhibit reasoning mistakes, often stemming from knowledge deficiencies…
External link:
http://arxiv.org/abs/2408.11431
Author:
Cao, Yixin, Krawczyk, Tomasz
McConnell [FOCS 2001] presented a flipping transformation from circular-arc graphs to interval graphs with certain patterns of representations. Beyond its algorithmic implications, this transformation is instrumental in identifying all minimal graphs… (a simplified sketch of the flip follows the link below).
External link:
http://arxiv.org/abs/2408.10892
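As a toy illustration of the flipping idea, the Python sketch below cuts the circle at angle 0 and replaces every arc passing through the cut by its complementary arc, producing plain intervals. McConnell's actual transformation also tracks which arcs were flipped and the resulting representation patterns; this simplified version only records a flag.

# Cut the circle open at angle 0 and "flip" arcs that pass through the cut.

def flip_arcs(arcs):
    """arcs: list of (start, end) angles in [0, 360), read clockwise.
    Returns (interval, flipped) pairs on the cut-open circle."""
    intervals = []
    for start, end in arcs:
        if start <= end:  # arc avoids the cut point: keep it as an interval
            intervals.append(((start, end), False))
        else:             # arc wraps through 0: take its complement
            intervals.append(((end, start), True))
    return intervals

if __name__ == "__main__":
    print(flip_arcs([(10, 80), (350, 40), (200, 300)]))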
A graph is well-(edge-)dominated if every minimal (edge) dominating set is minimum. A graph is equimatchable if every maximal matching is maximum. We study these concepts on strong product graphs. We fully characterize well-edge-dominated and equimatchable… (a brute-force equimatchability check follows the link below).
External link:
http://arxiv.org/abs/2407.01121
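The definitions above admit a direct brute-force check: a graph is equimatchable iff all of its maximal matchings have the same size. The Python sketch below enumerates the maximal matchings of tiny graphs; it is an exponential sanity check of the definition, not the paper's method, which concerns strong product graphs.

from itertools import combinations

def is_matching(edges):
    """True if no two edges in the set share an endpoint."""
    used = set()
    for u, v in edges:
        if u in used or v in used:
            return False
        used.update((u, v))
    return True

def is_maximal(matching, all_edges):
    covered = {x for e in matching for x in e}
    # Maximal: every edge of the graph touches a matched vertex.
    return all(u in covered or v in covered for u, v in all_edges)

def equimatchable(all_edges):
    sizes = set()
    for k in range(len(all_edges) + 1):
        for subset in combinations(all_edges, k):
            if is_matching(subset) and is_maximal(subset, all_edges):
                sizes.add(k)
    return len(sizes) == 1

if __name__ == "__main__":
    triangle = [(0, 1), (1, 2), (0, 2)]  # K3: every maximal matching has size 1
    path4 = [(0, 1), (1, 2), (2, 3)]     # P4: maximal matchings of sizes 1 and 2
    print(equimatchable(triangle), equimatchable(path4))  # True False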
Author:
Ma, Yubo, Zang, Yuhang, Chen, Liangyu, Chen, Meiqi, Jiao, Yizhu, Li, Xinze, Lu, Xinyuan, Liu, Ziyu, Ma, Yan, Dong, Xiaoyi, Zhang, Pan, Pan, Liangming, Jiang, Yu-Gang, Wang, Jiaqi, Cao, Yixin, Sun, Aixin
Understanding documents with rich layouts and multi-modal components is a long-standing and practical task. Recent Large Vision-Language Models (LVLMs) have made remarkable strides in various tasks, particularly in single-page document understanding…
External link:
http://arxiv.org/abs/2407.01523
Author:
Ying, Jiahao, Lin, Mingbao, Cao, Yixin, Tang, Wei, Wang, Bo, Sun, Qianru, Huang, Xuanjing, Yan, Shuicheng
This paper introduces the innovative "LLMs-as-Instructors" framework, which leverages advanced Large Language Models (LLMs) to autonomously enhance the training of smaller target models. Inspired by the theory of "Learning from Errors", this framework… (a sketch of one training round follows the link below).
External link:
http://arxiv.org/abs/2407.00497
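The snippet describes a learning-from-errors setup, so here is a hedged Python sketch of one training round under that reading: collect the target model's wrong answers and let an instructor model synthesize new training pairs from them. The names target, instructor, and finetune are hypothetical placeholders, not the framework's actual API.

from typing import Callable, List, Tuple

def instruct_round(dataset: List[Tuple[str, str]],
                   target: Callable[[str], str],
                   instructor: Callable[[str, str, str], List[Tuple[str, str]]],
                   finetune: Callable[[List[Tuple[str, str]]], None]) -> None:
    """One round: gather the target model's errors, teach from them."""
    errors = []
    for question, gold in dataset:
        prediction = target(question)
        if prediction != gold:
            errors.append((question, gold, prediction))
    new_data = []
    for question, gold, wrong in errors:
        # Instructor analyzes the failure and emits new (question, answer) pairs.
        new_data.extend(instructor(question, gold, wrong))
    finetune(new_data)

if __name__ == "__main__":
    data = [("2+2?", "4"), ("3*3?", "9")]
    weak = lambda q: "4"                        # toy target: always answers "4"
    teach = lambda q, gold, wrong: [(q, gold)]  # toy instructor: replay gold pair
    instruct_round(data, weak, teach, lambda d: print("new training data:", d))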