Showing 1 - 10 of 12,359 results for search: '"Qian,Chen"'
We introduce Quantum Hamiltonian Descent as a novel approach to solve the graph partition problem. By reformulating graph partition as a Quadratic Unconstrained Binary Optimization (QUBO) problem, we leverage QHD's quantum-inspired dynamics to identify …
External link:
http://arxiv.org/abs/2411.14696
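The snippet above only names the QUBO reformulation, not the paper's exact construction. As a hedged illustration of the general idea, the sketch below builds a standard QUBO matrix for balanced two-way graph partition (cut size plus a quadratic balance penalty, a common textbook formulation) and minimizes it by brute force on a toy graph; the helper names and the penalty weight `alpha` are this sketch's own choices, not taken from the paper.

```python
import itertools
import numpy as np

def partition_qubo(n, edges, alpha=2.0):
    """Upper-triangular QUBO matrix for balanced two-way graph partition.

    Binary variable x_i assigns vertex i to part 0 or part 1.
    Objective = cut size + alpha * ((sum x_i) - n/2)^2, with the
    constant alpha * n^2 / 4 term dropped (it does not affect argmin).
    """
    Q = np.zeros((n, n))
    for i, j in edges:
        i, j = min(i, j), max(i, j)
        # edge (i, j) is cut iff x_i + x_j - 2*x_i*x_j == 1
        Q[i, i] += 1.0
        Q[j, j] += 1.0
        Q[i, j] -= 2.0
    # balance penalty expanded with x_i^2 = x_i for binary variables:
    # alpha * ((sum x_i)^2 - n * sum x_i)
    for i in range(n):
        Q[i, i] += alpha * (1 - n)
        for j in range(i + 1, n):
            Q[i, j] += alpha * 2.0
    return Q

def brute_force_min(Q):
    """Exhaustively minimize x^T Q x over x in {0,1}^n (small n only)."""
    n = Q.shape[0]
    best_x, best_e = None, float("inf")
    for bits in itertools.product((0, 1), repeat=n):
        x = np.array(bits)
        e = float(x @ Q @ x)
        if e < best_e:
            best_x, best_e = x, e
    return best_x, best_e

# Two triangles joined by one bridge edge (2, 3): the balanced minimum
# cut separates the triangles, cutting exactly one edge.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
x, e = brute_force_min(partition_qubo(6, edges))
```

In the paper's setting a quantum or quantum-inspired optimizer (here, QHD) would replace the brute-force loop; the QUBO matrix itself is solver-agnostic.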
Author:
Zhang, Haoran, Deng, Junkai, Chen, Xuhui, Hou, Fei, Wang, Wencheng, Qin, Hong, Qian, Chen, He, Ying
Published in:
NeurIPS 2024
Traditional 3D shape reconstruction techniques from multi-view images, such as structure from motion and multi-view stereo, primarily focus on opaque surfaces. Similarly, recent advances in neural radiance fields and its variants also primarily address …
External link:
http://arxiv.org/abs/2411.05362
Recent advancements in Multimodal Large Language Models (MLLMs) have greatly improved their abilities in image understanding. However, these models often struggle with grasping pixel-level semantic details, e.g., the keypoints of an object. To bridge …
External link:
http://arxiv.org/abs/2411.01846
Author:
Li, Xin, Chu, Qizhi, Chen, Yubin, Liu, Yang, Liu, Yaoqi, Yu, Zekai, Chen, Weize, Qian, Chen, Shi, Chuan, Yang, Cheng
Graphs are widely used for modeling relational data in real-world scenarios, such as social networks and urban computing. Existing LLM-based graph analysis approaches either integrate graph neural networks (GNNs) for specific machine learning tasks, …
External link:
http://arxiv.org/abs/2410.18032
Author:
Kong, Jiayi, Zong, Chen, Luo, Jun, Xin, Shiqing, Hou, Fei, Jiang, Hanqing, Qian, Chen, He, Ying
The medial axis, a lower-dimensional shape descriptor, plays an important role in the field of digital geometry processing. Despite its importance, robust computation of the medial axis transform from diverse inputs, especially point clouds with defects, …
External link:
http://arxiv.org/abs/2410.17774
Ensuring awareness of fairness and privacy in Large Language Models (LLMs) is critical. Interestingly, we discover a counter-intuitive trade-off phenomenon that enhancing an LLM's privacy awareness through Supervised Fine-Tuning (SFT) methods significantly …
External link:
http://arxiv.org/abs/2410.16672
Protecting the intellectual property of open-source Large Language Models (LLMs) is very important, because training LLMs costs extensive computational resources and data. Therefore, model owners and third parties need to identify whether a suspect model …
External link:
http://arxiv.org/abs/2410.14273
Large Language Model (LLM) based multi-agent systems (MAS) show remarkable potential in collaborative problem-solving, yet they still face critical challenges: low communication efficiency, poor scalability, and a lack of effective parameter-updating …
External link:
http://arxiv.org/abs/2410.08115
Author:
Pang, Jinlong, Wei, Jiaheng, Shah, Ankit Parag, Zhu, Zhaowei, Wang, Yaxuan, Qian, Chen, Liu, Yang, Bao, Yujia, Wei, Wei
Instruction tuning is critical for adapting large language models (LLMs) to downstream tasks, and recent studies have demonstrated that small amounts of human-curated data can outperform larger datasets, challenging traditional data scaling laws. While …
External link:
http://arxiv.org/abs/2410.10877
Author:
Li, Xin, Chen, Weize, Chu, Qizhi, Li, Haopeng, Sun, Zhaojun, Li, Ran, Qian, Chen, Wei, Yiwei, Liu, Zhiyuan, Shi, Chuan, Sun, Maosong, Yang, Cheng
The need to analyze graphs is ubiquitous across various fields, from social networks to biological research and recommendation systems. Therefore, enabling the ability of large language models (LLMs) to process graphs is an important step toward more …
External link:
http://arxiv.org/abs/2409.19667