Showing 1 - 10 of 6,857 for search: '"Zhang, Si"'
Author:
Wang, Limei, Hassani, Kaveh, Zhang, Si, Fu, Dongqi, Yuan, Baichuan, Cong, Weilin, Hua, Zhigang, Wu, Hao, Yao, Ning, Long, Bo
Transformers serve as the backbone architectures of Foundational Models, where a domain-specific tokenizer helps them adapt to various domains. Graph Transformers (GTs) have recently emerged as a leading model in geometric deep learning, outperforming …
External link:
http://arxiv.org/abs/2410.13798
Author:
Xu, Zhe, Hassani, Kaveh, Zhang, Si, Zeng, Hanqing, Yasunaga, Michihiro, Wang, Limei, Fu, Dongqi, Yao, Ning, Long, Bo, Tong, Hanghang
Language Models (LMs) are increasingly challenging the dominance of domain-specific models, including Graph Neural Networks (GNNs) and Graph Transformers (GTs), in graph learning tasks. Following this trend, we propose a novel approach that empowers …
External link:
http://arxiv.org/abs/2410.02296
This paper investigates the coexistence of positive and negative information in the context of information-epidemic dynamics on multiplex networks. In accordance with the tenets of mean field theory, we present not only the analytic solution …
External link:
http://arxiv.org/abs/2409.15605
Author:
Li, Gongchu, Chen, Lei, Zhang, Si-Qi, Hong, Xu-Song, Xu, Huaqing, Liu, Yuancheng, Zhou, You, Chen, Geng, Li, Chuan-Feng, Hamma, Alioscia, Guo, Guang-Can
Magic states and magic gates are crucial for achieving universal computation, but some important questions about how magic resources should be implemented to attain quantum advantage have remained unexplored, for instance, in the context of Measurement …
External link:
http://arxiv.org/abs/2408.01980
Author:
Li, Gong-Chu, Chen, Lei, Zhang, Si-Qi, Hong, Xu-Song, Zhou, You, Chen, Geng, Li, Chuan-Feng, Guo, Guang-Can
Entanglement plays a fundamental role in quantum physics and information processing. Here, we develop an unbiased estimator for mixed-state entanglement in the few-shot scenario and directly estimate it using random unitary evolution in a photonic system …
External link:
http://arxiv.org/abs/2405.20696
Author:
Yang, Hang, Guo, Jing, Qi, Jianchuan, Xie, Jinliang, Zhang, Si, Yang, Siqi, Li, Nan, Xu, Ming
This paper presents a novel method for parsing and vectorizing semi-structured data to enhance the functionality of Retrieval-Augmented Generation (RAG) within Large Language Models (LLMs). We developed a comprehensive pipeline for converting various …
External link:
http://arxiv.org/abs/2405.03989
Author:
Fu, Dongqi, Hua, Zhigang, Xie, Yan, Fang, Jin, Zhang, Si, Sancak, Kaan, Wu, Hao, Malevich, Andrey, He, Jingrui, Long, Bo
The graph transformer has proven to be an effective graph learning method, owing to its adoption of an attention mechanism capable of capturing expressive representations from the complex topological and feature information of graphs. Graph transformer …
External link:
http://arxiv.org/abs/2403.16030
Author:
Zhang, Si, Fong, Philip W. L.
This paper proposes a computational model for policy administration. As an organization evolves, new users and resources are gradually placed under the mediation of the access control model. Each time such new entities are added, the policy …
External link:
http://arxiv.org/abs/2401.00086
Author:
Zhang, Si, Fong, Philip W. L.
Protection domains are one of the most enduring concepts in Access Control. Entities with identical access control characteristics are grouped under the same protection domain, and domain-based policies assign access privileges to the protection domains …
External link:
http://arxiv.org/abs/2312.15596
Author:
Guo, Jing, Li, Nan, Qi, Jianchuan, Yang, Hang, Li, Ruiqiao, Feng, Yuzhen, Zhang, Si, Xu, Ming
Large language models (LLMs) have achieved impressive linguistic capabilities. However, a key limitation persists in their lack of human-like memory faculties. LLMs exhibit constrained memory retention across sequential interactions, hindering …
External link:
http://arxiv.org/abs/2312.17259