Showing 1 - 10 of 957 results for the search: '"LI Guoqi"'
Published in:
Redai dili, Vol 42, Iss 11, Pp 1806-1815 (2022)
Against the background of the increasing fragmentation of freight demand, spatial-structure analysis of urban networks based on road less-than-truckload (LTL) dedicated lines has positive implications for enriching flow space theory and emp…
External link:
https://doaj.org/article/5c44539d30d84a409eb34c0426878172
Authors:
Qiu, Xuerui, Yao, Man, Zhang, Jieyuan, Chou, Yuhong, Qiao, Ning, Zhou, Shibo, Xu, Bo, Li, Guoqi
Spiking Neural Networks (SNNs) provide an energy-efficient way to extract 3D spatio-temporal features. Point clouds are sparse 3D spatial data, which suggests that SNNs should be well-suited for processing them. However, when applying SNNs to point clouds…
External link:
http://arxiv.org/abs/2412.07360
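As a concrete aside on the technique named in this entry: below is a minimal sketch of feeding a point cloud to a spiking layer, assuming simple rate coding and a leaky integrate-and-fire update. It is illustrative only and is not the method of arXiv:2412.07360; every value is a toy placeholder.

import numpy as np

rng = np.random.default_rng(0)

# Toy point cloud: 128 points with (x, y, z) coordinates normalized to [0, 1].
points = rng.random((128, 3))

# Rate coding: treat each coordinate as a firing probability and sample
# T binary spike frames, giving a sparse {0, 1} spatio-temporal tensor.
T = 16
spikes = (rng.random((T, *points.shape)) < points).astype(np.float32)

# One spiking layer: leaky integration of weighted input spikes, a fixed
# firing threshold, and a hard reset for neurons that fired.
w = rng.normal(scale=0.1, size=(3, 8))   # 3 input features -> 8 neurons
v = np.zeros((points.shape[0], 8))       # membrane potential per point and neuron
threshold, decay = 0.5, 0.9
for t in range(T):
    v = decay * v + spikes[t] @ w
    out = (v >= threshold).astype(np.float32)
    v = np.where(out > 0, 0.0, v)
print("input spike density:", spikes.mean())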
Recent advances in spiking neural networks (SNNs) have a predominant focus on network architectures, while relatively little attention has been paid to the underlying neuron model. The point neuron models, a cornerstone of deep SNNs, pose a bottleneck…
External link:
http://arxiv.org/abs/2412.06355
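The "point neuron models" this snippet refers to are typically leaky integrate-and-fire (LIF) units. A minimal discrete-time LIF update, written from the standard textbook form rather than from the paper itself:

import numpy as np

def lif_step(v, x, tau=2.0, v_th=1.0, v_reset=0.0):
    """One discrete-time step of a leaky integrate-and-fire point neuron."""
    v = v + (x - v) / tau                  # leak toward the input current
    spike = (v >= v_th).astype(float)      # all-or-nothing output
    v = np.where(spike > 0, v_reset, v)    # hard reset after firing
    return spike, v

v = np.zeros(4)
for _ in range(10):                        # constant drive to four neurons
    spike, v = lif_step(v, np.array([0.6, 0.1, 1.2, 0.3]))
print("spikes:", spike, "potentials:", v)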
Autor:
Li, Mingjing, Zhou, Huihui, Xu, Xiaofeng, Zhong, Zhiwei, Quan, Puli, Zhu, Xueke, Lin, Yanyu, Lin, Wenjie, Guo, Hongyu, Zhang, Junchao, Ma, Yunhao, Wang, Wei, Ma, Zhengyu, Li, Guoqi, Cui, Xiaoxin, Tian, Yonghong
There is a growing necessity for edge training to adapt to dynamically changing environments. Neuromorphic computing represents a significant pathway to high-efficiency intelligent computation at the energy-constrained edge, but existing neuromorphic architectures…
External link:
http://arxiv.org/abs/2412.05302
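For context on why on-chip training is attractive: neuromorphic hardware favors learning rules that use only locally available signals. The sketch below is a generic Hebbian-style local update for a single spiking layer; it is not the architecture or learning rule proposed in arXiv:2412.05302, and all sizes and constants are invented for illustration.

import numpy as np

rng = np.random.default_rng(1)
n_in, n_out = 16, 4
w = rng.normal(scale=0.1, size=(n_in, n_out))
v = np.zeros(n_out)
lr, threshold, decay = 0.01, 1.0, 0.9

for _ in range(200):
    pre = (rng.random(n_in) < 0.2).astype(float)   # incoming spikes
    v = decay * v + pre @ w                        # integrate
    post = (v >= threshold).astype(float)          # fire
    v = np.where(post > 0, 0.0, v)                 # reset
    # Local update: each synapse only needs its own pre- and post-spike,
    # so no error signal has to be propagated backwards through the network.
    w += lr * np.outer(pre, post)
    w *= 0.999                                     # mild decay keeps weights bounded

print("mean weight after local training:", float(w.mean()))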
Authors:
Yao, Man, Qiu, Xuerui, Hu, Tianxiang, Hu, Jiakui, Chou, Yuhong, Tian, Keyu, Liao, Jianxing, Leng, Luziwei, Xu, Bo, Li, Guoqi
The ambition of brain-inspired Spiking Neural Networks (SNNs) is to become a low-power alternative to traditional Artificial Neural Networks (ANNs). This work addresses two major challenges in realizing this vision: the performance gap between SNNs and ANNs…
External link:
http://arxiv.org/abs/2411.16061
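One way to see the "low-power alternative" claim: binary spike activations turn a dense multiply-accumulate layer into a sparse sum over the weight rows selected by the spikes. A toy comparison, purely illustrative and not taken from the paper:

import numpy as np

rng = np.random.default_rng(2)
w = rng.normal(size=(256, 128))
x_ann = rng.random(256)                              # dense real-valued activation
x_snn = (rng.random(256) < 0.1).astype(float)        # sparse binary spike vector

y_ann = x_ann @ w                                    # every weight is multiplied
y_snn = w[x_snn > 0].sum(axis=0)                     # only fired rows are summed

print("active inputs:", int(x_snn.sum()), "of", x_snn.size)
print("sparse sum equals the masked matmul:", np.allclose(y_snn, x_snn @ w))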
Authors:
Chou, Yuhong, Yao, Man, Wang, Kexin, Pan, Yuqi, Zhu, Ruijie, Zhong, Yiran, Qiao, Yu, Wu, Jibin, Xu, Bo, Li, Guoqi
Various linear complexity models, such as Linear Transformer (LinFormer), State Space Model (SSM), and Linear RNN (LinRNN), have been proposed to replace the conventional softmax attention in Transformer structures. However, the optimal design of the…
External link:
http://arxiv.org/abs/2411.10741
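The common thread behind LinFormer/SSM/LinRNN-style models is avoiding the explicit n x n attention matrix. A minimal sketch of the kernelized (linear) attention identity, with an arbitrary positive feature map standing in for whatever a particular model actually uses:

import numpy as np

rng = np.random.default_rng(3)
n, d = 512, 32
Q, K, V = rng.normal(size=(3, n, d))

# Softmax attention: an explicit n x n score matrix, O(n^2 d) time.
scores = Q @ K.T / np.sqrt(d)
a = np.exp(scores - scores.max(axis=1, keepdims=True))
softmax_out = (a / a.sum(axis=1, keepdims=True)) @ V

# Linear attention: with a feature map phi, associativity gives
# phi(Q) (phi(K)^T V), so the n x n matrix is never formed -- O(n d^2).
phi = lambda x: np.maximum(x, 0.0) + 1e-6            # a simple positive map
kv = phi(K).T @ V                                    # d x d key-value summary
z = phi(K).sum(axis=0)                               # normalizer
linear_out = (phi(Q) @ kv) / (phi(Q) @ z)[:, None]

print(softmax_out.shape, linear_out.shape)           # same shape, different cost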
Authors:
Xie, Shenghao, Zu, Wenqiang, Zhao, Mingyang, Su, Duo, Liu, Shilong, Shi, Ruohua, Li, Guoqi, Zhang, Shanghang, Ma, Lei
Autoregression in large language models (LLMs) has shown impressive scalability by unifying all language tasks into the next-token prediction paradigm. Recently, there has been growing interest in extending this success to vision foundation models. In this…
External link:
http://arxiv.org/abs/2410.22217
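The "next-token prediction paradigm" mentioned here is just iterated sampling from a conditional distribution. A self-contained toy loop; toy_logits is a hypothetical stand-in, where a real LLM (or a visual tokenizer plus an autoregressive prior) would supply the logits:

import numpy as np

rng = np.random.default_rng(4)
vocab = 1000

def toy_logits(prefix):
    """Stand-in for a trained model's forward pass over the token prefix."""
    seed = sum((i + 1) * t for i, t in enumerate(prefix[-8:])) % (2**31)
    return np.random.default_rng(seed).normal(size=vocab)

tokens = [0]                                    # start-of-sequence token
for _ in range(16):
    logits = toy_logits(tokens)
    p = np.exp(logits - logits.max())
    p /= p.sum()
    tokens.append(int(rng.choice(vocab, p=p)))  # sample, append, repeat
print(tokens)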
Modeling long sequences is crucial for various large-scale models; however, extending existing architectures to handle longer sequences presents significant technical and resource challenges. In this paper, we propose an efficient and flexible attention…
External link:
http://arxiv.org/abs/2410.04211
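For scale: one standard way to keep attention tractable on long sequences is to restrict each query to a local window. This is only one of many options and not necessarily the mechanism proposed in arXiv:2410.04211; a minimal sketch:

import numpy as np

rng = np.random.default_rng(5)
n, d, window = 1024, 16, 64
Q, K, V = rng.normal(size=(3, n, d))

out = np.zeros((n, d))
for i in range(n):                              # each query sees at most `window` keys
    lo = max(0, i - window + 1)
    s = Q[i] @ K[lo:i + 1].T / np.sqrt(d)
    p = np.exp(s - s.max())
    out[i] = (p / p.sum()) @ V[lo:i + 1]
print("output shape:", out.shape)               # O(n * window) instead of O(n^2)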
The Forward-Forward (FF) algorithm was recently proposed as a local learning method to address the limitations of backpropagation (BP), offering biological plausibility along with memory-efficient and highly parallelized computational benefits. However…
External link:
http://arxiv.org/abs/2408.14925
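The Forward-Forward idea in one layer: push a "goodness" score (the sum of squared activities) up on positive data and down on negative data, using only quantities local to that layer. A compact numpy sketch in the spirit of Hinton's formulation, with made-up Gaussian data standing in for real positive and negative samples:

import numpy as np

rng = np.random.default_rng(6)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
goodness = lambda h: float((h ** 2).sum())

d_in, d_hid, theta, lr = 20, 32, 5.0, 0.03
W = rng.normal(scale=0.1, size=(d_hid, d_in))

for _ in range(500):
    x_pos = rng.normal(loc=+0.5, size=d_in)      # stand-in "positive" sample
    x_neg = rng.normal(loc=-0.5, size=d_in)      # stand-in "negative" sample
    for x, y in ((x_pos, 1.0), (x_neg, 0.0)):
        h = np.maximum(W @ x, 0.0)               # forward pass of this layer only
        p = sigmoid(goodness(h) - theta)         # P(sample is positive)
        # Gradient of the logistic loss on goodness; everything is local,
        # no error is backpropagated from other layers.
        W -= lr * 2.0 * (p - y) * np.outer(h, x)

h_pos = np.maximum(W @ rng.normal(loc=+0.5, size=d_in), 0.0)
h_neg = np.maximum(W @ rng.normal(loc=-0.5, size=d_in), 0.0)
print("goodness positive vs negative:", goodness(h_pos), goodness(h_neg))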
We introduce AiM, an autoregressive (AR) image generative model based on the Mamba architecture. AiM employs Mamba, a novel state-space model characterized by its exceptional performance for long-sequence modeling with linear time complexity, to supplant…
External link:
http://arxiv.org/abs/2408.12245
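Mamba's linear time complexity comes from a recurrent state-space scan rather than pairwise attention. Below is a minimal fixed-parameter (non-selective) discretized SSM, shown only to illustrate the O(n) recurrence that such models build on; Mamba itself makes A, B, and C input-dependent.

import numpy as np

rng = np.random.default_rng(7)
n, d_state = 256, 8

# h_t = A h_{t-1} + B x_t ,  y_t = C h_t
A = np.diag(rng.uniform(0.8, 0.99, size=d_state))   # stable diagonal transition
B = rng.normal(scale=0.1, size=d_state)
C = rng.normal(scale=0.1, size=d_state)

x = rng.normal(size=n)                               # a 1-D token/feature sequence
h = np.zeros(d_state)
y = np.empty(n)
for t in range(n):                                   # single O(n) scan
    h = A @ h + B * x[t]
    y[t] = C @ h
print("output shape:", y.shape)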