Showing 1 - 10
of 734
for search: '"Yang, YiBo"'
Author:
Han, Xue-Ying, Hua, Jun, Ji, Xiangdong, Lü, Cai-Dian, Schäfer, Andreas, Su, Yushan, Wang, Wei, Xu, Ji, Yang, Yibo, Zhang, Jian-Hui, Zhang, Qi-An, Zhao, Shuai
We develop an approach for calculating heavy quark effective theory (HQET) light-cone distribution amplitudes (LCDAs) by employing a sequential effective theory methodology. The theoretical foundation of the framework is established, elucidating how…
External link:
http://arxiv.org/abs/2410.18654
Few-shot class-incremental learning (FSCIL) confronts the challenge of integrating new classes into a model with minimal training samples while preserving the knowledge of previously learned classes. Traditional methods widely adopt static adaptation…
External link:
http://arxiv.org/abs/2407.06136
Author:
Yang, Yibo, Li, Xiaojie, Alfarra, Motasem, Hammoud, Hasan, Bibi, Adel, Torr, Philip, Ghanem, Bernard
Relieving the reliance of neural network training on a global back-propagation (BP) has emerged as a notable research topic due to the biological implausibility and huge memory consumption caused by BP. Among the existing solutions, local learning op…
External link:
http://arxiv.org/abs/2406.05222
Author:
Yang, Yibo, Li, Xiaojie, Zhou, Zhongzhu, Song, Shuaiwen Leon, Wu, Jianlong, Nie, Liqiang, Ghanem, Bernard
Current parameter-efficient fine-tuning (PEFT) methods build adapters widely agnostic of the context of downstream task to learn, or the context of important knowledge to maintain. As a result, there is often a performance gap compared to full-parame…
External link:
http://arxiv.org/abs/2406.05223
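The adapters this abstract refers to are often realized as low-rank updates to frozen pretrained weights. As a hedged illustration only (this is the common LoRA-style form, not necessarily the construction proposed in the paper; the names `lora_forward`, `A`, `B`, `alpha` are hypothetical):

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=1.0):
    """Parameter-efficient forward pass: the pretrained weight W stays
    frozen; only the low-rank factors A (r x d_in) and B (d_out x r)
    are trained, contributing an update alpha * B @ A on top of W."""
    return x @ (W + alpha * (B @ A)).T

rng = np.random.default_rng(0)
d_in, d_out, r = 8, 4, 2
W = rng.standard_normal((d_out, d_in))    # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01
B = np.zeros((d_out, r))                  # B initialized at zero: adapter starts as a no-op
x = rng.standard_normal((3, d_in))
y = lora_forward(x, W, A, B)
```

With `B` initialized to zero the adapted model reproduces the base model exactly, which is the usual reason for that initialization choice.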
Self-supervised learning has achieved remarkable success in acquiring high-quality representations from unlabeled data. The widely adopted contrastive learning framework aims to learn invariant representations by minimizing the distance between posit…
External link:
http://arxiv.org/abs/2403.12003
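The contrastive objective described here (pulling positive pairs together while pushing other samples apart) is commonly instantiated as an InfoNCE-style loss. A minimal NumPy sketch of that standard loss, not necessarily the exact objective used in this paper:

```python
import numpy as np

def info_nce(z1, z2, tau=0.5):
    """InfoNCE-style contrastive loss: rows of z1 and z2 are embeddings
    of two augmented views; matching row indices are positive pairs."""
    # L2-normalize so dot products become cosine similarities
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau                       # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # positives sit on the diagonal; minimize their negative log-likelihood
    return -np.mean(np.diag(log_prob))
```

Perfectly aligned positive pairs give a strictly lower loss than mismatched ones, which is the invariance pressure the abstract refers to.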
In the theory of lossy compression, the rate-distortion (R-D) function $R(D)$ describes how much a data source can be compressed (in bit-rate) at any given level of fidelity (distortion). Obtaining $R(D)$ for a given data source establishes the funda…
External link:
http://arxiv.org/abs/2310.18908
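For finite alphabets, $R(D)$ is classically computed numerically with the Blahut-Arimoto algorithm; the paper above concerns estimating it for general data sources, so the sketch below is only the textbook baseline, not the paper's method. The slope parameter `s < 0` traces out one point of the curve per run:

```python
import numpy as np

def blahut_arimoto(p_x, dist, s, iters=500):
    """One point on the rate-distortion curve via Blahut-Arimoto.
    p_x: source distribution over n symbols; dist: (n, m) distortion
    matrix d(x, xhat); s < 0: slope parameter along the R(D) curve."""
    n, m = dist.shape
    q = np.full(m, 1.0 / m)          # reproduction marginal q(xhat)
    A = np.exp(s * dist)             # exp(s * d) kernel
    for _ in range(iters):
        c = A @ q
        p_cond = (A * q) / c[:, None]   # p(xhat | x) ∝ q(xhat) exp(s d)
        q = p_x @ p_cond                # re-estimate the marginal
    c = A @ q
    p_cond = (A * q) / c[:, None]
    D = np.sum(p_x[:, None] * p_cond * dist)            # expected distortion
    R = np.sum(p_x[:, None] * p_cond *
               np.log2(p_cond / q[None, :] + 1e-300))   # mutual information, bits
    return R, D

# Fair-coin source with Hamming distortion: analytically R(D) = 1 - H2(D)
R, D = blahut_arimoto(np.array([0.5, 0.5]),
                      np.array([[0.0, 1.0], [1.0, 0.0]]), s=-2.0)
```

For this binary symmetric case the output can be checked against the closed form $R(D) = 1 - H_2(D)$.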
Author:
Gao, Peifeng, Xu, Qianqian, Yang, Yibo, Wen, Peisong, Shao, Huiyang, Yang, Zhiyong, Ghanem, Bernard, Huang, Qingming
Neural Collapse (NC) is a well-known phenomenon of deep neural networks in the terminal phase of training (TPT). It is characterized by the collapse of features and classifier into a symmetrical structure, known as simplex equiangular tight frame (ETF)…
External link:
http://arxiv.org/abs/2310.08358
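A simplex ETF for $K$ classes consists of $K$ unit vectors with pairwise cosine exactly $-1/(K-1)$; one standard construction centers and renormalizes the rows of the identity. A small sketch verifying that equiangular structure:

```python
import numpy as np

def simplex_etf(K):
    """K classifier vectors forming a simplex equiangular tight frame:
    unit norm, pairwise cosine exactly -1/(K-1)."""
    # Center the K standard basis vectors of R^K, then renormalize each row
    M = np.eye(K) - np.ones((K, K)) / K
    return M / np.linalg.norm(M, axis=1, keepdims=True)

K = 5
W = simplex_etf(K)
G = W @ W.T   # Gram matrix: 1 on the diagonal, -1/(K-1) = -0.25 off it
```

The Gram matrix makes the "maximally separated" geometry of NC explicit: every pair of class vectors meets at the same obtuse angle.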
Author:
Bie, Fengxiang, Yang, Yibo, Zhou, Zhongzhu, Ghanem, Adam, Zhang, Minjia, Yao, Zhewei, Wu, Xiaoxia, Holmes, Connor, Golnari, Pareesa, Clifton, David A., He, Yuxiong, Tao, Dacheng, Song, Shuaiwen Leon
Text-to-image generation (TTI) refers to the usage of models that could process text input and generate high fidelity images based on text descriptions. Text-to-image generation using neural networks could be traced back to the emergence of Generativ…
External link:
http://arxiv.org/abs/2309.00810