Showing 1 - 10 of 648 for search: '"Wu, Xiao-Ming"'
Language model continual learning (CL) has recently garnered significant interest due to its potential to adapt large language models (LLMs) to dynamic real-world environments without re-training. A key challenge in this field is catastrophic forgetting…
External link:
http://arxiv.org/abs/2408.05200
Recent research has demonstrated the feasibility of training efficient intent detectors based on pre-trained language models (PLMs) with limited labeled data. However, deploying these detectors in resource-constrained environments such as mobile devices…
External link:
http://arxiv.org/abs/2407.09943
Robotic grasping in clutter is a fundamental task in robotic manipulation. In this work, we propose an economic framework for 6-DoF grasp detection, aiming to reduce the resource cost of training while maintaining effective grasp performance…
External link:
http://arxiv.org/abs/2407.08366
In the real world, multi-modal data often appears in a streaming fashion, and there is a growing demand for similarity retrieval from such non-stationary data, especially at a large scale. In response to this need, online multi-modal hashing has gained…
External link:
http://arxiv.org/abs/2406.10776
Graph Transformers (GTs) have recently emerged as popular alternatives to traditional message-passing Graph Neural Networks (GNNs), due to their theoretically superior expressiveness and impressive performance reported on standard node classification…
External link:
http://arxiv.org/abs/2406.08993
Author:
Wang, Cong, Pan, Jinshan, Wang, Wei, Fu, Gang, Liang, Siyuan, Wang, Mengzhu, Wu, Xiao-Ming, Liu, Jun
This paper proposes UHDformer, a general Transformer for Ultra-High-Definition (UHD) image restoration. UHDformer contains two learning spaces: (a) learning in high-resolution space and (b) learning in low-resolution space. The former learns multi-level…
External link:
http://arxiv.org/abs/2406.00629
Author:
Wei, Yi-Lin, Jiang, Jian-Jian, Xing, Chengyi, Tan, Xiantuo, Wu, Xiao-Ming, Li, Hao, Cutkosky, Mark, Zheng, Wei-Shi
This paper explores a novel task, "Dexterous Grasp as You Say" (DexGYS), enabling robots to perform dexterous grasping based on human commands expressed in natural language. However, the development of this field is hindered by the lack of datasets…
External link:
http://arxiv.org/abs/2405.19291
We present a novel graph tokenization framework that generates structure-aware, semantic node identifiers (IDs) in the form of a short sequence of discrete codes, serving as symbolic representations of nodes. We employ vector quantization to compress…
External link:
http://arxiv.org/abs/2405.16435
Author:
Liu, Qijiong, Dong, Xiaoyu, Xiao, Jiaren, Chen, Nuo, Hu, Hengchang, Zhu, Jieming, Zhu, Chenxu, Sakai, Tetsuya, Wu, Xiao-Ming
Vector quantization, renowned for its unparalleled feature compression capabilities, has been a prominent topic in signal processing and machine learning research for several decades and remains widely utilized today. With the emergence of large models…
External link:
http://arxiv.org/abs/2405.03110
In this work, we propose a novel discriminative framework for dexterous grasp generation, named Dexterous Grasp TRansformer (DGTR), capable of predicting a diverse set of feasible grasp poses by processing the object point cloud with only one forward pass…
External link:
http://arxiv.org/abs/2404.18135