Showing 1 - 10
of 243
for the search: '"Shang, Fanhua"'
Large-scale multimodal models have shown excellent performance over a series of tasks, powered by the large corpus of paired multimodal training data. Generally, they are always assumed to receive modality-complete inputs. However, this simple assumption …
External link:
http://arxiv.org/abs/2410.06558
Sign language videos are an important medium for spreading and learning sign language. However, most existing human image synthesis methods produce sign language images with details that are distorted, blurred, or structurally incorrect. They also …
External link:
http://arxiv.org/abs/2409.16709
Large Language Models (LLMs) showcase remarkable performance and robust deductive capabilities, yet their expansive size complicates deployment and raises environmental concerns due to substantial resource consumption. …
External link:
http://arxiv.org/abs/2407.15508
Parallel Continual Learning (PCL) tasks investigate training methods for continual learning with multi-source input, where data from different tasks are learned as they arrive. PCL offers high training efficiency and is well-suited for complex …
External link:
http://arxiv.org/abs/2407.08214
Multimodal MRIs play a crucial role in clinical diagnosis and treatment. Feature disentanglement (FD)-based methods, which aim at learning superior feature representations for multimodal data analysis, have achieved significant success in multimodal learning …
External link:
http://arxiv.org/abs/2407.04916
Author:
Shi, Ziqi; Lyu, Fan; Liu, Ye; Shang, Fanhua; Hu, Fuyuan; Feng, Wei; Zhang, Zhang; Wang, Liang
Continual Test-Time Adaptation (CTTA) is an emerging and challenging task where a model trained in a source domain must adapt to continuously changing conditions during testing, without access to the original source data. CTTA is prone to error accumulation …
External link:
http://arxiv.org/abs/2405.14602
Author:
Lyu, Fan; Liu, Daofeng; Zhao, Linglan; Zhang, Zhang; Shang, Fanhua; Hu, Fuyuan; Feng, Wei; Wang, Liang
Online Continual Learning (OCL) empowers machine learning models to acquire new knowledge online across a sequence of tasks. However, OCL faces a significant challenge: catastrophic forgetting, wherein the model learned in previous tasks is substantially …
External link:
http://arxiv.org/abs/2405.09133
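Catastrophic forgetting in OCL is commonly mitigated with a small replay buffer that retains examples from earlier in the stream. As a hypothetical illustration (a standard baseline, not necessarily this paper's method), a reservoir-sampled buffer keeps a uniform sample of everything seen so far:

```python
import random

class ReservoirBuffer:
    """Fixed-size replay buffer using reservoir sampling.

    Every streamed example ends up retained with equal probability
    capacity / seen, a common baseline in online continual learning."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, example):
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(example)
        else:
            # Replace a random slot with probability capacity / seen.
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = example

    def sample(self, k):
        """Draw a mini-batch of past examples for rehearsal."""
        return self.rng.sample(self.buffer, min(k, len(self.buffer)))

# Simulate a stream of 1000 examples with a buffer of 100 slots.
buf = ReservoirBuffer(capacity=100)
for x in range(1000):
    buf.add(x)
```

During training, each new mini-batch would be mixed with `buf.sample(k)` so gradients also reflect earlier tasks.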
The goal of Continual Learning (CL) is to continuously learn from new data streams and accomplish the corresponding tasks. Previously studied CL assumes that data are given in sequence, nose-to-tail, for different tasks, thus indeed belonging to Serial …
External link:
http://arxiv.org/abs/2401.01054
Real-world data is extremely imbalanced and presents a long-tailed distribution, resulting in models that are biased towards classes with sufficient samples and perform poorly on rare classes. Recent methods propose to rebalance classes, but they …
External link:
http://arxiv.org/abs/2310.20490
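One common way to rebalance a long-tailed distribution is to reweight the loss per class. A minimal sketch of the "effective number of samples" heuristic (an assumption for illustration, not necessarily the rebalancing this paper critiques or proposes):

```python
def class_balanced_weights(counts, beta=0.999):
    """Per-class loss weights from the effective-number heuristic:
    E_n = (1 - beta**n) / (1 - beta). Classes with fewer samples get
    larger weights; weights are normalized to sum to len(counts)."""
    eff = [(1.0 - beta ** n) / (1.0 - beta) for n in counts]
    w = [1.0 / e for e in eff]
    total = sum(w)
    return [x * len(counts) / total for x in w]

# A long-tailed class histogram: head class has 100x the tail's samples.
weights = class_balanced_weights([1000, 100, 10])
```

These weights would then scale each sample's cross-entropy term according to its class, so rare classes contribute more to the gradient.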
Author:
Ge, Zhijin; Shang, Fanhua; Liu, Hongying; Liu, Yuanyuan; Wan, Liang; Feng, Wei; Wang, Xiaosen
Deep neural networks are vulnerable to adversarial examples crafted by applying human-imperceptible perturbations to clean inputs. Although many attack methods can achieve high success rates in the white-box setting, they also exhibit weak transferability …
External link:
http://arxiv.org/abs/2308.10601
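The white-box baseline these attacks build on is gradient-sign perturbation. A self-contained sketch of one Fast Gradient Sign Method (FGSM) step on a toy linear classifier (a standard baseline for illustration; the transfer-oriented attack in the paper is different):

```python
import math

def fgsm_perturb(x, grad, eps):
    """One FGSM step: nudge each input coordinate by eps in the sign
    of the loss gradient, which locally increases the loss."""
    sign = lambda g: 1.0 if g > 0 else -1.0 if g < 0 else 0.0
    return [xi + eps * sign(g) for xi, g in zip(x, grad)]

# Toy target: logistic loss of a linear classifier sum(w_i * x_i).
w = [1.0, -2.0, 0.5]
x = [0.2, 0.1, -0.3]
y = 1.0  # true label in {-1, +1}

def loss(x):
    margin = y * sum(wi * xi for wi, xi in zip(w, x))
    return math.log(1.0 + math.exp(-margin))

margin = y * sum(wi * xi for wi, xi in zip(w, x))
sig = 1.0 / (1.0 + math.exp(margin))  # sigmoid(-margin)
grad = [-y * sig * wi for wi in w]    # d loss / d x_i
x_adv = fgsm_perturb(x, grad, eps=0.1)
```

With these numbers the perturbed input is `[0.1, 0.2, -0.4]`, and the logistic loss at `x_adv` is strictly larger than at `x`, as the sign step guarantees for a linear model.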