Showing 1 - 10 of 38 for search: '"Diao, Boyu"'
Although the diffusion model has achieved remarkable performance in the field of image generation, its high inference latency hinders its wide application on edge devices with scarce computing resources. Therefore, many training-free sampling methods h…
External link:
http://arxiv.org/abs/2410.07679
Continual learning (CL) is designed to learn new tasks while preserving existing knowledge. Replaying samples from earlier tasks has proven to be an effective method for mitigating the forgetting of previously acquired knowledge. However, the current re…
External link:
http://arxiv.org/abs/2410.06645
Existing Incremental Object Detection (IOD) methods partially alleviate catastrophic forgetting when incrementally detecting new objects in real-world scenarios. However, many of these methods rely on the assumption that unlabeled old-class objects m…
External link:
http://arxiv.org/abs/2406.04829
Continual Learning methods are designed to learn new tasks without erasing previous knowledge. However, Continual Learning often requires massive computational power and storage capacity for satisfactory performance. In this paper, we propose a resou…
External link:
http://arxiv.org/abs/2309.16117
Author:
Yang, Chuanguang, An, Zhulin, Huang, Libo, Bi, Junyu, Yu, Xinqiang, Yang, Han, Diao, Boyu, Xu, Yongjun
Contrastive Language-Image Pre-training (CLIP) has become a promising language-supervised visual pre-training framework. This paper aims to distill small CLIP models supervised by a large teacher CLIP model. We propose several distillation strategies…
External link:
http://arxiv.org/abs/2307.12732
Class-Incremental Learning (CIL) aims to solve neural networks' catastrophic forgetting problem, which refers to the fact that once a network updates on a new task, its performance on previously learned tasks drops dramatically. Most successful…
External link:
http://arxiv.org/abs/2304.10103
Distributed training is an effective way to accelerate the training process of large-scale deep learning models. However, the parameter exchange and synchronization of distributed stochastic gradient descent introduce a large amount of communication…
External link:
http://arxiv.org/abs/2108.06004
The underwater acoustic channel is one of the most challenging communication channels. Due to periodic tidal and daily climatic variations, underwater noise fluctuates periodically, which results in periodic changes in acoustic channel qua…
External link:
http://arxiv.org/abs/2108.05057
Published in:
Journal of Parallel and Distributed Computing, March 2024, 185
Published in:
2022 IJCNN
Deep learning has achieved impressive results in many areas, but deployment on intelligent edge devices is still very slow. To solve this problem, we propose a novel compression and acceleration method based on data distribution characteristics f…
External link:
http://arxiv.org/abs/2006.12963