Showing 1 - 10 of 179 for query: '"Lin, Shaobo"'
Published in:
CVPR2023 Workshop on Generative Models for Computer Vision
Few-shot object detection (FSOD) aims to expand an object detector for novel categories given only a few instances for training. The few training samples restrict the performance of the FSOD model. Recent text-to-image generation models have shown promis…
External link:
http://arxiv.org/abs/2303.13221
Published in:
CVPR2023 Workshop on Learning with Limited Labelled Data
Few-shot object detection (FSOD) aims to expand an object detector for novel categories given only a few instances for training. However, detecting novel categories with only a few samples usually leads to the problem of misclassification. In FSOD, w…
External link:
http://arxiv.org/abs/2302.14452
The generalization power of the pre-trained model is the key for few-shot deep learning. Dropout is a regularization technique used in traditional deep learning methods. In this paper, we explore the power of dropout on few-shot learning and provide…
External link:
http://arxiv.org/abs/2301.11015
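The abstract above leaves the dropout mechanism implicit. As a rough illustration only (this sketch is ours, not code from the paper), standard inverted dropout zeroes each unit with probability p during training and rescales the survivors so the expected activation is unchanged:

```python
import numpy as np

def dropout(x, p=0.5, training=True, rng=None):
    """Inverted dropout: zero each unit with probability p and rescale
    survivors by 1/(1-p) so the expected activation is unchanged.
    At inference time the input passes through untouched."""
    if not training or p == 0.0:
        return x
    rng = np.random.default_rng() if rng is None else rng
    mask = rng.random(x.shape) >= p  # keep a unit with probability 1-p
    return x * mask / (1.0 - p)
```

How a few-shot pipeline would place or tune this dropout is exactly what the paper investigates; the snippet only fixes the baseline operation being discussed.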
Conventional training of deep neural networks usually requires a substantial amount of data with expensive human annotations. In this paper, we utilize the idea of meta-learning to explain two very different streams of few-shot learning, i.e., the ep…
External link:
http://arxiv.org/abs/2210.06409
Supervised learning frequently boils down to determining hidden and bright parameters in a parameterized hypothesis space based on finite input-output samples. The hidden parameters determine the attributions of hidden predictors or the nonlinear mec…
External link:
http://arxiv.org/abs/1803.08374
Published in:
Proceedings of the 36th International Conference on Machine Learning (ICML), 2019
Deep learning has aroused extensive attention due to its great empirical success. The efficiency of the block coordinate descent (BCD) methods has been recently demonstrated in deep neural network (DNN) training. However, theoretical studies on their…
External link:
http://arxiv.org/abs/1803.00225
In this paper, we aim at developing scalable neural network-type learning systems. Motivated by the idea of "constructive neural networks" in approximation theory, we focus on "constructing" rather than "training" feed-forward neural networks (FNNs)…
External link:
http://arxiv.org/abs/1605.00079
Orthogonal greedy learning (OGL) is a stepwise learning scheme that starts with selecting a new atom from a specified dictionary via the steepest gradient descent (SGD) and then builds the estimator through orthogonal projection. In this paper, we fi…
External link:
http://arxiv.org/abs/1604.05993
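The two-step loop described in that abstract, greedy atom selection followed by orthogonal projection, can be sketched as follows. This is our illustrative implementation of the generic scheme (essentially orthogonal matching pursuit), not the paper's code, and the dictionary and step count are placeholders:

```python
import numpy as np

def ogl(D, y, n_steps):
    """Orthogonal greedy learning sketch: at each step select the
    dictionary atom (column of D) most correlated with the current
    residual, then re-fit all selected atoms by least squares, i.e.
    orthogonal projection of y onto their span."""
    selected, coef = [], np.zeros(0)
    residual = y.copy()
    for _ in range(n_steps):
        k = int(np.argmax(np.abs(D.T @ residual)))  # greedy selection
        if k not in selected:
            selected.append(k)
        A = D[:, selected]
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # orthogonal projection
        residual = y - A @ coef
    return selected, coef
```

With an orthonormal dictionary, two steps exactly recover a two-atom target; the paper's contribution concerns the theoretical behavior of this scheme, which the snippet does not capture.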
The divide and conquer strategy, which breaks a massive data set into a series of manageable data blocks, and then combines the independent results of data blocks to obtain a final decision, has been recognized as a state-of-the-art method to overc…
External link:
http://arxiv.org/abs/1601.06239
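The split-fit-combine pattern that abstract describes can be made concrete with a minimal sketch (ours, not the paper's method; the least-squares learner and simple averaging combiner are illustrative assumptions):

```python
import numpy as np

def divide_and_conquer_ls(X, y, n_blocks):
    """Divide and conquer for least squares: split the data into blocks,
    fit an ordinary least-squares estimator on each block independently,
    then average the per-block coefficient vectors into a final decision."""
    coefs = []
    for Xb, yb in zip(np.array_split(X, n_blocks), np.array_split(y, n_blocks)):
        w, *_ = np.linalg.lstsq(Xb, yb, rcond=None)
        coefs.append(w)
    return np.mean(coefs, axis=0)  # combine independent block estimates
```

Because each block is fit independently, the per-block step parallelizes trivially, which is the scalability argument such methods rest on; how much accuracy the averaging step sacrifices is the kind of question the paper analyzes.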