Showing 1 - 10 of 25 for the search '"Subedar, Mahesh"'
Foundational vision transformer models have shown impressive few-shot performance on many vision tasks. This research presents a novel investigation into the application of parameter-efficient fine-tuning methods within an active learning (AL) framework ...
External link: http://arxiv.org/abs/2406.09296
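The snippet does not say which parameter-efficient fine-tuning method is used, so the following is only an illustrative sketch of one common PEFT building block (a LoRA-style low-rank adapter wrapped around a frozen linear layer); ranks, scaling, and names are assumptions, not the paper's choices.

    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        """Frozen pretrained linear layer plus a trainable low-rank update."""
        def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
            super().__init__()
            self.base = base
            for p in self.base.parameters():
                p.requires_grad_(False)            # pretrained weights stay frozen
            self.down = nn.Linear(base.in_features, rank, bias=False)
            self.up = nn.Linear(rank, base.out_features, bias=False)
            nn.init.zeros_(self.up.weight)         # adapter starts as a zero update
            self.scale = alpha / rank

        def forward(self, x):
            return self.base(x) + self.scale * self.up(self.down(x))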
This paper presents simple and efficient methods to mitigate sampling bias in active learning while achieving state-of-the-art accuracy and model robustness. We introduce supervised contrastive active learning by leveraging the contrastive loss for ...
External link: http://arxiv.org/abs/2109.06321
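The abstract names the contrastive loss as the key ingredient; as a minimal sketch, a supervised contrastive loss in the style of Khosla et al. (2020) could be written as below. The temperature and function names are illustrative assumptions, not the authors' code.

    import torch
    import torch.nn.functional as F

    def supervised_contrastive_loss(features, labels, temperature=0.1):
        """features: (N, D) embeddings; labels: (N,) integer class ids."""
        features = F.normalize(features, dim=1)
        sim = features @ features.t() / temperature            # pairwise similarities
        self_mask = torch.eye(len(labels), dtype=torch.bool, device=features.device)
        sim = sim.masked_fill(self_mask, float('-inf'))        # drop self-comparisons
        log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
        # Positives are other samples sharing the same label.
        pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
        pos_count = pos_mask.sum(dim=1).clamp(min=1)
        loss = -(log_prob * pos_mask).sum(dim=1) / pos_count
        return loss.mean()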
We introduce supervised contrastive active learning (SCAL) and propose efficient query strategies in active learning based on feature similarity (featuresim) and principal component analysis (PCA)-based feature-reconstruction error (fre) to select informative ...
External link: http://arxiv.org/abs/2109.06873
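A minimal sketch of the PCA feature-reconstruction-error (fre) query idea described above: fit PCA on features of already-labeled data, then pick the unlabeled samples whose features are reconstructed worst. Function names, the component count, and the budget are assumptions for illustration.

    import numpy as np
    from sklearn.decomposition import PCA

    def fre_query(labeled_feats, unlabeled_feats, n_components=64, budget=100):
        """Return indices of `budget` unlabeled samples with the largest
        PCA feature-reconstruction error."""
        k = min(n_components, labeled_feats.shape[0], labeled_feats.shape[1])
        pca = PCA(n_components=k).fit(labeled_feats)
        recon = pca.inverse_transform(pca.transform(unlabeled_feats))
        errors = np.linalg.norm(unlabeled_feats - recon, axis=1)
        return np.argsort(errors)[::-1][:budget]   # highest-error samples first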
In this paper, we propose an approach to improve image captioning for images with novel objects that do not have caption labels in the training dataset. We refer to our approach as Partially-Supervised Novel Object Captioning (PS-NOC). PS-NOC ...
External link: http://arxiv.org/abs/2109.05115
In this paper, we study the impact of motion blur, a common quality flaw in real-world images, on a state-of-the-art two-stage image captioning solution, and observe a degradation in performance as blur intensity increases. We investigate techniques ...
External link: http://arxiv.org/abs/2106.05437
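A sketch of how motion blur of varying intensity could be simulated for this kind of robustness study, assuming a standard horizontal motion-blur kernel; this is not necessarily the authors' exact protocol.

    import numpy as np
    from scipy.ndimage import convolve

    def motion_blur(image, kernel_size=9):
        """image: (H, W) or (H, W, C) float array; larger kernel_size = stronger blur."""
        kernel = np.zeros((kernel_size, kernel_size))
        kernel[kernel_size // 2, :] = 1.0 / kernel_size   # horizontal streak, sums to 1
        if image.ndim == 3:
            return np.stack([convolve(image[..., c], kernel)
                             for c in range(image.shape[-1])], axis=-1)
        return convolve(image, kernel)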
Bayesian deep neural networks (DNNs) can provide a mathematically grounded framework to quantify uncertainty in predictions from image captioning models. We propose a Bayesian variant of a policy-gradient-based reinforcement learning training technique ...
External link: http://arxiv.org/abs/2004.02435
Data poisoning attacks compromise the integrity of machine-learning models by introducing malicious training samples that influence the results at test time. In this work, we investigate backdoor data poisoning attacks on deep neural networks (DNNs) ...
External link: http://arxiv.org/abs/1912.01206
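A minimal sketch of the backdoor data-poisoning setup the abstract refers to: stamp a small trigger patch onto a fraction of training images and relabel them with an attacker-chosen target class. Trigger shape, poisoning rate, and target class are illustrative assumptions.

    import numpy as np

    def poison_dataset(images, labels, target_class=0, poison_rate=0.05, patch_value=1.0):
        """images: (N, H, W, C) floats in [0, 1]; labels: (N,) ints.
        Returns poisoned copies plus the indices that were altered."""
        images, labels = images.copy(), labels.copy()
        n_poison = int(len(images) * poison_rate)
        idx = np.random.choice(len(images), n_poison, replace=False)
        images[idx, -4:, -4:, :] = patch_value     # 4x4 trigger in the bottom-right corner
        labels[idx] = target_class                 # flip to the attacker's target class
        return images, labels, idx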
Stochastic variational inference for Bayesian deep neural networks (DNNs) requires specifying priors and approximate posterior distributions over neural network weights. Specifying meaningful weight priors is a challenging problem, particularly for ...
External link: http://arxiv.org/abs/1906.05323
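A sketch of the kind of setup the abstract describes: a mean-field Gaussian posterior over the weights of a linear layer, sampled with the reparameterization trick and regularized by the KL divergence to a fixed Gaussian weight prior. The prior scale and initialization are illustrative assumptions, not the paper's values.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class BayesianLinear(nn.Module):
        def __init__(self, in_features, out_features, prior_std=1.0):
            super().__init__()
            self.mu = nn.Parameter(torch.zeros(out_features, in_features))
            self.rho = nn.Parameter(torch.full((out_features, in_features), -5.0))
            self.prior_std = prior_std

        def forward(self, x):
            sigma = F.softplus(self.rho)                        # posterior std > 0
            weight = self.mu + sigma * torch.randn_like(sigma)  # reparameterized sample
            return x @ weight.t()

        def kl_divergence(self):
            # KL( N(mu, sigma^2) || N(0, prior_std^2) ), summed over all weights.
            sigma = F.softplus(self.rho)
            var_ratio = (sigma / self.prior_std) ** 2
            return 0.5 * (var_ratio + (self.mu / self.prior_std) ** 2
                          - 1.0 - torch.log(var_ratio)).sum()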
Deep neural networks (DNNs) provide state-of-the-art results for a multitude of applications, but approaches using DNNs for multimodal audiovisual applications do not consider the predictive uncertainty associated with individual modalities. Bayesian ...
External link: http://arxiv.org/abs/1811.10811
Uncertainty estimation in deep neural networks is essential for designing reliable and robust AI systems. Applications such as video surveillance for identifying suspicious activities are designed with deep neural networks (DNNs), but DNNs do not provide ...
External link: http://arxiv.org/abs/1811.03305
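A sketch of one standard way to obtain the predictive uncertainty that plain DNNs lack: Monte Carlo dropout at test time, scoring uncertainty as the entropy of the averaged predictive distribution. The sample count and the use of dropout (rather than the authors' Bayesian layers) are assumptions for illustration.

    import torch

    @torch.no_grad()
    def mc_dropout_uncertainty(model, x, n_samples=20):
        """Return (mean_probs, predictive_entropy) for a classifier with dropout layers."""
        model.train()    # keeps dropout active at inference (also affects batch norm)
        probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(n_samples)])
        mean_probs = probs.mean(dim=0)
        entropy = -(mean_probs * torch.log(mean_probs.clamp_min(1e-12))).sum(dim=-1)
        return mean_probs, entropy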