Showing 1 - 10 of 236 for search: "Balasubramanian, Vineeth"
Detecting and measuring confounding effects from data is a key challenge in causal inference. Existing methods frequently assume causal sufficiency, disregarding the presence of unobserved confounding variables. Causal sufficiency is both unrealistic…
External link: http://arxiv.org/abs/2409.17840
Authors: Ramachandran, Rahul; Kulkarni, Tejal; Sharma, Charchit; Vijaykeerthy, Deepak; Balasubramanian, Vineeth N
Evaluating models and datasets in computer vision remains a challenging task, with most leaderboards relying solely on accuracy. While accuracy is a popular metric for model evaluation, it provides only a coarse assessment by considering a single…
External link: http://arxiv.org/abs/2409.04041
Authors: Chudasama, Vishal; Sarkar, Hiran; Wasnik, Pankaj; Balasubramanian, Vineeth N; Kalla, Jayateja
Object detection is a critical field in computer vision focusing on accurately identifying and locating specific objects in images or videos. Traditional methods for object detection rely on large labeled training datasets for each object category…
External link: http://arxiv.org/abs/2408.14249
Authors: Vashishtha, Aniket; Kumar, Abhinav; Reddy, Abbavaram Gowtham; Balasubramanian, Vineeth N; Sharma, Amit
For text-based AI systems to interact in the real world, causal reasoning is an essential skill. Since interventional data is costly to generate, we study to what extent an agent can learn causal reasoning from passive data. Specifically, we consider…
External link: http://arxiv.org/abs/2407.07612
Authors: Kuchibhotla, Hari Chandana; Kancheti, Sai Srinivas; Reddy, Abbavaram Gowtham; Balasubramanian, Vineeth N
Going beyond mere fine-tuning of vision-language models (VLMs), learnable prompt tuning has emerged as a promising, resource-efficient alternative. Despite their potential, effectively learning prompts faces the following challenges: (i) training in…
External link: http://arxiv.org/abs/2405.07921
A vision-based drone-to-drone detection system is crucial for various applications like collision avoidance, countering hostile drones, and search-and-rescue operations. However, detecting drones presents unique challenges, including small object sizes…
External link: http://arxiv.org/abs/2404.19276
Deep learning methods have led to significant improvements in performance on the facial landmark detection (FLD) task. However, detecting landmarks in challenging settings, such as head pose changes, exaggerated expressions, or uneven illumination…
External link: http://arxiv.org/abs/2402.15044
This paper presents a novel concept learning framework for enhancing model interpretability and performance in visual classification tasks. Our approach appends an unsupervised explanation generator to the primary classifier network and makes use of…
External link: http://arxiv.org/abs/2401.04647
For machine learning models to be reliable and trustworthy, their decisions must be interpretable. As these models find increasing use in safety-critical applications, it is important that not just the model predictions but also their explanations…
External link: http://arxiv.org/abs/2312.10534
Authors: Dayal, Aveen; B., Vimal K.; Cenkeramaddi, Linga Reddy; Mohan, C. Krishna; Kumar, Abhinav; Balasubramanian, Vineeth N
Domain Generalization (DG) techniques have emerged as a popular approach to address the challenges of domain shift in Deep Learning (DL), with the goal of generalizing well to the target domain unseen during training. In recent years, numerous methods…
External link: http://arxiv.org/abs/2311.08503