Showing 1 - 10 of 180 for search: '"SHENOY, PRADEEP"'
Learned reweighting (LRW) approaches to supervised learning use an optimization criterion to assign weights to training instances in order to maximize performance on a representative validation dataset. We pose and formalize the problem of optimize…
External link:
http://arxiv.org/abs/2403.12236
Author:
Tiwari, Rishabh, Sivasubramanian, Durga, Mekala, Anmol, Ramakrishnan, Ganesh, Shenoy, Pradeep
Deep networks tend to learn spurious feature-label correlations in real-world supervised learning tasks. This vulnerability is aggravated in distillation, where a student model may have less representational capacity than the corresponding teacher…
External link:
http://arxiv.org/abs/2310.18590
Deep neural networks have consistently shown great performance in several real-world use cases like autonomous vehicles, satellite imaging, etc., effectively leveraging large corpora of labeled training data. However, learning unbiased models depends…
External link:
http://arxiv.org/abs/2305.10643
Author:
Tiwari, Rishabh, Shenoy, Pradeep
Simplicity bias is the concerning tendency of deep networks to over-depend on simple, weakly predictive features, to the exclusion of stronger, more complex features. This is exacerbated in real-world applications by limited training data and spurious…
External link:
http://arxiv.org/abs/2301.13293
Concept bottleneck models (CBMs) are interpretable neural networks that first predict labels for human-interpretable concepts relevant to the prediction task, and then predict the final label based on the concept label predictions. We extend CBMs to…
External link:
http://arxiv.org/abs/2212.07430
Predictive uncertainty, a model's self-awareness regarding its accuracy on an input, is key both for building robust models via training interventions and for test-time applications such as selective classification. We propose a novel instance-conditio…
External link:
http://arxiv.org/abs/2212.05987
Author:
Jain, Nishant, Shenoy, Pradeep
Slow concept drift is a ubiquitous yet under-studied problem in practical machine learning systems. In such settings, although recent data is more indicative of future data, naively prioritizing recent instances runs the risk of losing valuable info…
External link:
http://arxiv.org/abs/2212.05908
Recent work has shown that deep vision models tend to be overly dependent on low-level or "texture" features, leading to poor generalization. Various data augmentation strategies have been proposed to overcome this so-called texture bias in DNNs. We…
External link:
http://arxiv.org/abs/2211.07277
Author:
Pal, Soumyabrata, Varshney, Prateek, Jain, Prateek, Thakurta, Abhradeep Guha, Madan, Gagan, Aggarwal, Gaurav, Shenoy, Pradeep, Srivastava, Gaurav
Personalization of machine learning (ML) predictions for individual users/domains/enterprises is critical for practical recommendation systems. Standard personalization approaches involve learning a user/domain-specific embedding that is fed into a f…
External link:
http://arxiv.org/abs/2210.03505
Reliable outlier detection is critical for real-world deployment of deep learning models. Although extensively studied, likelihoods produced by deep generative models have been largely dismissed as being impractical for outlier detection. First, deep…
External link:
http://arxiv.org/abs/2208.13579