Showing 1 - 10
of 18
for search: '"Mangla, Puneet"'
Author:
Mangla, Puneet, Chandhok, Shivam, Aggarwal, Milan, Balasubramanian, Vineeth N, Krishnamurthy, Balaji
For models to generalize under unseen domains (a.k.a. domain generalization), it is crucial to learn feature representations that are domain-agnostic and capture the underlying semantics that make up an object category. Recent advances towards weakly …
External link:
http://arxiv.org/abs/2206.05912
Recent progress towards designing models that can generalize to unseen domains (i.e. domain generalization) or unseen classes (i.e. zero-shot learning) has sparked interest in building models that can tackle both domain shift and semantic shifts …
External link:
http://arxiv.org/abs/2107.07497
Author:
Mangla, Puneet, Kumari, Nupur, Singh, Mayank, Krishnamurthy, Balaji, Balasubramanian, Vineeth N
Recent advances in generative adversarial networks (GANs) have shown remarkable progress in generating high-quality images. However, this gain in performance depends on the availability of a large amount of training data. In limited data regimes, …
External link:
http://arxiv.org/abs/2012.04256
A very recent trend has emerged coupling the notion of interpretability with adversarial robustness, unlike earlier efforts, which focused solely on good interpretations or on robustness against adversaries. Works have shown that adversarially trained models …
External link:
http://arxiv.org/abs/2006.07828
The vicinal risk minimization (VRM) principle is an empirical risk minimization (ERM) variant that replaces Dirac masses with vicinal functions. There is strong numerical and theoretical evidence showing that VRM outperforms ERM in terms of generalization …
External link:
http://arxiv.org/abs/2003.06566
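The VRM principle described above can be illustrated with mixup, a well-known vicinal function that interpolates pairs of training examples. This is a minimal sketch of the general idea, not necessarily the exact scheme studied in the paper:

```python
import numpy as np

def mixup_batch(x, y, alpha=0.2, rng=None):
    """Vicinal (mixup) augmentation: replace each training pair with a
    convex combination of two pairs, i.e. sample from a vicinal
    distribution instead of the Dirac mass at each example (ERM)."""
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)           # mixing coefficient in [0, 1]
    perm = rng.permutation(len(x))         # random partner for each example
    x_mix = lam * x + (1 - lam) * x[perm]  # interpolated inputs
    y_mix = lam * y + (1 - lam) * y[perm]  # interpolated (soft) labels
    return x_mix, y_mix

x = np.eye(4)                  # toy inputs
y = np.eye(4)                  # one-hot labels
x_mix, y_mix = mixup_batch(x, y)
print(x_mix.shape, y_mix.shape)  # (4, 4) (4, 4)
```

Because each mixed label is a convex combination of one-hot vectors, every row of `y_mix` still sums to 1, so it can be used directly with a cross-entropy loss.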
Author:
Singh, Mayank, Kumari, Nupur, Mangla, Puneet, Sinha, Abhishek, Balasubramanian, Vineeth N, Krishnamurthy, Balaji
Interpretability is an emerging area of research in trustworthy machine learning. Safe deployment of a machine learning system mandates that the prediction and its explanation be reliable and robust. Recently, it has been shown that the explanations …
External link:
http://arxiv.org/abs/1911.13073
Adversarial examples are fabricated examples, indistinguishable from the original images, that mislead neural networks and drastically lower their performance. The recently proposed AdvGAN, a GAN-based approach, takes an input image as a prior for generating …
External link:
http://arxiv.org/abs/1908.00706
Author:
Mangla, Puneet, Singh, Mayank, Sinha, Abhishek, Kumari, Nupur, Balasubramanian, Vineeth N, Krishnamurthy, Balaji
Few-shot learning algorithms aim to learn model parameters capable of adapting to unseen classes with the help of only a few labeled examples. A recent regularization technique, Manifold Mixup, focuses on learning a general-purpose representation, …
External link:
http://arxiv.org/abs/1907.12087
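Manifold Mixup, mentioned in the abstract above, interpolates hidden-layer activations rather than raw inputs. The following is a hedged sketch of that idea on a toy two-layer network (the weight shapes and forward pass are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def manifold_mixup_forward(x, y, W1, W2, alpha=2.0, rng=None):
    """Sketch of Manifold Mixup: interpolate *hidden* activations (and
    labels) of random example pairs, regularizing the feature space
    instead of the input space."""
    rng = rng or np.random.default_rng(0)
    h = relu(x @ W1)                       # hidden-layer activations
    lam = rng.beta(alpha, alpha)           # mixing coefficient
    perm = rng.permutation(len(x))         # random partner per example
    h_mix = lam * h + (1 - lam) * h[perm]  # mix in feature space
    y_mix = lam * y + (1 - lam) * y[perm]  # soft targets to match
    logits = h_mix @ W2
    return logits, y_mix

rng = np.random.default_rng(1)
x = rng.normal(size=(8, 16))               # toy batch of 8 examples
y = np.eye(4)[rng.integers(0, 4, size=8)]  # one-hot labels, 4 classes
W1 = rng.normal(size=(16, 32)) * 0.1
W2 = rng.normal(size=(32, 4)) * 0.1
logits, y_mix = manifold_mixup_forward(x, y, W1, W2)
print(logits.shape, y_mix.shape)  # (8, 4) (8, 4)
```

Training then minimizes a cross-entropy-style loss between `logits` and the mixed targets `y_mix`, which is the sense in which the mixing acts as a regularizer.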
Author:
Mangla, Puneet, Kumari, Nupur, Singh, Mayank, Krishnamurthy, Balaji, Balasubramanian, Vineeth N
Published in:
2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV).
Recent advances in generative adversarial networks (GANs) have shown remarkable progress in generating high-quality images. However, this gain in performance depends on the availability of a large amount of training data. In limited data regimes, …
Published in:
Pattern Recognition Letters, vol. 152, pp. 382-390, December 2021