Showing 1 - 10 of 87 results for the search: '"Nayak, Gaurav"'
Worldwide Geo-localization aims to pinpoint the precise location of images taken anywhere on Earth. The task is considerably challenging due to the immense variation in geographic landscapes. Image-to-image retrieval-based approaches fail to solve…
External link: http://arxiv.org/abs/2309.16020
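For context, the retrieval baseline the abstract contrasts with works roughly as follows: embed the query photo, find its nearest neighbor in a GPS-tagged gallery, and return that neighbor's coordinates. A minimal sketch, with random features standing in for a real image embedder (an illustration, not the paper's method):

```python
# Image-to-image retrieval for geo-localization: nearest gallery neighbor
# in embedding space determines the predicted GPS location.
import torch
import torch.nn.functional as F

def retrieve_location(query_feat, gallery_feats, gallery_gps):
    """query_feat: (d,), gallery_feats: (n, d), gallery_gps: (n, 2) lat/lon."""
    sims = F.cosine_similarity(query_feat.unsqueeze(0), gallery_feats)  # (n,)
    return gallery_gps[sims.argmax()]

# Toy usage: random vectors stand in for CNN/ViT image embeddings.
gallery_feats = F.normalize(torch.randn(1000, 512), dim=1)
gallery_gps = (torch.rand(1000, 2) * torch.tensor([180.0, 360.0])
               - torch.tensor([90.0, 180.0]))
query_feat = F.normalize(torch.randn(512), dim=0)
print(retrieve_location(query_feat, gallery_feats, gallery_gps))
```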
With the increasing deployment of deep neural networks in safety-critical applications such as self-driving cars, medical imaging, and anomaly detection, adversarial robustness has become a crucial concern for the reliability of these networks…
External link: http://arxiv.org/abs/2309.05132
Federated learning is a promising direction for tackling the privacy issues related to sharing patients' sensitive data. Often, federated systems in the medical image analysis domain assume that the participating local clients are honest…
External link: http://arxiv.org/abs/2308.07387
Federated Learning (FL) is a machine learning paradigm that enables clients to jointly train a global model by aggregating the locally trained models without sharing any local training data. In practice, there can often be substantial heterogeneity…
External link: http://arxiv.org/abs/2305.19600
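To make the aggregation step concrete, here is a minimal FedAvg-style sketch (weighted averaging of client weights, a common FL baseline, not necessarily this paper's method): the server combines client state_dicts in proportion to local dataset sizes, without ever seeing the local data.

```python
# FedAvg-style server aggregation over PyTorch state_dicts.
import torch

def fedavg(client_states, client_sizes):
    """Weighted average of client weights by local dataset size."""
    total = sum(client_sizes)
    return {
        key: sum(state[key] * (n / total)
                 for state, n in zip(client_states, client_sizes))
        for key in client_states[0]
    }

# Toy usage with two clients holding identical architectures.
clients = [torch.nn.Linear(4, 2).state_dict() for _ in range(2)]
global_state = fedavg(clients, client_sizes=[100, 300])
```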
Black-box adversarial attacks present a realistic threat to action recognition systems. Existing black-box attacks follow either a query-based approach, where an attack is optimized by querying the target model, or a transfer-based approach, where…
External link: http://arxiv.org/abs/2211.13171
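As an illustration of the query-based family (not this paper's attack), here is a SimBA-style random-search sketch: perturb one random coordinate at a time and keep the step only if the queried probability of the true class drops. `model` is a stand-in callable returning softmax probabilities.

```python
# Query-based black-box attack: no gradients, only probability queries.
import torch

def simba_attack(model, x, label, eps=0.2, max_queries=1000):
    """x: a single input tensor; model returns softmax probabilities."""
    x_adv = x.clone()
    p_best = model(x_adv.unsqueeze(0))[0, label]
    perm = torch.randperm(x_adv.numel())
    for q in range(min(max_queries, x_adv.numel())):
        delta = torch.zeros(x_adv.numel())
        delta[perm[q]] = eps
        delta = delta.view_as(x_adv)
        for sign in (1.0, -1.0):  # try the coordinate in both directions
            p = model((x_adv + sign * delta).unsqueeze(0))[0, label]
            if p < p_best:  # keep the step only if true-class prob drops
                x_adv = x_adv + sign * delta
                p_best = p
                break
    return x_adv
```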
The high cost of acquiring and annotating samples has made the 'few-shot' learning problem of prime importance. Existing works mainly focus on improving performance on clean data and overlook robustness concerns on data perturbed with adversarial…
External link: http://arxiv.org/abs/2211.01598
Companies often safeguard their trained deep models (i.e., architecture details, learnt weights, training details, etc.) from third-party users by exposing them only as black boxes through APIs. Moreover, they may not even provide access to…
External link: http://arxiv.org/abs/2211.01579
Certified defense using randomized smoothing is a popular technique for providing robustness guarantees for deep neural networks against l2 adversarial attacks. Existing works use this technique to provably secure a pretrained non-robust model by training…
External link: http://arxiv.org/abs/2210.08929
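For background, the core of randomized smoothing (Cohen et al., 2019) is a majority vote of the base classifier over Gaussian-noised copies of the input; the certified l2 radius then follows from the noise level sigma and the top-class vote probability. A minimal sketch of the voting step only, with `base_classifier` as a placeholder:

```python
# Prediction step of a randomized-smoothing classifier.
import torch

@torch.no_grad()
def smoothed_predict(base_classifier, x, sigma=0.25, n_samples=100):
    """Majority vote over Gaussian-noised copies of a single input x."""
    noisy = x.unsqueeze(0) + sigma * torch.randn(n_samples, *x.shape)
    preds = base_classifier(noisy).argmax(dim=1)   # class per noisy copy
    return torch.bincount(preds).argmax().item()   # most frequent class
```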
An adversarial attack perturbs an image with imperceptible noise, leading to an incorrect model prediction. Recently, a few works have shown an inherent bias associated with such attacks (robustness bias), where certain subgroups in a dataset (e.g., based on…
External link: http://arxiv.org/abs/2205.02604
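One simple way to surface robustness bias (an illustrative evaluation, not the paper's protocol) is to measure robust accuracy per class under a fixed attack and compare the spread across classes; `attack` is any perturbation routine, e.g. a PGD implementation:

```python
# Per-class robust accuracy under a fixed adversarial attack.
import torch
from collections import defaultdict

def per_class_robust_accuracy(model, attack, loader):
    """Returns {class_label: robust accuracy} over a labeled data loader."""
    correct, total = defaultdict(int), defaultdict(int)
    for x, y in loader:
        x_adv = attack(model, x, y)              # adversarial examples
        with torch.no_grad():
            pred = model(x_adv).argmax(dim=1)
        for cls, ok in zip(y.tolist(), (pred == y).tolist()):
            correct[cls] += int(ok)
            total[cls] += 1
    return {c: correct[c] / total[c] for c in sorted(total)}
```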
Deep models are highly susceptible to adversarial attacks. Such attacks are carefully crafted imperceptible noises that can fool the network and can cause severe consequences when deployed. To counter them, the model requires training data for adversarial…
External link: http://arxiv.org/abs/2204.01568
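To make the data dependence concrete: a standard FGSM adversarial-training step (a common baseline, not this paper's data-free approach) needs gradients on actual training samples (x, y), which is precisely what is unavailable in the data-free setting.

```python
# One FGSM adversarial-training step; requires real labeled samples.
import torch
import torch.nn.functional as F

def adv_train_step(model, optimizer, x, y, eps=8 / 255):
    """Craft FGSM examples from (x, y), then train the model on them."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    x_adv = (x + eps * grad.sign()).clamp(0, 1).detach()  # FGSM example
    optimizer.zero_grad()
    adv_loss = F.cross_entropy(model(x_adv), y)
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()
```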