Showing 1 - 10 of 1,538 results for search: '"Srikanth, V"'
Author:
Chang, Xiangyu, Ahmed, Sk Miraj, Krishnamurthy, Srikanth V., Guler, Basak, Swami, Ananthram, Oymak, Samet, Roy-Chowdhury, Amit K.
The key premise of federated learning (FL) is to train ML models across a diverse set of data-owners (clients) without exchanging local data. An overarching challenge to date is client heterogeneity, which may arise not only from variations in …
External link:
http://arxiv.org/abs/2402.08769
Author:
Chang, Xiangyu, Ahmed, Sk Miraj, Krishnamurthy, Srikanth V., Guler, Basak, Swami, Ananthram, Oymak, Samet, Roy-Chowdhury, Amit K.
Parameter-efficient tuning (PET) methods such as LoRA, Adapter, and Visual Prompt Tuning (VPT) have found success in enabling adaptation to new domains by tuning small modules within a transformer model. However, the number of domains encountered during …
External link:
http://arxiv.org/abs/2401.04130
Published in:
IIMT Journal of Management, 2024, Vol. 1, Issue 2, pp. 286-300.
Author:
Aich, Abhishek, Ta, Calvin-Khang, Gupta, Akash, Song, Chengyu, Krishnamurthy, Srikanth V., Asif, M. Salman, Roy-Chowdhury, Amit K.
The majority of methods for crafting adversarial attacks have focused on scenes with a single dominant object (e.g., images from ImageNet). On the other hand, natural scenes include multiple dominant objects that are semantically related. Thus, it is …
External link:
http://arxiv.org/abs/2209.09502
Author:
Aich, Abhishek, Li, Shasha, Song, Chengyu, Asif, M. Salman, Krishnamurthy, Srikanth V., Roy-Chowdhury, Amit K.
State-of-the-art generative model-based attacks against image classifiers overwhelmingly focus on single-object (i.e., single dominant object) images. Different from such settings, we tackle a more practical problem of generating adversarial perturbations …
External link:
http://arxiv.org/abs/2209.09883
Author:
Cai, Zikui, Rane, Shantanu, Brito, Alejandro E., Song, Chengyu, Krishnamurthy, Srikanth V., Roy-Chowdhury, Amit K., Asif, M. Salman
Adversarial attacks perturb images such that a deep neural network produces incorrect classification results. A promising approach to defend against adversarial attacks on natural multi-object scenes is to impose a context-consistency check, wherein …
External link:
http://arxiv.org/abs/2203.15230
Author:
Cai, Zikui, Xie, Xinxin, Li, Shasha, Yin, Mingjun, Song, Chengyu, Krishnamurthy, Srikanth V., Roy-Chowdhury, Amit K., Asif, M. Salman
Black-box transfer attacks for image classifiers have been extensively studied in recent years. In contrast, little progress has been made on transfer attacks for object detectors. Object detectors take a holistic view of the image, and the detection of …
External link:
http://arxiv.org/abs/2112.03223
Author:
Srikanth, V. V. V. S. S. P. S. (srikanth.vvvs@gmail.com), Ramesh, S. (ramesh.sirisetti@gmail.com), Ratnamani, M. V. (vvratnamani@gmail.com), Shum, K. P.
Published in:
Southeast Asian Bulletin of Mathematics. 2024, Vol. 48 Issue 4, p569-578. 10p.
Author:
Yin, Mingjun, Li, Shasha, Song, Chengyu, Asif, M. Salman, Roy-Chowdhury, Amit K., Krishnamurthy, Srikanth V.
Deep Neural Networks (DNNs) have been shown to be vulnerable to adversarial examples: slightly perturbed input images that cause DNNs to make incorrect predictions. To protect against such examples, various defense strategies have been proposed. …
External link:
http://arxiv.org/abs/2110.12321
Author:
Li, Shasha, Aich, Abhishek, Zhu, Shitong, Asif, M. Salman, Song, Chengyu, Roy-Chowdhury, Amit K., Krishnamurthy, Srikanth V.
Compared to image classification models, black-box adversarial attacks against video classification models have been largely understudied. This is likely because, with video, the temporal dimension poses significant additional challenges …
External link:
http://arxiv.org/abs/2110.01823