Showing 1 - 10 of 3,956
for search: '"P. Asokan"'
Machine learning (ML) defenses protect against various risks to security, privacy, and fairness. Real-life models need simultaneous protection against multiple different risks, which necessitates combining multiple defenses. But combining defenses …
External link:
http://arxiv.org/abs/2411.09776
One of the most common defense strategies against model poisoning in federated learning is to employ a robust aggregator mechanism that makes the training more resilient. Many of the existing Byzantine-robust aggregators provide theoretical guarantees …
External link:
http://arxiv.org/abs/2411.03861
Author:
Paliwal, Bhawna, Saini, Deepak, Dhawan, Mudit, Asokan, Siddarth, Natarajan, Nagarajan, Aggarwal, Surbhi, Malhotra, Pankaj, Jiao, Jian, Varma, Manik
Ranking a set of items based on their relevance to a given query is a core problem in search and recommendation. Transformer-based ranking models are the state-of-the-art approaches for such tasks, but they score each query-item pair independently, ignoring …
External link:
http://arxiv.org/abs/2409.09795
Author:
Benjamin, Joseph Geo, Asokan, Mothilal, Alhosani, Amna, Alasmawi, Hussain, Diehl, Werner Gerhard, Bricker, Leanne, Nandakumar, Karthik, Yaqub, Mohammad
Self-supervised learning (SSL) methods are popular since they can address situations with limited annotated data by directly utilising the underlying data distribution. However, the adoption of such methods is not explored enough in ultrasound (US) …
External link:
http://arxiv.org/abs/2407.21738
A Federated Learning-Friendly Approach for Parameter-Efficient Fine-Tuning of SAM in 3D Segmentation
Adapting foundation models for medical image analysis requires fine-tuning them on a considerable amount of data because of extreme distribution shifts between natural (source) data used for pretraining and medical (target) data. However, collecting …
External link:
http://arxiv.org/abs/2407.21739
Regulations increasingly call for various assurances from machine learning (ML) model providers about their training data, training process, and the behavior of resulting models during inference. For better transparency, companies (e.g., Huggingface …)
External link:
http://arxiv.org/abs/2406.17548
Outsourced computation presents a risk to the confidentiality of clients' sensitive data since they have to trust that the service providers will not mishandle this data. Blinded Memory (BliMe) is a set of hardware extensions that addresses this problem …
External link:
http://arxiv.org/abs/2406.15302
Author:
ElAtali, Hossam, Asokan, N.
Speculation is fundamental to achieving high CPU performance, yet it enables vulnerabilities such as Spectre attacks, which remain a significant challenge to mitigate without incurring substantial performance overheads. These attacks typically unfold …
External link:
http://arxiv.org/abs/2406.12110
Diffusion-based text-to-image models are trained on large datasets scraped from the Internet, potentially containing unacceptable concepts (e.g., copyright-infringing or unsafe). We need concept removal techniques (CRTs) which are effective in preventing …
External link:
http://arxiv.org/abs/2404.19227
Author:
Rangwani, Harsh, Mondal, Pradipto, Mishra, Mayank, Asokan, Ashish Ramayee, Babu, R. Venkatesh
Vision Transformer (ViT) has emerged as a prominent architecture for various computer vision tasks. In ViT, we divide the input image into patch tokens and process them through a stack of self-attention blocks. However, unlike Convolutional Neural Networks …
External link:
http://arxiv.org/abs/2404.02900