Showing 1 - 10 of 219 for search: '"Aggarwal, Abhinav"'
Author:
Aggarwal, Abhinav, Jang, Sun-Joo, Vardhan, Swarnima, Webber, Fabricio Malaguez, Alam, Md Mashiul, Vardhan, Madhurima, Lancaster, Gilead I., Ahmad, Yousif, Vora, Amit N., Zarich, Stuart W., Inglessis-Azuaje, Ignacio, Elmariah, Sammy, Forrest, John K., Davila, Carlos D.
Published in:
In Structural Heart November 2024 8(6)
Author:
Aggarwal, Abhinav, Kasiviswanathan, Shiva Prasad, Xu, Zekun, Feyisetan, Oluwaseyi, Teissier, Nathanael
Machine learning classifiers rely on loss functions for performance evaluation, often on a private (hidden) dataset. In a recent line of research, label inference was introduced as the problem of reconstructing the ground truth labels of this private dataset…
External link:
http://arxiv.org/abs/2107.03022
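The abstract above describes recovering hidden labels from loss scores. As a minimal sketch of why such scores leak information (the single binary example and the chosen probability are my assumptions, not the paper's construction): for one example, the log-loss takes exactly one of two values depending on the hidden label, so any prediction other than 0.5 makes the label readable from the score.

    import math

    def log_loss_single(y: int, p: float) -> float:
        """Cross-entropy of predicting probability p for true binary label y."""
        return -math.log(p) if y == 1 else -math.log(1.0 - p)

    def infer_label(observed_loss: float, p: float) -> int:
        """Recover the hidden label from a reported loss (assumes p != 0.5)."""
        return 1 if math.isclose(observed_loss, -math.log(p)) else 0

    # Hypothetical scoring service: reports the loss of our prediction
    # against a label we never see.
    hidden_label = 1
    p = 0.9
    score = log_loss_single(hidden_label, p)
    assert infer_label(score, p) == hidden_label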
Published in:
In The American Journal of the Medical Sciences April 2024 367(4):235-242
Author:
Aggarwal, Abhinav, Kasiviswanathan, Shiva Prasad, Xu, Zekun, Feyisetan, Oluwaseyi, Teissier, Nathanael
The log-loss (also known as cross-entropy loss) metric is ubiquitously used across machine learning applications to assess the performance of classification algorithms. In this paper, we investigate the problem of inferring the labels of a dataset from log-loss scores…
External link:
http://arxiv.org/abs/2105.08266
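Complementing the single-example sketch above, here is a hedged brute-force illustration of inferring an entire label vector from one aggregate log-loss score (the toy data and exhaustive search are my assumptions; the paper's constructions are far more efficient): for generic predictions, distinct label vectors yield distinct scores, so matching the observed score recovers the labels.

    import itertools
    import math

    def log_loss(labels, probs):
        """Average binary cross-entropy of predictions `probs` against `labels`."""
        return -sum(math.log(p) if y == 1 else math.log(1.0 - p)
                    for y, p in zip(labels, probs)) / len(labels)

    def infer_labels(observed, probs):
        """Brute-force: find the label vector whose loss matches the observed score."""
        for candidate in itertools.product((0, 1), repeat=len(probs)):
            if math.isclose(log_loss(candidate, probs), observed):
                return candidate
        return None

    hidden = (1, 0, 1, 1, 0)                # private labels
    probs = (0.81, 0.37, 0.62, 0.93, 0.24)  # attacker-chosen predictions
    score = log_loss(hidden, probs)         # the only value the attacker sees
    assert infer_labels(score, probs) == hidden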
Differentially private mechanisms for text generation typically add carefully calibrated noise to input words and use the nearest neighbor to the noised input as the output word. When the noise is small in magnitude, these mechanisms are susceptible…
External link:
http://arxiv.org/abs/2104.11838
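A minimal sketch of the mechanism shape this abstract describes, where the toy vocabulary, two-dimensional embeddings, and per-coordinate Laplace noise are all my simplifications: noise is added to the input word's embedding, and the vocabulary word nearest to the noised vector is emitted. With small noise, the nearest neighbor is almost always the input word itself, which is the susceptibility the abstract points to.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical toy embedding table; real mechanisms use learned embeddings.
    vocab = {"cat": np.array([1.0, 0.0]), "dog": np.array([0.9, 0.2]),
             "car": np.array([-1.0, 0.5]), "bus": np.array([-0.8, 0.6])}

    def noisy_replace(word: str, scale: float) -> str:
        """Perturb the word's embedding and return the nearest vocabulary word."""
        noised = vocab[word] + rng.laplace(0.0, scale, size=2)
        return min(vocab, key=lambda w: np.linalg.norm(vocab[w] - noised))

    # Small noise rarely changes the output word; larger noise does.
    print([noisy_replace("cat", 0.05) for _ in range(5)])
    print([noisy_replace("cat", 1.0) for _ in range(5)])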
Accurately learning from user data while ensuring quantifiable privacy guarantees provides an opportunity to build better machine learning (ML) models while maintaining user trust. Recent literature has demonstrated the applicability of a generalized…
External link:
http://arxiv.org/abs/2012.05403
Author:
Aggarwal, Abhinav (abhinav.aggarwal@yale.edu), Stolear, Anton (anton.stolear@yale.edu), Alam, Md Mashiul (mdmashiul.alam@bpthosp.org), Vardhan, Swarnima (swarnima.vardhan@bpthosp.org), Dulgher, Maxim (maxim.dulgher@nuvancehealth.org), Jang, Sun-Joo (sun-joo.jang@yale.edu), Zarich, Stuart W. (dr.stuart.zarich@bpthosp.org)
Published in:
Journal of Clinical Medicine. Mar 2024, Vol. 13 Issue 6, p1781. 26p.
Balancing the privacy-utility tradeoff is a crucial requirement of many practical machine learning systems that deal with sensitive customer data. A popular approach for privacy-preserving text analysis is noise injection, in which text data is first…
External link:
http://arxiv.org/abs/2010.11947
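The abstract cuts off mid-sentence; purely as an illustration of noise injection for text embeddings (not necessarily this paper's method), the sketch below shapes isotropic Gaussian noise by a regularized covariance lam*Sigma + (1-lam)*I before adding it to a word vector, so that noise follows the geometry of the embedding space. The embedding matrix, noise distribution, and parameter values are my stand-ins, not the paper's calibration.

    import numpy as np

    rng = np.random.default_rng(1)

    def shaped_noise(cov: np.ndarray, lam: float, scale: float) -> np.ndarray:
        """Sample noise shaped by the regularized covariance lam*cov + (1-lam)*I."""
        d = cov.shape[0]
        reg = lam * cov + (1.0 - lam) * np.eye(d)
        z = rng.normal(0.0, scale, size=d)   # isotropic Gaussian (illustrative)
        return np.linalg.cholesky(reg) @ z

    # Hypothetical embedding matrix; its covariance drives the noise shape.
    E = rng.normal(size=(100, 4))
    cov = np.cov(E, rowvar=False)
    embedding = E[0]
    private_embedding = embedding + shaped_noise(cov, lam=0.5, scale=0.3)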
Deep Neural Networks, despite their great success in diverse domains, are provably sensitive to small perturbations of correctly classified examples, which lead to erroneous predictions. Recently, it was proposed that this behavior can be combated by…
External link:
http://arxiv.org/abs/2009.12718
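The abstract truncates before naming the defense; as background for the sensitivity claim itself, here is the classic fast-gradient-sign construction of such a small perturbation on a toy logistic model (the weights and numbers are my invention, and this is not the paper's method): moving the input a small step along the sign of the loss gradient flips a correct prediction.

    import numpy as np

    # Toy "network": a logistic model with fixed weights (hypothetical values).
    w, b = np.array([2.0, -3.0, 1.0]), 0.1

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def fgsm(x: np.ndarray, y: int, eps: float) -> np.ndarray:
        """Fast-gradient-sign step: move x along the sign of the loss gradient."""
        p = sigmoid(w @ x + b)
        grad_x = (p - y) * w      # d(cross-entropy)/dx for this model
        return x + eps * np.sign(grad_x)

    x = np.array([0.2, -0.1, 0.4])
    print(sigmoid(w @ x + b))      # approx. 0.77: correctly classified as 1
    x_adv = fgsm(x, y=1, eps=0.3)
    print(sigmoid(w @ x_adv + b))  # approx. 0.35: the prediction flips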
Membership Inference Attacks exploit the vulnerabilities of exposing models trained on customer data to queries by an adversary. In a recently proposed implementation of an auditing tool for measuring privacy leakage from sensitive datasets, more refined aggregates…
External link:
http://arxiv.org/abs/2009.08559
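As a hedged sketch of the membership-inference setting this abstract describes (the toy data, model, and threshold rule are mine, not the paper's audit tool): an adversary who can observe per-example losses guesses "member" when the loss falls below a threshold, exploiting the fact that models tend to fit their training points better than fresh ones.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(2)

    # Members: training data. Non-members: fresh draws from the same distribution.
    X_mem = rng.normal(size=(200, 5)); y_mem = (X_mem[:, 0] > 0).astype(int)
    X_out = rng.normal(size=(200, 5)); y_out = (X_out[:, 0] > 0).astype(int)

    model = LogisticRegression().fit(X_mem, y_mem)

    def per_example_loss(X, y):
        """Cross-entropy of the model's predicted probability for the true label."""
        p = model.predict_proba(X)[np.arange(len(y)), y]
        return -np.log(np.clip(p, 1e-12, 1.0))

    # Loss-threshold attack: guess "member" when the loss is below tau.
    tau = np.median(per_example_loss(X_out, y_out))
    guess_mem = per_example_loss(X_mem, y_mem) < tau
    print("fraction of members flagged:", guess_mem.mean())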