Showing 1 - 9 of 9 for search: '"Yadav, Chhavi"'
Machine unlearning is a key requirement of many data protection regulations such as GDPR. Prior work on unlearning has mostly considered superficial unlearning tasks where a single or a few related pieces of information are required to be removed. …
External link:
http://arxiv.org/abs/2410.15153
Influence Functions are a standard tool for attributing predictions to training data in a principled manner and are widely used in applications such as data valuation and fairness. In this work, we present realistic incentives to manipulate influence …
External link:
http://arxiv.org/abs/2409.05208
Machine learning models are increasingly used in societal applications, yet legal and privacy concerns demand that they very often be kept confidential. Consequently, there is growing distrust about the fairness properties of these models in the …
External link:
http://arxiv.org/abs/2402.12572
Bias auditing of language models (LMs) has received considerable attention as LMs are becoming widespread. As such, several benchmarks for bias auditing have been proposed. At the same time, the rapid evolution of LMs can make these benchmarks irrelevant …
External link:
http://arxiv.org/abs/2305.12620
Responsible use of machine learning requires models to be audited for undesirable properties. While a body of work has proposed using explanations for auditing, how to do so and why has remained relatively ill-understood. This work formalizes the role …
External link:
http://arxiv.org/abs/2206.04740
Author:
Yadav, Chhavi; Chaudhuri, Kamalika
Adoption of DL models in critical areas has led to an escalating demand for sound explanation methods. Instance-based explanation methods are a popular type that return selective instances from the training set to explain the predictions for a test …
External link:
http://arxiv.org/abs/2109.06999
Published in:
Proceedings of Machine Learning Research, 2019
Early detection is a crucial goal in the study of Alzheimer's Disease (AD). In this work, we describe several techniques to boost the performance of 3D deep convolutional neural networks (CNNs) trained to detect AD using structural brain MRI scans. …
External link:
http://arxiv.org/abs/1911.03740
Author:
Yadav, Chhavi; Bottou, Léon
Although the popular MNIST dataset [LeCun et al., 1994] is derived from the NIST database [Grother and Hanaoka, 1995], the precise processing steps for this derivation have been lost to time. We propose a reconstruction that is accurate enough to serve …
External link:
http://arxiv.org/abs/1905.10498
Auditing unwanted social bias in language models (LMs) is inherently hard due to the multidisciplinary nature of the work. In addition, the rapid evolution of LMs can make benchmarks irrelevant in no time. Bias auditing is further complicated by LM …
External links:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::92b00aababe161cfddb5cfb2c7022dc7
http://arxiv.org/abs/2305.12620