Showing 1 - 10 of 20
for search: '"Thudi, Anvith"'
Author:
Thudi, Anvith, Maddison, Chris J.
Training on mixtures of data distributions is now common in many modern machine learning pipelines, useful for performing well on several downstream tasks. Group distributionally robust optimization (group DRO) is one popular way to learn mixture weights…
External link:
http://arxiv.org/abs/2406.01477
Machine unlearning is a desirable operation as models get increasingly deployed on data with unknown provenance. However, achieving exact unlearning -- obtaining a model that matches the model distribution when the data to be forgotten was never used…
External link:
http://arxiv.org/abs/2402.00751
Spectral sparsification for directed Eulerian graphs is a key component in the design of fast algorithms for solving directed Laplacian linear systems. Directed Laplacian linear system solvers are crucial algorithmic primitives to fast computation of…
External link:
http://arxiv.org/abs/2311.06232
Differentially private stochastic gradient descent (DP-SGD) is the canonical approach to private deep learning. While the current privacy analysis of DP-SGD is known to be tight in some settings, several empirical results suggest that models trained…
External link:
http://arxiv.org/abs/2307.00310
Author:
Rabanser, Stephan, Thudi, Anvith, Thakurta, Abhradeep, Dvijotham, Krishnamurthy, Papernot, Nicolas
Training reliable deep learning models which avoid making overconfident but incorrect predictions is a longstanding challenge. This challenge is further exacerbated when learning has to be differentially private: protection provided to sensitive data…
External link:
http://arxiv.org/abs/2305.18393
Author:
Fang, Congyu, Jia, Hengrui, Thudi, Anvith, Yaghini, Mohammad, Choquette-Choo, Christopher A., Dullerud, Natalie, Chandrasekaran, Varun, Papernot, Nicolas
Proof-of-Learning (PoL) proposes that a model owner logs training checkpoints to establish a proof of having expended the computation necessary for training. The authors of PoL forego cryptographic approaches and trade rigorous security guarantees for…
External link:
http://arxiv.org/abs/2208.03567
Selective classification is the task of rejecting inputs a model would predict incorrectly on, through a trade-off between input space coverage and model accuracy. Current methods for selective classification impose constraints on either the model architecture…
External link:
http://arxiv.org/abs/2205.13532
Differential Privacy (DP) is the de facto standard for reasoning about the privacy guarantees of a training algorithm. Despite the empirical observation that DP reduces the vulnerability of models to existing membership inference (MI) attacks, a theoretical…
External link:
http://arxiv.org/abs/2202.12232
Machine unlearning, i.e. having a model forget about some of its training data, has become increasingly important as privacy legislation promotes variants of the right-to-be-forgotten. In the context of deep learning, approaches for machine unlearning…
External link:
http://arxiv.org/abs/2110.11891
Machine unlearning is the process through which a deployed machine learning model is made to forget about some of its training data points. While naively retraining the model from scratch is an option, it is almost always associated with large computational…
External link:
http://arxiv.org/abs/2109.13398