Showing 1 - 10 of 29
for search: '"Pydi, Muni Sreenivas"'
Recent research has explored the memorization capacity of multi-head attention, but these findings are constrained by unrealistic limitations on the context size. We present a novel proof for language-based Transformers that extends the current hypot…
External link:
http://arxiv.org/abs/2411.10115
Performative learning addresses the increasingly pervasive situations in which algorithmic decisions may induce changes in the data distribution as a consequence of their public deployment. We propose a novel view in which these performative effects…
External link:
http://arxiv.org/abs/2411.02023
Author:
Sebag, Ilana, Pydi, Muni Sreenivas, Franceschi, Jean-Yves, Rakotomamonjy, Alain, Gartrell, Mike, Atif, Jamal, Allauzen, Alexandre
Safeguarding privacy in sensitive training data is paramount, particularly in the context of generative modeling. This can be achieved through either differentially private stochastic gradient descent or a differentially private metric for training m…
External link:
http://arxiv.org/abs/2312.08227
Rejection sampling methods have recently been proposed to improve the performance of discriminator-based generative models. However, these methods are only optimal under an unlimited sampling budget, and are usually applied to a generator trained ind…
External link:
http://arxiv.org/abs/2311.00460
Achieving a balance between image quality (precision) and diversity (recall) is a significant challenge in the domain of generative models. Current state-of-the-art models primarily rely on optimizing heuristics, such as the Fréchet Inception Dista…
External link:
http://arxiv.org/abs/2305.18910
Author:
Gnecco-Heredia, Lucas, Chevaleyre, Yann, Negrevergne, Benjamin, Meunier, Laurent, Pydi, Muni Sreenivas
Deep neural networks are known to be vulnerable to small adversarial perturbations in test data. To defend against adversarial attacks, probabilistic classifiers have been proposed as an alternative to deterministic ones. However, literature has conf…
External link:
http://arxiv.org/abs/2302.07221
Generative models can have distinct modes of failure, such as mode dropping and low-quality samples, which cannot be captured by a single scalar metric. To address this, recent works propose evaluating generative models using precision and recall, where…
External link:
http://arxiv.org/abs/2302.00628
A new variant of Newton's method for empirical risk minimization is studied, where at each iteration of the optimization algorithm, the gradient and Hessian of the objective function are replaced by robust estimators taken from existing literature on…
External link:
http://arxiv.org/abs/2301.13192
Author:
Pydi, Muni Sreenivas, Jog, Varun
Adversarial risk quantifies the performance of classifiers on adversarially perturbed data. Numerous definitions of adversarial risk -- not all mathematically rigorous and differing subtly in the details -- have appeared in the literature. In this pa…
External link:
http://arxiv.org/abs/2201.08956
Author:
Pydi, Muni Sreenivas, Jog, Varun
Modern machine learning algorithms perform poorly on adversarially manipulated data. Adversarial risk quantifies the error of classifiers in adversarial settings; adversarial classifiers minimize adversarial risk. In this paper, we analyze adversaria…
External link:
http://arxiv.org/abs/1912.02794