Showing 1 - 5 of 5 for search: '"Selvam, Nikil"'
Tree-shaped graphical models are widely used for their tractability. However, they unfortunately lack expressive power as they require committing to a particular sparse dependency structure. We propose a novel class of generative models called mixtures…
External link:
http://arxiv.org/abs/2302.14202
With the increased use of machine learning systems for decision making, questions about the fairness properties of such systems start to take center stage. Most existing work on algorithmic fairness assumes complete observation of features at prediction…
External link:
http://arxiv.org/abs/2212.02474
How reliably can we trust the scores obtained from social bias benchmarks as faithful indicators of problematic social biases in a given language model? In this work, we study this question by contrasting social biases with non-social biases stemming…
External link:
http://arxiv.org/abs/2210.10040
Authors:
Sivashankar, Varun, Selvam, Nikil
Catastrophic overfitting is a phenomenon observed during Adversarial Training (AT) with the Fast Gradient Sign Method (FGSM) where the test robustness steeply declines over just one epoch in the training stage. Prior work has attributed this loss in…
External link:
http://arxiv.org/abs/2111.10754
Published in:
McKinsey Quarterly. 2020, Issue 4, p1-5. 5p. 1 Color Photograph.