Showing 1 - 10 of 10 for search: '"Camuto, Alexander"'
Author:
South, Tobin, Camuto, Alexander, Jain, Shrey, Nguyen, Shayla, Mahari, Robert, Paquin, Christian, Morton, Jason, Pentland, Alex 'Sandy'
In a world of increasingly prevalent closed-source commercial machine learning models, model evaluations from developers must be taken at face value. These benchmark results, whether over task accuracy, bias evaluations, or safety checks, are traditionally impossible…
External link:
http://arxiv.org/abs/2402.02675
Author:
Camuto, Alexander, Deligiannidis, George, Erdogdu, Murat A., Gürbüzbalaban, Mert, Şimşekli, Umut, Zhu, Lingjiong
Understanding generalization in deep learning has been one of the major challenges in statistical learning theory over the last decade. While recent work has illustrated that the dataset and the training algorithm must be taken into account in order…
External link:
http://arxiv.org/abs/2106.04881
Author:
Camuto, Alexander, Willetts, Matthew
Published in:
AISTATS 2022
In this work we study Variational Autoencoders (VAEs) from the perspective of harmonic analysis. By viewing a VAE's latent space as a Gaussian Space, a variety of measure space, we derive a series of results showing that the encoder variance of a VAE… (sketch below)
External link:
http://arxiv.org/abs/2105.14866
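The result sketched in this abstract, that the encoder variance acts as a smoothing bandwidth on the decoder, can be illustrated numerically. A minimal sketch in numpy, with a toy one-dimensional "decoder" of my own invention standing in for a trained network: averaging the decoder over reparameterised samples z = mu + sigma*eps Gaussian-smooths it, and larger encoder variance erases high-frequency structure first.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy 1-D "decoder": a low-frequency plus a high-frequency component.
    def decoder(z):
        return np.sin(z) + 0.3 * np.sin(15.0 * z)

    # Monte Carlo estimate of E[decoder(mu + sigma * eps)], eps ~ N(0, 1),
    # i.e. the decoder smoothed by the encoder's Gaussian with std sigma.
    def smoothed_decoder(mu, sigma, n_samples=20000):
        eps = rng.standard_normal(n_samples)
        return decoder(mu + sigma * eps).mean()

    mu_grid = np.linspace(-3, 3, 7)
    for sigma in (0.01, 0.3, 1.0):
        vals = [smoothed_decoder(mu, sigma) for mu in mu_grid]
        print(f"sigma={sigma:4.2f} ->", np.round(vals, 3))
    # Larger encoder variance averages the decoder over a wider Gaussian
    # kernel, wiping out the 15x high-frequency term first: the low-pass
    # behaviour that the harmonic-analysis view formalises.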
Published in:
AISTATS 2022
We introduce an approach for training Variational Autoencoders (VAEs) that are certifiably robust to adversarial attack. Specifically, we first derive actionable bounds on the minimal size of an input perturbation required to change a VAE's reconstruction… (sketch below)
External link:
http://arxiv.org/abs/2102.07559
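The paper's certified bound is not reproduced here; the following is only a hedged sketch of the quantity such a bound certifies: the largest perturbation radius at which sampled perturbations leave the reconstruction essentially unchanged. The linear encoder/decoder and the tolerance are stand-in assumptions.

    import numpy as np

    rng = np.random.default_rng(1)

    # Stand-in "VAE": a linear encoder mean and linear decoder. Real
    # certification would use the trained network and the paper's bound.
    W_enc = rng.standard_normal((4, 16)) / 4.0
    W_dec = rng.standard_normal((16, 4)) / 4.0

    def reconstruct(x):
        return W_dec @ (W_enc @ x)

    def empirical_robust_radius(x, tol=0.1, radii=np.linspace(0.01, 2.0, 40),
                                n_probe=200):
        """Largest probed radius r such that every sampled perturbation of
        L2 norm r moves the reconstruction by less than `tol`."""
        base = reconstruct(x)
        largest = 0.0
        for r in radii:
            d = rng.standard_normal((n_probe, x.size))
            d = r * d / np.linalg.norm(d, axis=1, keepdims=True)
            shifts = [np.linalg.norm(reconstruct(x + di) - base) for di in d]
            if max(shifts) < tol:
                largest = r
            else:
                break
        return largest

    x = rng.standard_normal(16)
    print("empirical robustness radius:", empirical_robust_radius(x))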
Author:
Camuto, Alexander, Wang, Xiaoyu, Zhu, Lingjiong, Holmes, Chris, Gürbüzbalaban, Mert, Şimşekli, Umut
Gaussian noise injections (GNIs) are a family of simple and widely used regularisation methods for training neural networks, where one injects additive or multiplicative Gaussian noise into the network activations at every iteration of the optimisation… (sketch below)
External link:
http://arxiv.org/abs/2102.07006
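The injection scheme itself is simple to show. A minimal sketch, assuming a plain tanh feed-forward network in numpy; the `additive` flag switches between the additive and multiplicative noise families the abstract names.

    import numpy as np

    rng = np.random.default_rng(2)

    def forward_with_gni(x, weights, sigma=0.1, additive=True):
        """One forward pass that injects Gaussian noise into every layer's
        activations, as done at each optimisation step during training."""
        h = x
        for W in weights:
            h = np.tanh(W @ h)
            if additive:
                h = h + sigma * rng.standard_normal(h.shape)          # h + eps
            else:
                h = h * (1.0 + sigma * rng.standard_normal(h.shape))  # h * (1 + eps)
        return h

    weights = [rng.standard_normal((8, 8)) / np.sqrt(8) for _ in range(3)]
    x = rng.standard_normal(8)
    print(forward_with_gni(x, weights, additive=True))
    print(forward_with_gni(x, weights, additive=False))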
Published in:
AISTATS 2021
We make inroads into understanding the robustness of Variational Autoencoders (VAEs) to adversarial attacks and other input perturbations. While previous work has developed algorithmic approaches to attacking and defending VAEs, there remains a lack… (sketch below)
External link:
http://arxiv.org/abs/2007.07365
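For concreteness, one common formalisation of such attacks (not necessarily this paper's exact setup) is a latent-space attack: find a small perturbation d pulling the encoding of x toward that of a target input. A sketch with a stand-in linear encoder, so the gradient can be written in closed form:

    import numpy as np

    rng = np.random.default_rng(3)

    # Stand-in linear encoder mean; a real attack differentiates through
    # the trained encoder network instead.
    W = rng.standard_normal((4, 16)) / 4.0
    enc = lambda x: W @ x

    def latent_attack(x, x_target, lam=0.05, lr=0.1, steps=300):
        """Find a small d pulling enc(x + d) toward enc(x_target):
        minimise ||enc(x + d) - enc(x_t)||^2 + lam * ||d||^2."""
        d = np.zeros_like(x)
        zt = enc(x_target)
        for _ in range(steps):
            grad = 2 * W.T @ (enc(x + d) - zt) + 2 * lam * d
            d -= lr * grad
        return d

    x, x_t = rng.standard_normal(16), rng.standard_normal(16)
    d = latent_attack(x, x_t)
    print("latent gap before:", np.linalg.norm(enc(x) - enc(x_t)).round(3))
    print("latent gap after: ", np.linalg.norm(enc(x + d) - enc(x_t)).round(3))
    print("perturbation norm:", np.linalg.norm(d).round(3))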
Published in:
Advances in Neural Information Processing Systems 33 (2020)
We study the regularisation induced in neural networks by Gaussian noise injections (GNIs). Though such injections have been extensively studied when applied to data, there have been few studies on understanding the regularising effect they induce when… (sketch below)
External link:
http://arxiv.org/abs/2007.07368
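The flavour of an induced-regularisation result can be previewed with a generic second-order Taylor identity (not the paper's specific regulariser): for additive noise eps ~ N(0, sigma^2 I), E[L(h + eps)] equals L(h) + (sigma^2 / 2) tr(Hessian of L) up to higher-order terms, exactly so for a quadratic loss. A numeric check:

    import numpy as np

    rng = np.random.default_rng(4)

    # Quadratic loss L(h) = 0.5 * h^T A h, so the Hessian is exactly A and
    # E[L(h + eps)] = L(h) + (sigma^2 / 2) * trace(A) holds exactly.
    A = rng.standard_normal((6, 6))
    A = A @ A.T  # symmetric positive semi-definite
    L = lambda h: 0.5 * h @ A @ h

    h = rng.standard_normal(6)
    sigma = 0.3
    X = h + sigma * rng.standard_normal((100000, 6))
    mc = (0.5 * np.einsum('ni,ij,nj->n', X, A, X)).mean()

    print("Monte Carlo  E[L(h+eps)]:", round(mc, 4))
    print("L(h) + (s^2/2) trace(A) :", round(L(h) + 0.5 * sigma**2 * np.trace(A), 4))
    # The gap between noisy and clean loss is the implicit penalty that
    # noise injection adds; the paper characterises the analogous penalty
    # for noise injected into every layer's activations.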
Published in:
AISTATS 2021
Separating high-dimensional data like images into independent latent factors, i.e. independent component analysis (ICA), remains an open research problem. As we show, existing probabilistic deep generative models (DGMs), which are tailor-made for image… (sketch below)
External link:
http://arxiv.org/abs/2002.07766
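As a reference point for the linear case that deep generative ICA generalises, classical linear ICA is readily available in scikit-learn. The paper's image-scale method is not sketched here; this only shows the baseline problem being lifted:

    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(5)

    # Two independent non-Gaussian sources, linearly mixed.
    n = 5000
    S = np.column_stack([rng.laplace(size=n), rng.uniform(-1, 1, size=n)])
    A = np.array([[1.0, 0.6], [0.4, 1.0]])  # mixing matrix
    X = S @ A.T

    # Classical linear ICA recovers the sources up to scale and permutation.
    ica = FastICA(n_components=2, random_state=0)
    S_hat = ica.fit_transform(X)

    # Correlate recovered components with the true sources.
    corr = np.corrcoef(S.T, S_hat.T)[:2, 2:]
    print(np.round(np.abs(corr), 2))  # near-identity up to permutation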
We develop a new method for regularising neural networks. We learn a probability distribution over the activations of all layers of the model and then insert imputed values into the network during training. We obtain a posterior for an arbitrary subset… (sketch below)
External link:
http://arxiv.org/abs/1909.11507
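The snippet specifies the mechanism only loosely, so the following is a loose sketch of the stated idea: fit a distribution over a layer's activations and overwrite a random subset of units with draws from it during training. The Gaussian running-moment fit, the imputation rate, and the single-layer setting are all assumptions.

    import numpy as np

    rng = np.random.default_rng(6)

    class ActivationImputer:
        """Track a running Gaussian over a layer's activations and, during
        training, overwrite a random subset of units with draws from it."""

        def __init__(self, dim, rate=0.2, momentum=0.99):
            self.mu = np.zeros(dim)
            self.var = np.ones(dim)
            self.rate, self.m = rate, momentum

        def __call__(self, h, training=True):
            # Update running statistics of the activation distribution.
            self.mu = self.m * self.mu + (1 - self.m) * h
            self.var = self.m * self.var + (1 - self.m) * (h - self.mu) ** 2
            if not training:
                return h
            mask = rng.random(h.shape) < self.rate
            sampled = self.mu + np.sqrt(self.var) * rng.standard_normal(h.shape)
            return np.where(mask, sampled, h)  # impute a random subset

    imputer = ActivationImputer(dim=8)
    h = np.tanh(rng.standard_normal(8))
    print(imputer(h))                  # some units replaced by imputed draws
    print(imputer(h, training=False))  # unchanged at evaluation time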
Published in:
International Conference on Learning Representations (ICLR) 2021
Variational autoencoders (VAEs) have recently been shown to be vulnerable to adversarial attacks, wherein they are fooled into reconstructing a chosen target image. However, how to defend against such attacks remains an open problem. We make significant… (sketch below)
External link:
http://arxiv.org/abs/1906.00230
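One common way to score such target-image attacks (and so to compare defences) is to test whether the adversarial input's reconstruction lands closer to the target than to the original's reconstruction; the defence itself is paper-specific and not reproduced. A hedged sketch with a stand-in reconstruction map:

    import numpy as np

    rng = np.random.default_rng(7)

    # Stand-in reconstruction map; in practice this is dec(enc(x)) of the
    # (defended or undefended) trained VAE.
    W = rng.standard_normal((16, 16)) / 4.0
    reconstruct = lambda x: W @ np.tanh(W.T @ x)

    def attack_success(x, x_adv, x_target):
        """The attack counts as successful if the adversarial reconstruction
        is closer to the target image than to the original reconstruction."""
        r_adv = reconstruct(x_adv)
        to_target = np.linalg.norm(r_adv - x_target)
        to_orig = np.linalg.norm(r_adv - reconstruct(x))
        return to_target < to_orig, to_target, to_orig

    x, x_target = rng.standard_normal(16), rng.standard_normal(16)
    x_adv = x + 0.1 * rng.standard_normal(16)  # placeholder perturbation
    print(attack_success(x, x_adv, x_target))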