Showing 1 - 10 of 52 for search: '"Dillon, Joshua V"'
Author:
Birodkar, Vighnesh, Barcik, Gabriel, Lyon, James, Ioffe, Sergey, Minnen, David, Dillon, Joshua V.
For learned image representations, basic autoencoders often produce blurry results. Reconstruction quality can be improved by incorporating additional penalties such as adversarial (GAN) and perceptual losses. Arguably, these approaches lack a princi…
External link:
http://arxiv.org/abs/2409.02529
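The perceptual losses mentioned in this abstract measure reconstruction error in a feature space rather than in pixel space. A minimal sketch, using a fixed random projection as a stand-in for the feature extractor (a real perceptual loss would typically use activations of a pretrained network such as VGG):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "feature extractor": a fixed random projection. A real perceptual
# loss would instead use activations of a pretrained network (e.g. VGG).
W = rng.normal(size=(64, 784)) / np.sqrt(784)

def perceptual_loss(x, x_hat):
    # Mean squared error between feature representations, not raw pixels
    return float(np.mean((W @ x - W @ x_hat) ** 2))
```

Because the error is taken in feature space, reconstructions that differ from the target by small pixel shifts are penalized less than in a plain pixel-wise MSE, which is why such losses reduce blurriness.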
Author:
Streeter, Matthew, Dillon, Joshua V.
It is often useful to have polynomial upper or lower bounds on a one-dimensional function that are valid over a finite interval, called a trust region. A classical way to produce polynomial bounds of degree $k$ involves bounding the range of the $k$t…
External link:
http://arxiv.org/abs/2308.00679
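The classical construction the abstract alludes to follows from the Lagrange form of the Taylor remainder: if the $k$-th derivative is bounded on the trust region, substituting its extremes yields polynomial upper and lower bounds. A minimal sketch for $k = 2$, assuming the derivative bounds are supplied by the caller:

```python
import math

def quadratic_bounds(f0, df0, m, M, x0, x):
    # Lagrange form: f(x) = f(x0) + f'(x0)(x - x0) + f''(xi)(x - x0)^2 / 2
    # for some xi in the trust region. If m <= f'' <= M there, then since
    # (x - x0)^2 >= 0 the two substitutions below bracket f(x).
    t = f0 + df0 * (x - x0)
    lower = t + m * (x - x0) ** 2 / 2.0
    upper = t + M * (x - x0) ** 2 / 2.0
    return lower, upper

# Example: f = exp on trust region [0, 1] with x0 = 0, where 1 <= exp'' <= e
lo, hi = quadratic_bounds(1.0, 1.0, 1.0, math.e, 0.0, 0.5)
```

At x = 0.5 this brackets exp(0.5) ≈ 1.6487 between 1.625 and about 1.840.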
Author:
Vedadi, Elahe, Dillon, Joshua V., Mansfield, Philip Andrew, Singhal, Karan, Afkanpour, Arash, Morningstar, Warren Richard
Conventional federated learning algorithms train a single global model by leveraging all participating clients' data. However, due to heterogeneity in client generative distributions and predictive models, these approaches may not appropriately appro…
External link:
http://arxiv.org/abs/2305.13672
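The "single global model" baseline this abstract critiques is conventionally trained with federated averaging. A minimal sketch of the aggregation step (parameters flattened to arrays; the function name and interface are illustrative):

```python
import numpy as np

def fedavg(client_params, client_sizes):
    # Aggregate client parameter vectors, weighting each client by its
    # local dataset size (the conventional single-global-model update)
    total = float(sum(client_sizes))
    return sum(p * (n / total) for p, n in zip(client_params, client_sizes))
```

With heterogeneous clients, this single averaged parameter vector may serve no individual client's distribution well, which motivates the personalized approach described above.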
Author:
Streeter, Matthew, Dillon, Joshua V.
We present a new algorithm for automatically bounding the Taylor remainder series. In the special case of a scalar function $f: \mathbb{R} \to \mathbb{R}$, our algorithm takes as input a reference point $x_0$, trust region $[a, b]$, and integer $k$…
External link:
http://arxiv.org/abs/2212.11429
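To make the setup concrete: the Taylor remainder $R_k(x) = f(x) - T_k(x)$ can be bounded over a trust region whenever the $(k{+}1)$-th derivative can. A hand-worked sketch for $f = \exp$ (this illustrates the classical Lagrange bound, not the paper's automatic algorithm):

```python
import math

def taylor_exp(x, x0, k):
    # Degree-k Taylor polynomial of exp around x0
    return sum(math.exp(x0) * (x - x0) ** i / math.factorial(i)
               for i in range(k + 1))

def exp_remainder_bound(x, x0, k, b):
    # Lagrange remainder: |R_k(x)| <= max over [a, b] of |exp^(k+1)|
    # times |x - x0|^(k+1) / (k+1)!; exp is increasing, so the max is exp(b)
    return math.exp(b) * abs(x - x0) ** (k + 1) / math.factorial(k + 1)
```

For x = 0.7, x0 = 0, k = 3 on [0, 1], the true remainder is about 0.0116 while the bound is about 0.0272.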
Author:
Ruan, Yangjun, Singh, Saurabh, Morningstar, Warren, Alemi, Alexander A., Ioffe, Sergey, Fischer, Ian, Dillon, Joshua V.
Ensembling has proven to be a powerful technique for boosting model performance, uncertainty estimation, and robustness in supervised learning. Advances in self-supervised learning (SSL) enable leveraging large unlabeled corpora for state-of-the-art…
External link:
http://arxiv.org/abs/2211.09981
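In its simplest supervised form, the ensembling the abstract refers to averages the members' predictive distributions before classifying. A minimal sketch:

```python
import numpy as np

def ensemble_predict(member_probs):
    # Average the members' predictive distributions, then pick the argmax class
    return int(np.mean(member_probs, axis=0).argmax(axis=-1))

# Two of three members favor class 0, so the averaged distribution does too
members = [np.array([0.6, 0.4]), np.array([0.4, 0.6]), np.array([0.6, 0.4])]
```

Averaging distributions (rather than hard votes) also yields calibrated-looking uncertainty, which is part of why ensembles help uncertainty estimation.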
Author:
Dillon, Joshua V.
M-estimation represents a broad class of estimators, including least squares and maximum likelihood, and is a widely used tool for statistical inference. Its successful application, however, often requires negotiating physical resources for desired le…
External link:
http://hdl.handle.net/1853/42913
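An M-estimator generalizes least squares and maximum likelihood by solving an estimating equation $\sum_i \psi(x_i - \mu) = 0$ for a chosen $\psi$. A minimal sketch of a robust location estimate with the Huber $\psi$ (step size and iteration count are illustrative choices):

```python
import numpy as np

def huber_psi(r, delta=1.0):
    # Derivative of the Huber loss: identity near zero, clipped beyond delta
    return np.clip(r, -delta, delta)

def m_estimate_location(x, delta=1.0, steps=500, lr=0.1):
    # Solve the estimating equation sum_i psi(x_i - mu) = 0 by fixed-point
    # iteration; with psi(r) = r this recovers the sample mean (least squares)
    mu = float(np.median(x))
    for _ in range(steps):
        mu += lr * float(huber_psi(x - mu, delta).mean())
    return mu
```

Because the outlier's influence is clipped, the estimate stays near the bulk of the data where the sample mean would be dragged toward the outlier.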
In discriminative settings such as regression and classification there are two random variables at play, the inputs X and the targets Y. Here, we demonstrate that the Variational Information Bottleneck can be viewed as a compromise between fully empi…
External link:
http://arxiv.org/abs/2011.08711
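The Variational Information Bottleneck trades off predictive sufficiency for Y against compression of X; its compression ("rate") term is a KL divergence from the encoder distribution to a standard normal prior. A minimal sketch of that term for a diagonal-Gaussian encoder:

```python
import numpy as np

def vib_rate(mu, log_var):
    # KL( N(mu, diag(exp(log_var))) || N(0, I) ): the compression ("rate")
    # term of the VIB objective for a diagonal-Gaussian encoder
    return 0.5 * float(np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var))
```

The full objective adds the usual prediction loss, typically as cross-entropy plus a coefficient beta times this rate term.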
Published in:
International Conference on Artificial Intelligence and Statistics, 8270-8298, (2022)
The Bayesian posterior minimizes the "inferential risk", which itself bounds the "predictive risk". This bound is tight when the likelihood and prior are well-specified. However, since misspecification induces a gap, the Bayesian posterior predictive d…
External link:
http://arxiv.org/abs/2010.09629
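For context, the posterior predictive distribution the abstract refers to is the standard mixture of the likelihood over the posterior:

```latex
p(y \mid x, \mathcal{D}) = \int p(y \mid x, \theta)\, p(\theta \mid \mathcal{D})\, \mathrm{d}\theta
```

Under misspecification no single $\theta$ matches the data-generating distribution, so this mixture can predict poorly even when the posterior concentrates.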
Author:
Morningstar, Warren R., Ham, Cusuh, Gallagher, Andrew G., Lakshminarayanan, Balaji, Alemi, Alexander A., Dillon, Joshua V.
Perhaps surprisingly, recent studies have shown probabilistic model likelihoods have poor specificity for out-of-distribution (OOD) detection and often assign higher likelihoods to OOD data than in-distribution data. To ameliorate this issue we propo…
External link:
http://arxiv.org/abs/2006.09273
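The failure mode described here is visible even in a high-dimensional Gaussian: the mode has the highest density yet lies far from the typical set where samples actually concentrate. A minimal sketch of a typicality-style score (a simple stand-in for illustration, not the paper's exact statistic):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 100
train = rng.normal(size=(1000, d))  # "in-distribution" data: standard Gaussian

def log_lik(x):
    # Log density of a standard d-dimensional Gaussian
    return -0.5 * d * np.log(2.0 * np.pi) - 0.5 * np.sum(x ** 2, axis=-1)

typical_ll = log_lik(train).mean()

def ood_score(x):
    # Distance from the typical log-likelihood; large values flag atypicality
    return np.abs(log_lik(x) - typical_ll)
```

The all-zeros point has a higher likelihood than every training sample, yet its score flags it as atypical, exactly the dissociation between likelihood and typicality the abstract describes.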
Automatic Differentiation Variational Inference (ADVI) is a useful tool for efficiently learning probabilistic models in machine learning. Generally approximate posteriors learned by ADVI are forced to be unimodal in order to facilitate use of the re…
External link:
http://arxiv.org/abs/2003.01687
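The reparameterization trick the abstract alludes to is what makes ADVI's gradients low-variance, and it is also why the fitted posterior is unimodal: the variational family is a (transformed) Gaussian. A minimal mean-field sketch for a one-dimensional Gaussian target (the target, step sizes, and iteration counts are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical unnormalized target: log p(theta) of a N(2, 0.5^2) "posterior"
def grad_log_p(theta):
    return -(theta - 2.0) / 0.25

# Mean-field ADVI: fit q = N(mu, sigma^2) by stochastic gradient ascent on
# the ELBO, with gradients flowing through theta = mu + sigma * eps
mu, log_sigma, lr = 0.0, 0.0, 0.05
for _ in range(2000):
    eps = rng.normal(size=64)
    sigma = np.exp(log_sigma)
    g = grad_log_p(mu + sigma * eps)
    mu += lr * g.mean()
    log_sigma += lr * (np.mean(g * sigma * eps) + 1.0)  # +1: entropy gradient
```

The fitted q converges to roughly N(2, 0.5^2); because q is a single Gaussian, a multimodal target would be covered by one mode only, which is the restriction the paper addresses.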