Showing 1 - 10 of 1,170
for search: '"Gardner, P. R."'
Author:
Wenger, Jonathan, Wu, Kaiwen, Hennig, Philipp, Gardner, Jacob R., Pleiss, Geoff, Cunningham, John P.
Model selection in Gaussian processes scales prohibitively with the size of the training dataset, both in time and memory. While many approximations exist, all incur inevitable approximation error. Recent work accounts for this error in the form of computational uncertainty…
External link:
http://arxiv.org/abs/2411.01036
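For context on the scaling mentioned above: exact GP model selection maximizes the log marginal likelihood (textbook material, not specific to this paper), which couples a linear solve with a log-determinant of the $n \times n$ kernel matrix:

$$
\log p(\mathbf{y} \mid \theta) = -\tfrac{1}{2}\,\mathbf{y}^\top (K_\theta + \sigma^2 I)^{-1} \mathbf{y} \;-\; \tfrac{1}{2}\log\det(K_\theta + \sigma^2 I) \;-\; \tfrac{n}{2}\log 2\pi,
$$

both of which cost $\mathcal{O}(n^3)$ time and $\mathcal{O}(n^2)$ memory when computed exactly.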
Author:
Wu, Kaiwen, Gardner, Jacob R.
Elliptical slice sampling, when adapted to linearly truncated multivariate normal distributions, is a rejection-free Markov chain Monte Carlo method. At its core, it requires analytically constructing an ellipse-polytope intersection. The main novelty…
External link:
http://arxiv.org/abs/2407.10449
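The ellipse-polytope intersection mentioned above can be made concrete. In elliptical slice sampling, candidate states lie on the ellipse $x(\theta) = x\cos\theta + \nu\sin\theta$ with $\nu \sim \mathcal{N}(0, \Sigma)$, and each linear constraint $a^\top x \le b$ restricts $\theta$ to an arc that can be found in closed form. A minimal sketch of the single-constraint case (standard trigonometry, not the paper's algorithm, which concerns how the full intersection is constructed):

```python
import numpy as np

def active_angles(x, nu, a, b):
    """Angles theta where x*cos(theta) + nu*sin(theta) satisfies a @ point <= b.

    Writing p = a @ x, q = a @ nu, the constraint becomes
        p*cos(theta) + q*sin(theta) <= b,
    i.e. r*cos(theta - phi) <= b with r = hypot(p, q), phi = atan2(q, p).
    Returns the feasible subset of [0, 2*pi) as a list of (lo, hi) intervals.
    """
    p, q = a @ x, a @ nu
    r, phi = np.hypot(p, q), np.arctan2(q, p)
    if r <= b:                        # constraint holds on the whole ellipse
        return [(0.0, 2 * np.pi)]
    if r <= -b:                       # constraint fails everywhere (b < 0)
        return []
    # cos(theta - phi) <= b/r holds for theta - phi in [delta, 2*pi - delta]
    delta = np.arccos(b / r)
    lo, hi = (phi + delta) % (2 * np.pi), (phi - delta) % (2 * np.pi)
    if lo <= hi:
        return [(lo, hi)]
    return [(0.0, hi), (lo, 2 * np.pi)]
```

Intersecting the arcs returned for each face of the polytope yields the feasible slice from which $\theta$ is drawn uniformly, which is what makes the sampler rejection-free.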
Author:
Maus, Natalie, Kim, Kyurae, Pleiss, Geoff, Eriksson, David, Cunningham, John P., Gardner, Jacob R.
High-dimensional Bayesian optimization (BO) tasks such as molecular design often require 10,000 function evaluations before obtaining meaningful results. While methods like sparse variational Gaussian processes (SVGPs) reduce computational requirements…
External link:
http://arxiv.org/abs/2406.04308
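For reference, the SVGP surrogate this entry builds on is a few lines in GPyTorch; the sketch below is the stock model class from that library, shown as background, not the paper's method:

```python
import torch
import gpytorch

class SVGP(gpytorch.models.ApproximateGP):
    """Standard sparse variational GP: a learned set of inducing points
    summarizes the dataset, so training scales with the number of inducing
    points rather than the number of observations."""

    def __init__(self, inducing_points):
        variational_distribution = gpytorch.variational.CholeskyVariationalDistribution(
            inducing_points.size(0)
        )
        variational_strategy = gpytorch.variational.VariationalStrategy(
            self, inducing_points, variational_distribution,
            learn_inducing_locations=True,
        )
        super().__init__(variational_strategy)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x)
        )
```

Training maximizes the variational ELBO (e.g. gpytorch.mlls.VariationalELBO) over mini-batches, which is what keeps the per-step cost independent of the full dataset size.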
Author:
Guo, Wentao, Long, Jikai, Zeng, Yimeng, Liu, Zirui, Yang, Xinyu, Ran, Yide, Gardner, Jacob R., Bastani, Osbert, De Sa, Christopher, Yu, Xiaodong, Chen, Beidi, Xu, Zhaozhuo
Zeroth-order optimization (ZO) is a memory-efficient strategy for fine-tuning Large Language Models using only forward passes. However, the application of ZO fine-tuning in memory-constrained settings such as mobile phones and laptops is still challenging…
External link:
http://arxiv.org/abs/2406.02913
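The "forward passes only" idea above is usually realized as a two-point (SPSA-style) gradient estimate; the seed-replay trick below, popularized by MeZO, avoids ever materializing the perturbation vector. A hedged sketch (the function names and loss_fn interface are illustrative, not from the paper):

```python
import torch

@torch.no_grad()
def zo_step(model, loss_fn, batch, lr=1e-6, eps=1e-3, seed=0):
    """One SPSA-style zeroth-order step: two forward passes along a shared
    random direction z estimate the directional derivative; replaying the
    RNG seed regenerates z on demand, so z is never stored."""
    def perturb(scale):
        torch.manual_seed(seed)
        for p in model.parameters():
            p.add_(scale * eps * torch.randn_like(p))

    perturb(+1.0)                    # theta + eps * z
    loss_plus = loss_fn(model, batch)
    perturb(-2.0)                    # theta - eps * z
    loss_minus = loss_fn(model, batch)
    perturb(+1.0)                    # back to theta

    g = (loss_plus - loss_minus) / (2 * eps)   # scalar directional derivative
    torch.manual_seed(seed)          # replay the same z for the update
    for p in model.parameters():
        p.add_(-lr * g * torch.randn_like(p))
```

Because only activations of a forward pass are ever held in memory, the peak footprint is close to that of inference, which is what makes the approach attractive on phones and laptops.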
Author:
Wu, Kaiwen, Gardner, Jacob R.
Stochastic natural gradient variational inference (NGVI) is a popular posterior inference method with applications in various probabilistic models. Despite its wide usage, little is known about the non-asymptotic convergence rate in the \emph{stochastic} setting…
External link:
http://arxiv.org/abs/2406.01870
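For orientation, in conjugate exponential-family models the stochastic NGVI update takes the well-known convex-combination form in natural-parameter space (the classical stochastic variational inference update, stated here only as background, not a result of the paper above):

$$
\lambda_{t+1} = (1 - \rho_t)\,\lambda_t + \rho_t\,\widehat{\lambda}_t,
$$

where $\lambda_t$ are the natural parameters of the variational distribution, $\rho_t$ is the step size, and $\widehat{\lambda}_t$ is an unbiased mini-batch estimate of the optimal coordinate update. The non-asymptotic behavior of these stochastic iterates is what the abstract says is poorly understood.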
Optimization objectives in the form of a sum of intractable expectations are rising in importance (e.g., diffusion models, variational autoencoders, and many more), a setting also known as "finite sum with infinite data." For these problems, a popular…
External link:
http://arxiv.org/abs/2406.00920
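The "finite sum with infinite data" objective has the form $F(\theta) = \frac{1}{n}\sum_{i=1}^{n} \mathbb{E}_{z \sim p_i}[f_i(\theta, z)]$, and the estimator alluded to above subsamples along both axes at once: a mini-batch over the finite sum and Monte Carlo draws for each intractable expectation. A minimal sketch (grad_f and sample_z are illustrative placeholders):

```python
import numpy as np

def doubly_stochastic_grad(grad_f, theta, n_terms, sample_z, batch=32, mc=4, rng=None):
    """Unbiased gradient estimate for F(theta) = (1/n) sum_i E_z[f_i(theta, z)]:
    subsample a mini-batch of terms i (the finite sum) AND, for each term,
    a few Monte Carlo draws z (the intractable expectation)."""
    rng = rng or np.random.default_rng()
    idx = rng.choice(n_terms, size=batch, replace=False)
    g = np.zeros_like(theta)
    for i in idx:
        for _ in range(mc):
            g += grad_f(i, theta, sample_z(i, rng))
    return g / (batch * mc)
```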
The expectation maximization (EM) algorithm is a widespread method for empirical Bayesian inference, but its expectation step (E-step) is often intractable. Employing a stochastic approximation scheme with Markov chain Monte Carlo (MCMC) can circumvent…
External link:
http://arxiv.org/abs/2402.17870
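As background for this entry: MCMC-EM replaces the intractable E-step expectation with draws from a Markov chain targeting the posterior over the latents, warm-started between iterations. A schematic sketch with hypothetical mcmc_step and m_step callables (stochastic-approximation variants of the kind the abstract refers to additionally average across iterations with decaying weights):

```python
def mcmc_em(theta, z, mcmc_step, m_step, n_iters=100, n_mcmc=10):
    """Monte Carlo EM: the E-step expectation over p(z | x, theta) is replaced
    by samples from a Markov chain, followed by an M-step on the sampled latents."""
    for _ in range(n_iters):
        # E-step: a few MCMC transitions targeting p(z | x, theta),
        # warm-started at the previous latent state
        for _ in range(n_mcmc):
            z = mcmc_step(z, theta)
        # M-step: update theta using the sampled completion of the data
        theta = m_step(z, theta)
    return theta, z
```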
Variational families with full-rank covariance approximations are known not to work well in black-box variational inference (BBVI), both empirically and theoretically. In fact, recent computational complexity results for BBVI have established that full-rank…
External link:
http://arxiv.org/abs/2401.10989
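For concreteness, "full-rank" here means parameterizing $q(z) = \mathcal{N}(\mu, LL^\top)$ by a dense Cholesky factor $L$, so the covariance carries $d(d+1)/2$ free parameters. A generic reparameterized ELBO estimator of this form (a sketch with an illustrative log_joint callable, not code from the paper):

```python
import math
import torch

def full_rank_elbo(log_joint, mu, L_raw, n_samples=8):
    """Reparameterized ELBO estimate for a full-rank Gaussian q(z) = N(mu, L L^T).
    L is a dense Cholesky factor: the strictly lower triangle of L_raw plus a
    softplus-positive diagonal, i.e. d(d+1)/2 free covariance parameters."""
    d = mu.numel()
    L = torch.tril(L_raw, diagonal=-1) + torch.diag(
        torch.nn.functional.softplus(torch.diagonal(L_raw))
    )
    eps = torch.randn(n_samples, d)
    z = mu + eps @ L.T                        # reparameterization: z = mu + L @ eps
    entropy = 0.5 * d * (1 + math.log(2 * math.pi)) + torch.log(torch.diagonal(L)).sum()
    return log_joint(z).mean() + entropy      # ELBO = E_q[log p(x, z)] + H[q]
```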
Author:
Kasumba, Robert, Marticorena, Dom CP, Pahor, Anja, Ramani, Geetha, Goffney, Imani, Jaeggi, Susanne M, Seitz, Aaron, Gardner, Jacob R, Barbour, Dennis L
Cognitive modeling commonly relies on asking participants to complete a battery of varied tests in order to estimate attention, working memory, and other latent variables. In many cases, these tests result in highly variable observation models…
External link:
http://arxiv.org/abs/2312.09316
Training and inference in Gaussian processes (GPs) require solving linear systems with $n\times n$ kernel matrices. To address the prohibitive $\mathcal{O}(n^3)$ time complexity, recent work has employed fast iterative methods, like conjugate gradients…
External link:
http://arxiv.org/abs/2310.17137
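As a reference point for this entry, plain conjugate gradients solves $Kv = y$ using only matrix-vector products with the kernel matrix, trading the $\mathcal{O}(n^3)$ factorization for $\mathcal{O}(n^2)$ per iteration. A minimal textbook sketch (not the paper's preconditioned or batched variants):

```python
import numpy as np

def conjugate_gradients(matvec, y, tol=1e-6, max_iters=1000):
    """Solve K v = y for symmetric positive definite K, accessed only through
    matrix-vector products. Each iteration costs one matvec -- O(n^2) for a
    dense kernel matrix -- instead of an O(n^3) Cholesky factorization."""
    v = np.zeros_like(y)
    r = y - matvec(v)              # residual
    p = r.copy()                   # search direction
    rs = r @ r
    for _ in range(max_iters):
        Kp = matvec(p)
        alpha = rs / (p @ Kp)      # exact line search along p
        v += alpha * p
        r -= alpha * Kp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:  # converged: residual norm below tolerance
            break
        p = r + (rs_new / rs) * p  # K-conjugate update of the direction
        rs = rs_new
    return v
```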