Showing 1 - 10 of 3,539 for search: '"Cunningham John A"'
Author:
Wenger, Jonathan, Wu, Kaiwen, Hennig, Philipp, Gardner, Jacob R., Pleiss, Geoff, Cunningham, John P.
Model selection in Gaussian processes scales prohibitively with the size of the training dataset, both in time and memory. While many approximations exist, all incur inevitable approximation error. Recent work accounts for this error in the form of c…
External link:
http://arxiv.org/abs/2411.01036
Classic tree-based ensembles generalize better than any single decision tree. In contrast, recent empirical studies find that modern ensembles of (overparameterized) neural networks may not provide any inherent generalization advantage over single bu…
External link:
http://arxiv.org/abs/2410.16201
Author:
Jesson, Andrew, Beltran-Velez, Nicolas, Chu, Quentin, Karlekar, Sweta, Kossen, Jannik, Gal, Yarin, Cunningham, John P., Blei, David
This paper presents a method for estimating the hallucination rate for in-context learning (ICL) with generative AI. In ICL, a conditional generative model (CGM) is prompted with a dataset and a prediction question and asked to generate a response. O…
External link:
http://arxiv.org/abs/2406.07457
Author:
Maus, Natalie, Kim, Kyurae, Pleiss, Geoff, Eriksson, David, Cunningham, John P., Gardner, Jacob R.
High-dimensional Bayesian optimization (BO) tasks such as molecular design often require 10,000 function evaluations before obtaining meaningful results. While methods like sparse variational Gaussian processes (SVGPs) reduce computational requirements…
External link:
http://arxiv.org/abs/2406.04308
Author:
Biderman, Dan, Portes, Jacob, Ortiz, Jose Javier Gonzalez, Paul, Mansheej, Greengard, Philip, Jennings, Connor, King, Daniel, Havens, Sam, Chiley, Vitaliy, Frankle, Jonathan, Blakeney, Cody, Cunningham, John P.
Low-Rank Adaptation (LoRA) is a widely used parameter-efficient finetuning method for large language models. LoRA saves memory by training only low-rank perturbations to selected weight matrices. In this work, we compare the performance of LoRA and f…
External link:
http://arxiv.org/abs/2405.09673
Published in:
NeurIPS 2023
Diffusion models have been successful on a range of conditional generation tasks including molecular design and text-to-image generation. However, these achievements have primarily depended on task-specific conditional training or error-prone heuristics…
External link:
http://arxiv.org/abs/2306.17775
Published in:
Appl. Phys. Lett. 124, 202407 (2024)
Magnetic skyrmions in thin films with perpendicular magnetic anisotropy are promising candidates for magnetic memory and logic devices, making the development of ways to transport skyrmions efficiently and precisely of significant interest. Here, we…
External link:
http://arxiv.org/abs/2305.16006
Classic results establish that encouraging predictive diversity improves performance in ensembles of low-capacity models, e.g. through bagging or boosting. Here we demonstrate that these intuitions do not apply to high-capacity neural network ensembles…
External link:
http://arxiv.org/abs/2302.00704
Variational autoencoders model high-dimensional data by positing low-dimensional latent variables that are mapped through a flexible distribution parametrized by a neural network. Unfortunately, variational autoencoders often suffer from posterior collapse…
External link:
http://arxiv.org/abs/2301.00537
Author:
Loaiza-Ganem, Gabriel, Ross, Brendan Leigh, Wu, Luhuan, Cunningham, John P., Cresswell, Jesse C., Caterini, Anthony L.
Likelihood-based deep generative models have recently been shown to exhibit pathological behaviour under the manifold hypothesis as a consequence of using high-dimensional densities to model data with low-dimensional structure. In this paper we propose…
External link:
http://arxiv.org/abs/2212.01265