Showing 1 - 10
of 18
for search: '"Daxberger, Erik"'
Author:
Ye, Hanrong, Zhang, Haotian, Daxberger, Erik, Chen, Lin, Lin, Zongyu, Li, Yanghao, Zhang, Bowen, You, Haoxuan, Xu, Dan, Gan, Zhe, Lu, Jiasen, Yang, Yinfei
This research aims to comprehensively explore building a multimodal foundation model for egocentric video understanding. To achieve this goal, we work on three fronts. First, as there is a lack of QA data for egocentric video understanding, we…
External link:
http://arxiv.org/abs/2410.07177
Author:
Daxberger, Erik, Weers, Floris, Zhang, Bowen, Gunter, Tom, Pang, Ruoming, Eichner, Marcin, Emmersberger, Michael, Yang, Yinfei, Toshev, Alexander, Du, Xianzhi
Sparse Mixture-of-Experts models (MoEs) have recently gained popularity due to their ability to decouple model size from inference efficiency by only activating a small subset of the model parameters for any given input token. As such, sparse MoEs…
External link:
http://arxiv.org/abs/2309.04354
Author:
Antorán, Javier, Janz, David, Allingham, James Urquhart, Daxberger, Erik, Barbano, Riccardo, Nalisnick, Eric, Hernández-Lobato, José Miguel
The linearised Laplace method for estimating model uncertainty has received renewed attention in the Bayesian deep learning community. The method provides reliable error bars and admits a closed-form expression for the model evidence, allowing for…
External link:
http://arxiv.org/abs/2206.08900
Deep neural networks are prone to overconfident predictions on outliers. Bayesian neural networks and deep ensembles have both been shown to mitigate this problem to some extent. In this work, we aim to combine the benefits of the two approaches by…
External link:
http://arxiv.org/abs/2111.03577
Author:
Daxberger, Erik, Kristiadi, Agustinus, Immer, Alexander, Eschenhagen, Runa, Bauer, Matthias, Hennig, Philipp
Bayesian formulations of deep learning have been shown to have compelling theoretical properties and offer practical functional benefits, such as improved predictive uncertainty quantification and model selection. The Laplace approximation (LA)…
External link:
http://arxiv.org/abs/2106.14806
Author:
Daxberger, Erik, Nalisnick, Eric, Allingham, James Urquhart, Antorán, Javier, Hernández-Lobato, José Miguel
The Bayesian paradigm has the potential to solve core issues of deep neural networks such as poor calibration and data inefficiency. Alas, scaling Bayesian inference to large weight spaces often requires restrictive approximations. In this work, we…
External link:
http://arxiv.org/abs/2010.14689
Many important problems in science and engineering, such as drug design, involve optimizing an expensive black-box objective function over a complex, high-dimensional, and structured input space. Although machine learning techniques have shown…
External link:
http://arxiv.org/abs/2006.09191
Despite their successes, deep neural networks may make unreliable predictions when faced with test data drawn from a distribution different to that of the training data, constituting a major problem for AI safety. While this has recently motivated…
External link:
http://arxiv.org/abs/1912.05651
Published in:
Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence (IJCAI-20), 2020, pages 2633-2639
The optimization of expensive-to-evaluate, black-box, mixed-variable functions, i.e. functions that have continuous and discrete inputs, is a difficult and yet pervasive problem in science and engineering. In Bayesian optimization (BO), special cases…
External link:
http://arxiv.org/abs/1907.01329
In recent years a number of large-scale triple-oriented knowledge graphs have been generated and various models have been proposed to perform learning in those graphs. Most knowledge graphs are static and reflect the world in its current state.…
External link:
http://arxiv.org/abs/1807.00228