Showing 1 - 10 of 26 for query: '"Bui Thang D."'
Accurately quantifying uncertainty in large language models (LLMs) is crucial for their reliable deployment, especially in high-stakes applications. Current state-of-the-art methods for measuring semantic uncertainty in LLMs rely on strict bidirectional …
External link:
http://arxiv.org/abs/2410.22685
Author:
Bui, Thang D.
Non-Gaussian likelihoods are essential for modelling complex real-world observations but pose significant computational challenges in learning and inference. Even with Gaussian priors, non-Gaussian likelihoods often lead to analytically intractable posteriors …
External link:
http://arxiv.org/abs/2410.20754
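An editorial aside on why non-Gaussian likelihoods break conjugacy (standard notation, assumed rather than quoted from the abstract): with a Gaussian prior $p(f)$ and likelihood $p(\mathbf{y} \mid f)$, the posterior is

```latex
p(f \mid \mathbf{y}) = \frac{p(\mathbf{y} \mid f)\, p(f)}{\int p(\mathbf{y} \mid f)\, p(f)\, \mathrm{d}f},
```

and the normalising integral has no closed form unless the likelihood is itself Gaussian, hence the need for approximate inference.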
Author:
Bui Thang D., Pham Chi K., Pham Thang H., Hoang Long T., Nguyen Thich V., Vu Thang Q., Detels Roger
Published in:
Bulletin of the World Health Organization, Vol 79, Iss 1, Pp 15-21 (2001)
OBJECTIVE: A cross-sectional survey was conducted in three districts of Quang Ninh province, Viet Nam, to find out what proportion of the people who lived there engaged in behaviour that put them at risk of becoming infected with HIV, and to measure …
External link:
https://doaj.org/article/70b78d16f9d6423cbde256c51cbf4d7c
Author:
Ashman, Matthew, Bui, Thang D., Nguyen, Cuong V., Markou, Stratis, Weller, Adrian, Swaroop, Siddharth, Turner, Richard E.
The proliferation of computing devices has brought about an opportunity to deploy machine learning models on new problem domains using previously inaccessible data. Traditional algorithms for training such models often require data to be stored on a …
External link:
http://arxiv.org/abs/2202.12275
Through sequential construction of posteriors on observing data online, Bayes' theorem provides a natural framework for continual learning. We develop Variational Auto-Regressive Gaussian Processes (VAR-GPs), a principled posterior updating mechanism …
External link:
http://arxiv.org/abs/2006.05468
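An editorial aside: the sequential construction of posteriors mentioned in this entry is the standard Bayesian recursion (notation assumed, not taken from the abstract),

```latex
p(\theta \mid \mathcal{D}_{1:t}) \;\propto\; p(\mathcal{D}_t \mid \theta)\, p(\theta \mid \mathcal{D}_{1:t-1}),
```

so the posterior after the first $t-1$ tasks plays the role of the prior for task $t$; VAR-GPs approximate this recursion in a Gaussian process model.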
Author:
Karaletsos, Theofanis, Bui, Thang D.
Probabilistic neural networks are typically modeled with independent weight priors, which do not capture weight correlations in the prior and do not provide a parsimonious interface to express properties in function space. A desirable class of priors …
External link:
http://arxiv.org/abs/2002.04033
In the continual learning setting, tasks are encountered sequentially. The goal is to learn whilst i) avoiding catastrophic forgetting, ii) efficiently using model capacity, and iii) employing forward and backward transfer learning. In this paper, we …
External link:
http://arxiv.org/abs/1905.02099
Partitioned Variational Inference: A unified framework encompassing federated and continual learning
Variational inference (VI) has become the method of choice for fitting many modern probabilistic models. However, practitioners are faced with a fragmented literature that offers a bewildering array of algorithmic options. First, the variational family …
External link:
http://arxiv.org/abs/1811.11206
This paper develops variational continual learning (VCL), a simple but general framework for continual learning that fuses online variational inference (VI) and recent advances in Monte Carlo VI for neural networks. The framework can successfully train …
External link:
http://arxiv.org/abs/1710.10628
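To illustrate the online posterior-as-prior recursion that VCL builds on, here is a minimal self-contained sketch using exact conjugate updates for Bayesian linear regression — a stand-in for the paper's variational neural-network updates, with all names and numbers illustrative:

```python
import numpy as np

def sequential_posterior(prior_mean, prior_prec, X, y, noise_var):
    """One online Bayes update for linear-Gaussian regression.

    The posterior from the previous batch (prior_mean, prior_prec)
    acts as the prior for the new batch (X, y) -- the same
    posterior-as-prior recursion that VCL applies variationally.
    """
    post_prec = prior_prec + X.T @ X / noise_var
    post_mean = np.linalg.solve(
        post_prec, prior_prec @ prior_mean + X.T @ y / noise_var
    )
    return post_mean, post_prec

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])  # illustrative ground-truth weights

# Start from a unit-Gaussian prior and stream two "tasks" (data batches).
mean, prec = np.zeros(2), np.eye(2)
for _ in range(2):
    X = rng.normal(size=(50, 2))
    y = X @ w_true + 0.1 * rng.normal(size=50)
    mean, prec = sequential_posterior(mean, prec, X, y, noise_var=0.01)

print(mean)  # the streamed posterior mean concentrates near w_true
```

In the conjugate case this recursion is exact; VCL's contribution is making the same scheme workable when the posterior over neural-network weights must be approximated.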
Sparse pseudo-point approximations for Gaussian process (GP) models provide a suite of methods that support deployment of GPs in the large data regime and enable analytic intractabilities to be sidestepped. However, the field lacks a principled method …
External link:
http://arxiv.org/abs/1705.07131
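For context on the pseudo-point approximations this entry refers to (the standard variational form, assumed rather than quoted from the abstract): with $M$ pseudo-points $\mathbf{u} = f(\mathbf{Z})$, the approximate posterior is typically taken as

```latex
q(f) = p(f_{\neq \mathbf{u}} \mid \mathbf{u})\, q(\mathbf{u}),
```

so only the $M$-dimensional distribution $q(\mathbf{u})$ is learned, reducing the $\mathcal{O}(N^3)$ cost of exact GP inference to $\mathcal{O}(NM^2)$.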