Showing 1 - 10 of 1,230 for search: '"HAYAKAWA, SATOSHI"'
Diffusion models have demonstrated exceptional performance in various fields of generative modeling. While they often outperform competitors, including VAEs and GANs, in sample quality and diversity, they suffer from slow sampling speed due to their iterative…
External link:
http://arxiv.org/abs/2410.08709
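The slow sampling mentioned in this abstract comes from the iterative denoising loop: each reverse step costs one network evaluation. A minimal sketch with a DDPM-style ancestral sampler (the noise model is a toy stand-in, not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(0)

T = 1000                                   # number of diffusion steps
betas = np.linspace(1e-4, 0.02, T)         # standard DDPM noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def toy_eps_model(x, t):
    """Stand-in for a trained noise-prediction network (hypothetical)."""
    return x / np.sqrt(1.0 + t)            # placeholder, not a trained model

# Ancestral sampling: T sequential model calls -- this is the bottleneck.
x = rng.standard_normal(16)                # start from pure noise
for t in reversed(range(T)):
    eps = toy_eps_model(x, t)              # one network evaluation per step
    coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
    x = (x - coef * eps) / np.sqrt(alphas[t])
    if t > 0:
        x += np.sqrt(betas[t]) * rng.standard_normal(16)

print("sample after", T, "sequential steps:", x[:4])
```

Generating one sample requires all T network calls in order, which is why reducing the number of sequential steps is the central speed-up target.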
Diffusion models have seen notable success in continuous domains, leading to the development of discrete diffusion models (DDMs) for discrete variables. Despite recent advances, DDMs face the challenge of slow sampling speeds. While parallel sampling…
External link:
http://arxiv.org/abs/2410.07761
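Why parallel sampling is hard for discrete models can be seen with a toy joint distribution: drawing all coordinates at once from their marginals discards the correlations that sequential sampling preserves (a hypothetical two-token example, not any specific DDM):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Toy joint over two binary tokens with strong dependence:
# P(x1 == x2) = 0.9, while both marginals are uniform.
# Sequential (ancestral) sampling respects the dependence.
x1 = rng.integers(0, 2, n)
x2 = np.where(rng.random(n) < 0.9, x1, 1 - x1)
print("sequential: P(x1 == x2) ~", np.mean(x1 == x2))   # ~0.9

# Naive parallel sampling draws each token independently from its marginal.
y1 = rng.integers(0, 2, n)
y2 = rng.integers(0, 2, n)
print("parallel:   P(y1 == y2) ~", np.mean(y1 == y2))   # ~0.5, dependence lost
```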
Author:
Adachi, Masaki, Hayakawa, Satoshi, Jørgensen, Martin, Hamid, Saad, Oberhauser, Harald, Osborne, Michael A.
Parallelisation in Bayesian optimisation is a common strategy but faces several challenges: the need for flexibility in acquisition functions and kernel choices, flexibility in dealing with discrete and continuous variables simultaneously, model misspecification…
External link:
http://arxiv.org/abs/2404.12219
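As a rough sketch of what batch selection in Bayesian optimisation involves, here is a generic greedy UCB batch with the "constant liar" fantasy heuristic (a common baseline; the paper's method is more flexible than this):

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf(a, b, ls=0.2):
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    """Exact GP regression posterior mean/variance at test points Xs."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    sol = np.linalg.solve(K, Ks)
    mu = sol.T @ y
    var = np.clip(np.diag(rbf(Xs, Xs) - Ks.T @ sol), 1e-12, None)
    return mu, var

f = lambda x: -np.sin(3 * x) - x**2 + 0.7 * x      # toy objective (hypothetical)
X = rng.uniform(-1.0, 2.0, 3)                      # initial design
y = f(X)
cand = np.linspace(-1.0, 2.0, 200)                 # candidate grid

# One round of batch selection: pick 4 points greedily by UCB,
# fantasizing each pending point at the current posterior mean.
batch, Xf, yf = [], X.copy(), y.copy()
for _ in range(4):
    mu, var = gp_posterior(Xf, yf, cand)
    x_next = cand[int(np.argmax(mu + 2.0 * np.sqrt(var)))]
    lie = gp_posterior(X, y, np.array([x_next]))[0][0]   # "constant liar"
    batch.append(x_next)
    Xf, yf = np.append(Xf, x_next), np.append(yf, lie)

print("batch to evaluate in parallel:", np.round(batch, 3))
```

Adding the fantasized point collapses the posterior variance at that location, which pushes later picks elsewhere and gives the batch its diversity.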
Author:
Hayakawa, Satoshi, Morimura, Tetsuro
Reward evaluation of episodes becomes a bottleneck in a broad range of reinforcement learning tasks. Our aim in this paper is to select a small but representative subset of a large batch of episodes, only on which we actually compute rewards for more…
External link:
http://arxiv.org/abs/2310.14768
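One classical way to pick a small representative subset is kernel herding, sketched here over hypothetical episode embeddings (the paper's actual selection rule may differ):

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend each row is a feature embedding of one episode (hypothetical).
episodes = rng.standard_normal((500, 8))

def rbf_matrix(A, B, ls=2.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ls**2))

K = rbf_matrix(episodes, episodes)
mean_embed = K.mean(axis=1)          # kernel mean of the full batch

# Greedy kernel herding: repeatedly add the episode whose kernel
# evaluations best track the full batch's kernel mean, so the small
# subset's empirical measure approximates the whole batch (small MMD).
selected = []
herd_sum = np.zeros(len(episodes))
for t in range(1, 11):               # pick 10 representative episodes
    scores = mean_embed - herd_sum / t
    scores[selected] = -np.inf       # sample without replacement
    i = int(np.argmax(scores))
    selected.append(i)
    herd_sum += K[i]

print("indices of representative episodes:", selected)
```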
Author:
Adachi, Masaki, Hayakawa, Satoshi, Jørgensen, Martin, Wan, Xingchen, Nguyen, Vu, Oberhauser, Harald, Osborne, Michael A.
Published in:
AISTATS 238, 496-504, 2024
Active learning parallelization is widely used, but typically relies on fixing the batch size throughout experimentation. This fixed approach is inefficient because of a dynamic trade-off between cost and speed: larger batches are more costly, smaller…
External link:
http://arxiv.org/abs/2306.05843
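The cost-speed trade-off can be made concrete with a toy model (all numbers hypothetical): if each round of batch size b yields diminishing information gain, larger batches finish in fewer sequential rounds (faster) but consume more total queries (costlier):

```python
import math

# Hypothetical model: a round of batch size b yields diminishing
# information gain g(b) = log2(1 + b); we need 10 "bits" in total.
# Wall-clock time ~ number of sequential rounds; cost ~ total queries.
target = 10.0
for b in (1, 2, 4, 8, 16):
    gain = math.log2(1 + b)
    rounds = math.ceil(target / gain)
    print(f"batch={b:2d}  rounds(time)={rounds:2d}  queries(cost)={rounds * b:3d}")
```

Because neither extreme dominates, the optimal batch size shifts as the experiment progresses, which is the motivation for adapting it.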
Published in:
Proceedings of the 40th International Conference on Machine Learning (ICML 2023), https://proceedings.mlr.press/v202/yamasaki23a.html
A significant challenge in the field of quantum machine learning (QML) is to establish applications of quantum computation to accelerate common tasks in machine learning, such as those for neural networks. The ridgelet transform has been a fundamental mathematical…
External link:
http://arxiv.org/abs/2301.11936
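For reference, the classical ridgelet transform of a function $f$ on $\mathbb{R}^d$ with respect to a ridgelet function $\psi$ is defined by (standard definition; the paper studies a quantum implementation of this kind of transform):

$$
\mathcal{R}_\psi f(a, b) \;=\; \int_{\mathbb{R}^d} f(x)\, \overline{\psi(a \cdot x - b)}\, \mathrm{d}x,
\qquad a \in \mathbb{R}^d,\; b \in \mathbb{R},
$$

so that $f$ is recovered as a superposition of ridge functions $x \mapsto \psi(a \cdot x - b)$, which mirrors the hidden units of a two-layer neural network.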
Author:
Adachi, Masaki, Hayakawa, Satoshi, Hamid, Saad, Jørgensen, Martin, Oberhauser, Harald, Osborne, Michael A.
Batch Bayesian optimisation and Bayesian quadrature have been shown to be sample-efficient methods for performing optimisation and quadrature where expensive-to-evaluate objective functions can be queried in parallel. However, current methods do not…
External link:
http://arxiv.org/abs/2301.11832
We analyze the Nyström approximation of a positive definite kernel associated with a probability measure. We first prove an improved error bound for the conventional Nyström approximation with i.i.d. sampling and singular-value decomposition…
External link:
http://arxiv.org/abs/2301.09517
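The construction being analyzed can be sketched in a few lines of numpy: sample m landmark points i.i.d., then approximate the full kernel matrix by a rank-m factorization through the landmarks (a minimal version, not the paper's improved estimator):

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(A, B, ls=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ls**2))

X = rng.standard_normal((1000, 5))       # samples from the underlying measure
m = 50                                   # number of landmarks
idx = rng.choice(len(X), m, replace=False)
Z = X[idx]                               # i.i.d. landmark points

K_nm = rbf(X, Z)                         # cross-kernel, only O(n m) entries
K_mm = rbf(Z, Z)
# Nystrom: K ~ K_nm @ pinv(K_mm) @ K_nm.T  (rank at most m)
K_approx = K_nm @ np.linalg.pinv(K_mm) @ K_nm.T

K_full = rbf(X, X)
err = np.linalg.norm(K_full - K_approx) / np.linalg.norm(K_full)
print(f"relative Frobenius error with {m} landmarks: {err:.3e}")
```

The appeal is that only the n-by-m and m-by-m blocks are ever formed, avoiding the full n-by-n kernel matrix; the error bounds the paper proves quantify how fast the approximation improves with m.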
Published in:
Proceedings of the Royal Society A, 2023
Given a probability measure $\mu$ on a set $\mathcal{X}$ and a vector-valued function $\varphi$, a common problem is to construct a discrete probability measure on $\mathcal{X}$ such that the push-forwards of these two probability measures under $\varphi$…
External link:
http://arxiv.org/abs/2210.05787
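One common formalization of this problem (the truncated sentence presumably asks for the push-forwards to match or be close): find points $x_1, \dots, x_n \in \mathcal{X}$ and weights $w_i \ge 0$ with $\sum_i w_i = 1$ such that

$$
\sum_{i=1}^{n} w_i\, \varphi(x_i) \;\approx\; \int_{\mathcal{X}} \varphi(x)\, \mathrm{d}\mu(x),
$$

with equality in the exact (recombination) setting. When $\varphi$ collects polynomials or kernel features, this is precisely a quadrature/cubature rule for $\mu$.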
Author:
Adachi, Masaki, Hayakawa, Satoshi, Jørgensen, Martin, Oberhauser, Harald, Osborne, Michael A.
Published in:
NeurIPS 35, 16533-16547 (2022)
Calculation of Bayesian posteriors and model evidences typically requires numerical integration. Bayesian quadrature (BQ), a surrogate-model-based approach to numerical integration, is capable of superb sample efficiency, but its lack of parallelisation…
External link:
http://arxiv.org/abs/2206.04734
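In its simplest 1D form, Bayesian quadrature places a GP on the integrand and returns a weighted sum of the observed values with closed-form weights (a minimal sketch for an RBF kernel and the uniform measure on [0, 1]; the paper's parallelised algorithm is more involved):

```python
import numpy as np
from scipy.special import erf

rng = np.random.default_rng(0)
ls = 0.3                                  # RBF length-scale

def k(a, b):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def kernel_mean(x):
    # z_i = integral_0^1 k(t, x_i) dt  (closed form for the RBF kernel)
    s = ls * np.sqrt(np.pi / 2)
    return s * (erf((1 - x) / (np.sqrt(2) * ls)) + erf(x / (np.sqrt(2) * ls)))

f = lambda x: np.sin(2 * np.pi * x) + x**2        # toy integrand (hypothetical)
X = rng.uniform(0, 1, 8)                          # 8 function evaluations
y = f(X)

K = k(X, X) + 1e-10 * np.eye(len(X))
z = kernel_mean(X)
bq_estimate = z @ np.linalg.solve(K, y)           # posterior mean of the integral

print("BQ estimate :", bq_estimate)
print("true value  :", 1 / 3)                     # sin term integrates to 0
```

The estimate z @ K^{-1} y is the GP posterior mean of the integral, which is why BQ can be far more sample-efficient than Monte Carlo when the integrand is smooth.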