Showing 1 - 10 of 265 for the search: '"Kanamori, Takafumi"'
This paper studies stable learning methods for generative models that enable high-quality data generation. Noise injection is commonly used to stabilize learning. However, selecting a suitable noise distribution is challenging. Diffusion-GAN, a recent…
External link:
http://arxiv.org/abs/2410.20780
In statistical inference, we commonly assume that samples are independent and identically distributed from a probability distribution included in a pre-specified statistical model. However, such an assumption is often violated in practice. Even an un…
External link:
http://arxiv.org/abs/2410.20760
Author:
Irobe, Hiroo, Aoki, Wataru, Yamazaki, Kimihiro, Zhang, Yuhui, Nakagawa, Takumi, Waida, Hiroki, Wada, Yuichiro, Kanamori, Takafumi
Advancing defensive mechanisms against adversarial attacks in generative models is a critical research topic in machine learning. Our study focuses on a specific type of generative model: Variational Auto-Encoders (VAEs). Contrary to common beliefs…
External link:
http://arxiv.org/abs/2407.18632
We study policy evaluation of offline contextual bandits subject to unobserved confounders. Sensitivity analysis methods are commonly used to estimate the policy value under the worst-case confounding over a given uncertainty set. However, existing w…
External link:
http://arxiv.org/abs/2309.12450
Author:
Nakagawa, Takumi, Sanada, Yutaro, Waida, Hiroki, Zhang, Yuhui, Wada, Yuichiro, Takanashi, Kōsaku, Yamada, Tomonori, Kanamori, Takafumi
Representation learning has been increasing its impact on the research and practice of machine learning, since it enables learning representations that apply efficiently to various downstream tasks. However, recent works pay little attention to t…
External link:
http://arxiv.org/abs/2304.09552
Author:
Waida, Hiroki, Wada, Yuichiro, Andéol, Léo, Nakagawa, Takumi, Zhang, Yuhui, Kanamori, Takafumi
Contrastive learning is an efficient approach to self-supervised representation learning. Although recent studies have made progress in the theoretical understanding of contrastive learning, the investigation of how to characterize the clusters of the…
External link:
http://arxiv.org/abs/2304.00395
We consider the scenario of deep clustering, in which the available prior knowledge is limited. In this scenario, few existing state-of-the-art deep clustering methods can perform well on both non-complex-topology and complex-topology datasets. To a…
External link:
http://arxiv.org/abs/2303.03036
Author:
Andéol, Léo, Kawakami, Yusei, Wada, Yuichiro, Kanamori, Takafumi, Müller, Klaus-Robert, Montavon, Grégoire
Domain shifts in the training data are common in practical applications of machine learning; they occur, for instance, when the data comes from different sources. Ideally, an ML model should work well independently of these shifts, for example, by l…
External link:
http://arxiv.org/abs/2106.04923
Author:
Nakagawa, Takumi, Sanada, Yutaro, Waida, Hiroki, Zhang, Yuhui, Wada, Yuichiro, Takanashi, Kōsaku, Yamada, Tomonori, Kanamori, Takafumi
Published in:
In Neural Networks January 2024 169:226-241
Author:
Andéol, Léo, Kawakami, Yusei, Wada, Yuichiro, Kanamori, Takafumi, Müller, Klaus-Robert, Montavon, Grégoire
Published in:
In Neural Networks October 2023 167:233-243