Showing 1 - 10 of 10
for search: '"Ichikawa, Yuma"'
Author:
Ichikawa, Yuma, Arai, Yamato
Learning-based methods have gained attention as general-purpose solvers because they can automatically learn problem-specific heuristics, reducing the need for manually crafted heuristics. However, these methods often face challenges with scalability…
External link:
http://arxiv.org/abs/2409.02135
Author:
Namura, Nobuo, Ichikawa, Yuma
Recent advancements in time-series anomaly detection have relied on deep learning models to handle the diverse behaviors of time-series data. However, these models often suffer from unstable training and require extensive hyperparameter tuning…
External link:
http://arxiv.org/abs/2408.14756
Author:
Ichikawa, Yuma, Iwashita, Hiroaki
Finding the best solution is a common objective in combinatorial optimization (CO). In practice, directly handling constraints is often challenging, so they are incorporated into the objective function as penalties. However, balancing these penalties…
External link:
http://arxiv.org/abs/2402.02190
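As a generic illustration of the penalty formulation this abstract refers to (a minimal sketch, not the paper's method), equality constraints g_i(x) = 0 can be folded into the objective as quadratic penalties whose weights λ_i must be balanced by hand:

```python
import numpy as np

def penalized_objective(x, f, constraints, lambdas):
    """f(x) plus a quadratic penalty lambda_i * g_i(x)^2 per constraint."""
    penalty = sum(lam * g(x) ** 2 for lam, g in zip(lambdas, constraints))
    return f(x) + penalty

# Toy instance: minimize sum(x) over binary x subject to x[0] + x[1] = 1.
f = lambda x: float(np.sum(x))
g = lambda x: float(x[0] + x[1] - 1.0)

x = np.array([1.0, 0.0, 0.0])
value = penalized_objective(x, f, [g], [10.0])  # constraint satisfied, no penalty
```

Choosing the λ_i is the delicate part: too small and infeasible solutions win, too large and the search landscape becomes dominated by the penalty terms.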
This study proposes the "adaptive flip graph algorithm", which combines adaptive searches with the flip graph algorithm to find fast and efficient methods for matrix multiplication. The adaptive flip graph algorithm addresses the inherent limitations…
External link:
http://arxiv.org/abs/2312.16960
Author:
Ichikawa, Yuma, Hukushima, Koji
Variational autoencoders (VAEs) face a notorious problem wherein the variational posterior often aligns closely with the prior, a phenomenon known as posterior collapse, which hinders the quality of representation learning. To mitigate this problem…
External link:
http://arxiv.org/abs/2310.15440
Author:
Ichikawa, Yuma
Unsupervised learning (UL)-based solvers for combinatorial optimization (CO) train a neural network whose output provides a soft solution by directly optimizing the CO objective using a continuous relaxation strategy. These solvers offer several advantages…
External link:
http://arxiv.org/abs/2309.16965
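A minimal sketch of the continuous relaxation idea named in this abstract (a generic toy, not the paper's solver): binary variables of a small QUBO, min xᵀQx with x ∈ {0,1}ⁿ, are relaxed to soft values p = sigmoid(θ) ∈ (0,1)ⁿ, the relaxed objective pᵀQp is minimized by gradient descent, and rounding p yields the hard solution.

```python
import numpy as np

def solve_qubo_relaxed(Q, steps=500, lr=0.1, seed=0):
    """Gradient descent on the relaxed QUBO objective p^T Q p, p = sigmoid(theta)."""
    rng = np.random.default_rng(seed)
    theta = rng.normal(scale=0.1, size=Q.shape[0])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-theta))      # soft solution in (0, 1)^n
        grad_p = (Q + Q.T) @ p                # d(p^T Q p)/dp
        theta -= lr * grad_p * p * (1.0 - p)  # chain rule through the sigmoid
    return (p > 0.5).astype(int)              # round to a hard assignment

# Toy instance whose optimum is x = (1, 0) with objective value -1.
Q = np.array([[-1.0, 2.0],
              [2.0, 0.5]])
x = solve_qubo_relaxed(Q)
```

The appeal is that no labeled solutions are needed; the CO objective itself is the training signal, at the cost of a rounding step whose quality the relaxation must support.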
Author:
Ichikawa, Yuma, Hukushima, Koji
In the Variational Autoencoder (VAE), the variational posterior often aligns closely with the prior, which is known as posterior collapse and hinders the quality of representation learning. To mitigate this problem, an adjustable hyperparameter beta…
External link:
http://arxiv.org/abs/2309.07663
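The beta-weighted objective this abstract alludes to can be sketched generically (an illustration of the standard beta-VAE loss, not this paper's specific control scheme): the ELBO's KL term is scaled by beta, so beta < 1 weakens the prior-matching pressure that drives posterior collapse.

```python
import numpy as np

def gaussian_kl(mu, log_var):
    """Closed-form KL( N(mu, diag(exp(log_var))) || N(0, I) ), summed over dims."""
    return 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)

def beta_vae_loss(recon_error, mu, log_var, beta):
    """Reconstruction error plus the beta-weighted KL regularizer."""
    return recon_error + beta * gaussian_kl(mu, log_var)

# When q(z|x) already equals the prior N(0, I), the KL term vanishes.
mu = np.zeros(4)
log_var = np.zeros(4)
loss = beta_vae_loss(2.0, mu, log_var, beta=0.5)  # KL is 0 here, so loss == 2.0
```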
Self-learning Monte Carlo (SLMC) methods have recently been proposed to accelerate Markov chain Monte Carlo (MCMC) methods using a machine learning model. With latent generative models, SLMC methods realize efficient Monte Carlo updates with less autocorrelation…
External link:
http://arxiv.org/abs/2211.14024
Author:
Ichikawa, Yuma, Hukushima, Koji
Published in:
Journal of the Physical Society of Japan, 91, 114001 (2022)
Deep learning methods relying on multi-layered networks have been actively studied in a wide range of fields in recent years, and deep Boltzmann machines (DBMs) are one of them. In this study, a model of DBMs with some properties of weight parameters…
External link:
http://arxiv.org/abs/2205.01272
Self-learning Monte Carlo (SLMC) methods have recently been proposed to accelerate Markov chain Monte Carlo (MCMC) methods by using a machine learning model. With generative models having latent variables, SLMC methods realize efficient Monte Carlo updates…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::cd3abe6c3b365cc865723fcf2862f11e