Showing 1 - 10 of 111 677 for the search: '"So Sugiyama"'
Reinforcement Learning (RL) empowers agents to acquire various skills by learning from reward signals. Unfortunately, designing high-quality instance-level rewards often demands significant effort. An emerging alternative, RL with delayed reward, …
External link:
http://arxiv.org/abs/2410.20176
Listening to audio content, such as podcasts and audiobooks, is one of the ways people engage with knowledge. Listening affords people more mobility than reading by seeing, thus broadening learning opportunities. This study explores the potential …
External link:
http://arxiv.org/abs/2410.15023
Black-box optimization algorithms have been widely used in various machine learning problems, including reinforcement learning and prompt fine-tuning. However, directly optimizing the training loss value, as commonly done in existing black-box optimization …
External link:
http://arxiv.org/abs/2410.12457
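The entry above (arXiv:2410.12457) concerns black-box optimization of a training loss. As a generic illustration only, and not the paper's algorithm, the following Python sketch shows zeroth-order optimization of a black-box loss via a two-point Gaussian-smoothing gradient estimate; the quadratic loss and all names here are placeholders.

import numpy as np

def loss(x: np.ndarray) -> float:
    # Black-box objective: only function evaluations are available (placeholder quadratic).
    return float(np.sum((x - 1.0) ** 2))

def zeroth_order_descent(x0, steps=300, lr=0.05, mu=1e-2, seed=0):
    # Standard two-point zeroth-order scheme: estimate a directional gradient
    # from two function evaluations along a random Gaussian direction.
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(steps):
        u = rng.standard_normal(x.shape)
        g = (loss(x + mu * u) - loss(x - mu * u)) / (2.0 * mu) * u
        x -= lr * g
    return x

if __name__ == "__main__":
    x_opt = zeroth_order_descent(np.zeros(5))
    print("final loss:", loss(x_opt))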
Author:
Aritome, S., Futatsukawa, K., Hara, H., Hayasaka, K., Ibaraki, Y., Ichikawa, T., Iijima, T., Iinuma, H., Ikedo, Y., Imai, Y., Inami, K., Ishida, K., Kamal, S., Kamioka, S., Kawamura, N., Kimura, M., Koda, A., Koji, S., Kojima, K., Kondo, A., Kondo, Y., Kuzuba, M., Matsushita, R., Mibe, T., Miyamoto, Y., Nakamura, J. G., Nakazawa, Y., Ogawa, S., Okazaki, Y., Otani, M., Oyama, S., Saito, N., Sato, H., Sato, T., Sato, Y., Shimomura, K., Shioya, Z., Strasser, P., Sugiyama, S., Sumi, K., Suzuki, K., Takeuchi, Y., Tanida, M., Tojo, J., Ueda, K., Uetake, S., Xie, X. H., Yamada, M., Yamamoto, S., Yamazaki, T., Yamura, K., Yoshida, M., Yoshioka, T., Yotsuzuka, M.
Acceleration of positive muons from thermal energy to 100 keV has been demonstrated. Thermal muons were generated by resonant multi-photon ionization of muonium atoms emitted from a sheet of laser-ablated aerogel. The thermal muons were first …
External link:
http://arxiv.org/abs/2410.11367
Author:
Enouen, James, Sugiyama, Mahito
The log-linear model has received a significant amount of theoretical attention in previous decades and remains the fundamental tool used for learning probability distributions over discrete variables. Despite its large popularity in statistical mechanics …
External link:
http://arxiv.org/abs/2410.11964
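For reference, the textbook form of the log-linear model over discrete variables, which the entry above (arXiv:2410.11964) builds on, can be written as follows; this is the standard definition, not necessarily the paper's exact parametrization.

\[
  \log p_{\theta}(x) \;=\; \sum_{S} \theta_{S}\,\phi_{S}(x) \;-\; \psi(\theta),
  \qquad
  \psi(\theta) \;=\; \log \sum_{x} \exp\!\Big(\sum_{S} \theta_{S}\,\phi_{S}(x)\Big),
\]

where the \phi_{S} are feature (sufficient-statistic) functions over subsets of the variables and \psi(\theta) is the log-partition function that normalizes the distribution.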
Author:
Sunada, Yoshiki, Kono, Shingo, Ilves, Jesper, Sugiyama, Takanori, Suzuki, Yasunari, Okubo, Tsuyoshi, Tamate, Shuhei, Tabuchi, Yutaka, Nakamura, Yasunobu
Entanglement among a large number of qubits is a crucial resource for many quantum algorithms. Such many-body states have been efficiently generated by entangling a chain of itinerant photonic qubits in the optical or microwave domain. However, it …
External link:
http://arxiv.org/abs/2410.03345
Large language models (LLMs) have achieved impressive success in text-formatted learning problems, and most popular LLMs have been deployed in a black-box fashion. Meanwhile, fine-tuning is usually necessary for a specific downstream task to obtain …
External link:
http://arxiv.org/abs/2410.03124
Reconstructing probability distributions from experimental data is a crucial problem across various fields. An effective approach is to optimize a theoretical or computational model of the distribution under an objective functional that evaluates …
External link:
http://arxiv.org/abs/2410.01499
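The entry above (arXiv:2410.01499) describes optimizing a model of a distribution under an objective functional. A common textbook instance of such a functional, given here only as a generic example rather than the paper's choice, is the Kullback-Leibler divergence from the empirical distribution, whose minimization coincides with maximum-likelihood estimation:

\[
  \mathcal{F}[p_{\theta}] \;=\; \mathrm{KL}\big(\hat{p}_{\mathrm{data}} \,\|\, p_{\theta}\big)
  \;=\; \sum_{x} \hat{p}_{\mathrm{data}}(x) \log \frac{\hat{p}_{\mathrm{data}}(x)}{p_{\theta}(x)},
  \qquad
  \arg\min_{\theta} \mathcal{F}[p_{\theta}] \;=\; \arg\max_{\theta} \sum_{x} \hat{p}_{\mathrm{data}}(x) \log p_{\theta}(x).
\]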
Author:
Hu, Pingbang, Sugiyama, Mahito
We propose a novel and interpretable data augmentation method based on energy-based modeling and principles from information geometry. Unlike black-box generative models, which rely on deep neural networks, our approach replaces these non-interpretable …
External link:
http://arxiv.org/abs/2410.00718
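The entry above (arXiv:2410.00718) is based on energy-based modeling. For orientation, the standard form of an energy-based model (textbook definition, not the paper's specific construction) is:

\[
  p_{\theta}(x) \;=\; \frac{\exp\big(-E_{\theta}(x)\big)}{Z(\theta)},
  \qquad
  Z(\theta) \;=\; \sum_{x} \exp\big(-E_{\theta}(x)\big),
\]

where E_{\theta} is the energy function and Z(\theta) the partition function (an integral rather than a sum for continuous x).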
Recent advances in fine-tuning Vision-Language Models (VLMs) have witnessed the success of prompt tuning and adapter tuning, while the classic model fine-tuning on inherent parameters seems to be overlooked. It is believed that fine-tuning the parameters …
External link:
http://arxiv.org/abs/2409.16718