Showing 1 - 10 of 173 for search: '"Taki, Masato"'
Author:
He, Yan, Drozd, Vasyl, Ekawa, Hiroyuki, Escrig, Samuel, Gao, Yiming, Kasagi, Ayumi, Liu, Enqiang, Muneem, Abdul, Nakagawa, Manami, Nakazawa, Kazuma, Rappold, Christophe, Saito, Nami, Saito, Takehiko R., Sugimoto, Shohei, Taki, Masato, Tanaka, Yoshiki K., Wang, He, Yanai, Ayari, Yoshida, Junya, Zhang, Hongfei
A novel method was developed to detect double-$\Lambda$ hypernuclear events in nuclear emulsions using machine learning techniques. The object detection model, Mask R-CNN, was trained using images generated by Monte Carlo simulations, image processing…
External link:
http://arxiv.org/abs/2409.01657
Transformers have established themselves as the leading neural network model in natural language processing and are increasingly foundational in various domains. In vision, the MLP-Mixer model has demonstrated competitive performance, suggesting that…
External link:
http://arxiv.org/abs/2406.12220
Author:
Yasuki, Shunsuke, Taki, Masato
Recently, convolutional neural networks (CNNs) with large kernels have attracted much attention in the computer vision field, following the success of Vision Transformers. Large-kernel CNNs have been reported to perform well in downstream vision…
External link:
http://arxiv.org/abs/2403.06676
Author:
Ota, Toshihiro, Taki, Masato
In the last few years, the success of Transformers in computer vision has stimulated the discovery of many alternative models that compete with Transformers, such as the MLP-Mixer. Despite their weak inductive bias, these models have achieved performance…
External link:
http://arxiv.org/abs/2304.13061
Author:
Tatsunami, Yuki, Taki, Masato
Models equipped with multi-head self-attention (MHSA) have achieved notable performance in computer vision. Their computational complexity is proportional to the square of the number of pixels in the input feature maps, resulting in slow processing, especially when…
External link:
http://arxiv.org/abs/2303.03932
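The quadratic cost mentioned in the abstract above can be illustrated with a minimal sketch (not code from the paper): the self-attention matrix holds one score per pair of tokens, so its size grows with the square of the number of input pixels.

```python
# Illustrative sketch, not the paper's implementation: MHSA forms an
# attention matrix with one entry per pair of tokens, so the entry count
# scales quadratically with the number of pixels (tokens).
def attention_matrix_entries(height: int, width: int) -> int:
    n_tokens = height * width  # one token per pixel
    return n_tokens ** 2       # pairwise attention scores

# Doubling each spatial side (4x the pixels) gives 16x the entries.
small = attention_matrix_entries(14, 14)  # 196 tokens
large = attention_matrix_entries(28, 28)  # 784 tokens
print(large // small)  # 16
```

This is why high-resolution feature maps make self-attention slow, motivating the alternatives these papers study.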
Author:
Ishikawa, Shin-nosuke, Todo, Masato, Taki, Masato, Uchiyama, Yasunobu, Matsunaga, Kazunari, Lin, Peihsuan, Ogihara, Taiki, Yasui, Masao
We present a method of explainable artificial intelligence (XAI), "What I Know (WIK)", which provides additional information for verifying the reliability of a deep learning model by showing an instance from the training dataset that is similar to…
External link:
http://arxiv.org/abs/2302.01526
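The general idea behind example-based explanations like the one described above can be sketched as nearest-neighbor retrieval in feature space; this is a hedged illustration of the concept, not the paper's WIK implementation.

```python
import math

# Hedged sketch of example-based XAI (not the paper's WIK method):
# retrieve the training instance whose feature vector is most similar
# to the test input's, so a user can judge whether the model has seen
# inputs like this one before.
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def most_similar_training_example(test_feat, train_feats):
    # Return the index of the closest training feature vector.
    return max(range(len(train_feats)),
               key=lambda i: cosine_similarity(test_feat, train_feats[i]))

train = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]
print(most_similar_training_example([0.9, 0.1], train))  # 0
```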
Author:
Tatsunami, Yuki, Taki, Masato
In recent computer vision research, the advent of the Vision Transformer (ViT) has rapidly revolutionized various architectural design efforts: ViT achieved state-of-the-art image classification performance using the self-attention found in natural language…
External link:
http://arxiv.org/abs/2205.01972
Application of QUBO solver using black-box optimization to structural design for resonance avoidance
Published in:
Sci Rep 12, 12143 (2022)
Quadratic unconstrained binary optimization (QUBO) solvers can be applied to design an optimal structure that avoids resonance. QUBO algorithms running on classical or quantum devices have succeeded in some industrial applications. However, their application…
External link:
http://arxiv.org/abs/2204.04906
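A QUBO instance, as referenced in the abstract above, asks for the binary vector $x$ minimizing $x^\top Q x$. The following minimal sketch brute-forces a tiny instance to show the problem shape; it is not the paper's solver, which targets classical or quantum annealing hardware.

```python
import itertools

# Illustrative sketch (not the paper's solver): minimize x^T Q x over
# binary x by exhaustive search. Feasible only for tiny n; real QUBO
# solvers scale far beyond this.
def solve_qubo_brute_force(Q):
    n = len(Q)
    best_x, best_e = None, float("inf")
    for bits in itertools.product((0, 1), repeat=n):
        energy = sum(Q[i][j] * bits[i] * bits[j]
                     for i in range(n) for j in range(n))
        if energy < best_e:
            best_x, best_e = bits, energy
    return best_x, best_e

# Example: negative diagonal rewards setting a bit, positive off-diagonal
# penalizes setting both, so the optimum picks exactly one variable.
Q = [[-1.0, 2.0],
     [0.0, -1.0]]
x, e = solve_qubo_brute_force(Q)
print(x, e)  # (0, 1) -1.0
```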
Deep neural networks (DNNs) can accurately decode task-related information from brain activations. However, because of the nonlinearity of the DNN, the decisions it makes are hard to interpret. One of the promising approaches for explaining such…
External link:
http://arxiv.org/abs/2110.14927
Author:
Tatsunami, Yuki, Taki, Masato
For the past ten years, CNNs have reigned supreme in the world of computer vision, but recently, Transformers have been on the rise. However, the quadratic computational cost of self-attention has become a serious problem in practical applications. There…
External link:
http://arxiv.org/abs/2108.04384