Showing 1 - 10 of 32 for search: '"Kanai, Sekitoshi"'
Author:
Kanai, Sekitoshi, Ida, Yasutoshi, Adachi, Kazuki, Uchida, Mihiro, Yoshida, Tsukasa, Yamaguchi, Shin'ya
This study investigates a method to evaluate time-series datasets in terms of the performance of deep neural networks (DNNs) with state space models (deep SSMs) trained on the dataset. SSMs have attracted attention as components inside DNNs to address …
External link:
http://arxiv.org/abs/2408.16261
While fine-tuning is a de facto standard method for training deep neural networks, it still suffers from overfitting when using small target datasets. Previous methods improve fine-tuning performance by maintaining knowledge of the source datasets or …
External link:
http://arxiv.org/abs/2403.10097
Author:
Suzuki, Satoshi, Yamaguchi, Shin'ya, Takeda, Shoichiro, Kanai, Sekitoshi, Makishima, Naoki, Ando, Atsushi, Masumura, Ryo
This paper addresses the tradeoff between standard accuracy on clean examples and robustness against adversarial examples in deep neural networks (DNNs). Although adversarial training (AT) improves robustness, it degrades the standard accuracy, thus …
External link:
http://arxiv.org/abs/2308.16454
This paper investigates methods for improving generative data augmentation for deep learning. Generative data augmentation leverages the synthetic samples produced by generative models as an additional dataset for classification with small dataset settings …
External link:
http://arxiv.org/abs/2307.13899
Regularized discrete optimal transport (OT) is a powerful tool to measure the distance between two discrete distributions that have been constructed from data samples on two different domains. While it has a wide range of applications in machine learning …
External link:
http://arxiv.org/abs/2303.07597
Gate functions in recurrent models, such as LSTMs and GRUs, play a central role in learning various time scales in modeling time-series data by using a bounded activation function. However, it is difficult to train gates to capture extremely long time scales …
External link:
http://arxiv.org/abs/2210.01348
Author:
Kanai, Sekitoshi, Yamaguchi, Shin'ya, Yamada, Masanori, Takahashi, Hiroshi, Ohno, Kentaro, Ida, Yasutoshi
This paper proposes a new loss function for adversarial training. Since adversarial training has difficulties, e.g., the necessity of high model capacity, focusing on important data points by weighting the cross-entropy loss has attracted much attention. However, …
External link:
http://arxiv.org/abs/2207.10283
Transfer learning is crucial in training deep neural networks on new target tasks. Current transfer learning methods always assume at least one of (i) source and target task label spaces overlap, (ii) source datasets are available, and (iii) target n…
External link:
http://arxiv.org/abs/2204.12833
Author:
Yamaguchi, Shin'ya, Kanai, Sekitoshi
Generative adversarial networks built from deep convolutional neural networks (GANs) lack the ability to exactly replicate the high-frequency components of natural images. To alleviate this issue, we introduce two novel training techniques called fre…
External link:
http://arxiv.org/abs/2106.02343
Deep neural networks are vulnerable to adversarial attacks. Recent studies on adversarial robustness focus on the loss landscape in the parameter space, since it is related to optimization and generalization performance. These studies conclude that …
External link:
http://arxiv.org/abs/2103.01400