A Regularization Post Layer: An Additional Way How to Make Deep Neural Networks Robust
Author: Josef Psutka, Jan Zelinka, Daniel Soutner, Jan Vaněk
Year of publication: 2017
Source: Statistical Language and Speech Processing (SLSP), ISBN: 9783319684550
Description: Neural networks (NNs) are prone to overfitting, and deep neural networks are especially so when the training data are not abundant. Several techniques help to prevent overfitting, e.g., L1/L2 regularization, unsupervised pre-training, early stopping, dropout, bootstrapping, or cross-validation model aggregation. In this paper, we propose a regularization post-layer that may be combined with prior techniques and brings additional robustness to the NN. We trained the regularization post-layer in the cross-validation (CV) aggregation scenario: the CV held-out folds were used to train an additional neural network post-layer that boosts the network's robustness. We tested various post-layer topologies and compared the results with other regularization techniques. As a benchmark task, we selected TIMIT phone recognition, a well-known and still popular task where the training data are limited and the regularization techniques used play a key role. However, the regularization post-layer is a general method, and it may be employed in any classification task.
Database: OpenAIRE
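The cross-validation aggregation scenario described in the abstract can be illustrated with a minimal sketch. The snippet below is not the authors' implementation; it only outlines the idea under stated assumptions (scikit-learn MLPs as stand-ins for the DNNs, synthetic toy data instead of TIMIT features, and log-posteriors as post-layer inputs).

```python
# Minimal sketch (not the authors' code) of a regularization post-layer
# trained in a CV aggregation scenario:
#   1. split the training data into K folds,
#   2. train one base network per fold on the remaining K-1 folds,
#   3. collect each base network's predictions on its held-out fold,
#   4. train a small post-layer network on those held-out predictions,
#   5. at test time, average the base networks' outputs and pass them
#      through the post-layer.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import KFold, train_test_split
from sklearn.neural_network import MLPClassifier

# Toy data standing in for acoustic features and phone labels (assumption).
X, y = make_classification(n_samples=2000, n_features=40, n_informative=20,
                           n_classes=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=0)

n_classes = len(np.unique(y_train))
kf = KFold(n_splits=5, shuffle=True, random_state=0)

base_models = []
heldout_preds = np.zeros((len(X_train), n_classes))

for train_idx, heldout_idx in kf.split(X_train):
    base = MLPClassifier(hidden_layer_sizes=(128, 128), max_iter=200,
                         random_state=0)
    base.fit(X_train[train_idx], y_train[train_idx])
    # Held-out posteriors become the training inputs for the post-layer.
    heldout_preds[heldout_idx] = base.predict_proba(X_train[heldout_idx])
    base_models.append(base)

# The regularization post-layer: a small network mapping the base models'
# held-out posteriors to the final class decision. Feeding log-posteriors
# is a choice made in this sketch, not a claim about the paper.
post_layer = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300,
                           random_state=0)
post_layer.fit(np.log(heldout_preds + 1e-8), y_train)

# Test time: average the CV models' posteriors, then apply the post-layer.
test_preds = np.mean([m.predict_proba(X_test) for m in base_models], axis=0)
accuracy = post_layer.score(np.log(test_preds + 1e-8), y_test)
print(f"post-layer test accuracy: {accuracy:.3f}")
```

Because the post-layer only ever sees predictions made on data the base networks were not trained on, it learns to correct for their overconfidence, which is the source of the additional robustness the abstract refers to.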