Number of necessary training examples for Neural Networks with different number of trainable parameters.
Author: Götz TI; Fraunhofer IIS, Fraunhofer Institute for Integrated Circuits IIS, Erlangen, Germany; Department of Internal Medicine 3, Rheumatology & Immunology, University Hospital Erlangen, Erlangen, Germany; Department of Industrial Engineering and Health, Technical University of Applied Sciences Amberg-Weiden, Weiden, Germany. Göb S; Fraunhofer IIS, Fraunhofer Institute for Integrated Circuits IIS, Erlangen, Germany. Sawant S; Fraunhofer IIS, Fraunhofer Institute for Integrated Circuits IIS, Erlangen, Germany. Erick XF; Fraunhofer IIS, Fraunhofer Institute for Integrated Circuits IIS, Erlangen, Germany. Wittenberg T; Fraunhofer IIS, Fraunhofer Institute for Integrated Circuits IIS, Erlangen, Germany. Schmidkonz C; Clinic of Nuclear Medicine, University Hospital Erlangen, 91054 Erlangen, Germany; Department of Industrial Engineering and Health, Technical University of Applied Sciences Amberg-Weiden, Weiden, Germany. Tomé AM; IEETA, DETI, Universidade de Aveiro, 3810-193 Aveiro, Portugal. Lang EW; CIML Group, Biophysics, University of Regensburg, 93040 Regensburg, Germany. Ramming A; Department of Internal Medicine 3, Rheumatology & Immunology, University Hospital Erlangen, Erlangen, Germany.
Language: English
Source: Journal of Pathology Informatics [J Pathol Inform] 2022 Jul 06; Vol. 13, pp. 100114. Date of Electronic Publication: 2022 Jul 06 (Print Publication: 2022).
DOI: 10.1016/j.jpi.2022.100114
Abstract: In this work, network complexity was to be reduced with a concomitant reduction in the number of necessary training examples. The focus was therefore on how appropriate evaluation metrics depend on the number of adjustable parameters of the deep neural network under consideration. The data set comprised Hematoxylin and Eosin (H&E) stained cell images provided by various clinics. We used a deep convolutional neural network to relate a model's complexity, its concomitant set of parameters, and the size of the training sample necessary to achieve a given classification accuracy. The complexity of the deep neural networks was reduced by pruning a certain fraction of the filters in the network. As expected, the unpruned neural network showed the best performance. The network with the highest number of trainable parameters achieved, within the estimated standard error of the optimized cross-entropy loss, the best results up to 30% pruning. Strongly pruned networks are hardly viable, and the classification accuracy declines quickly with a decreasing number of training patterns. However, up to a pruning ratio of 40%, we found comparable performance of pruned and unpruned deep convolutional neural networks (DCNN) and densely connected convolutional networks (DCCN). (© 2022 Published by Elsevier Inc. on behalf of Association for Pathology Informatics.)
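The abstract describes reducing network complexity by pruning a fraction of convolutional filters. The paper's exact pruning criterion is not stated in this record, so the sketch below assumes the common magnitude-based approach: rank each filter by its L1 norm and drop the weakest fraction. The function name and shapes are illustrative, not taken from the paper.

```python
import numpy as np

def prune_filters(weights, ratio):
    """Drop the `ratio` fraction of conv filters with the smallest L1 norm.

    weights: array of shape (out_channels, in_channels, k, k)
    Returns the pruned weight tensor and the sorted indices of kept filters.
    """
    # L1 norm of each output filter (sum of absolute weights).
    norms = np.abs(weights).reshape(weights.shape[0], -1).sum(axis=1)
    n_keep = int(round(weights.shape[0] * (1.0 - ratio)))
    # Keep the n_keep strongest filters, preserving their original order.
    keep = np.sort(np.argsort(norms)[::-1][:n_keep])
    return weights[keep], keep

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 3, 3, 3))          # a hypothetical 64-filter conv layer
pruned, kept = prune_filters(w, ratio=0.3)  # 30% pruning, as in the abstract
print(pruned.shape)                          # (45, 3, 3, 3)
```

In a full network, removing output filters of one layer also requires removing the matching input channels of the next layer before retraining the smaller model.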
Database: MEDLINE
External link: