Identification of Complex Mixtures for Raman Spectroscopy Using a Novel Scheme Based on a New Multi-Label Deep Neural Network
Author: | Pronthep Pipitsunthonsan, Liangrui Pan, Chalongrat Daengngam, Mitchai Chongcheawchamnan, Sittiporn Channumsin, Suwat Sreesawet |
Year of publication: | 2021 |
Subject: |
Artificial neural network; Noise measurement; Computer science; Analytical chemistry; Feature extraction; Wavelet transform; Pattern recognition; White noise; Natural sciences; Chemical sciences; Feature (computer vision); Artificial intelligence; Electrical and Electronic Engineering; Instrumentation; Hamming code; Continuous wavelet transform |
Source: | IEEE Sensors Journal, 21:10834-10843 |
ISSN: | 2379-9153, 1530-437X |
DOI: | 10.1109/jsen.2021.3059849 |
Description: | With a noisy environment caused by fluorescence and additive white noise, as well as complicated spectral fingerprints, the identification of complex mixture materials remains a significant challenge in Raman spectroscopy applications. This paper proposes a new scheme based on the continuous wavelet transform (CWT) and a deep network for classifying complex mixtures. The scheme first transforms the noisy Raman spectra into two-dimensional scale maps using the CWT. A multi-label deep neural network (MLDNN) model is then used for material classification. The proposed model accelerates feature extraction and expands the feature maps, using a global average pooling layer, and a sigmoid activation is used in the last layer of the model. The MLDNN model was trained, validated, and tested on data collected from samples prepared from palm oil substances. During training and validation, data augmentation was applied to overcome data imbalance and enrich the diversity of the Raman spectra. On the test set, the MLDNN surpassed VGG16, VGG19, ResNet50, DenseNet121, InceptionResNetV2, and MobileNetV2 in average precision, macro-averaged F1, and micro-averaged F1, scoring 0.9900, 0.9874, and 0.9874, respectively. For Hamming loss, one-error, coverage, and ranking loss, the MLDNN model reached 0.014, 0.020, 1.738, and 0.000, respectively; the other models score higher on all four of these metrics. In the comparison of AUC values, the MLDNN outperforms the other models on every label. The average detection time of the MLDNN is 5.3123 s, faster than those of VGG16, VGG19, ResNet50, DenseNet121, InceptionResNetV2, and MobileNetV2, which are 7.1245, 7.5046, 8.6300, 12.3294, 16.1131, and 6.6451 s, respectively. (See the illustrative sketches after this record for the CWT preprocessing, the multi-label output head, and the evaluation metrics.) |
Database: | OpenAIRE |
External link: |
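
The record above describes the pipeline only at the level of the abstract; none of the authors' code is included. The sketch below shows, under stated assumptions, how a 1-D Raman spectrum can be turned into the kind of two-dimensional CWT scale map the abstract describes. The use of PyWavelets, the Morlet wavelet, the 64 scales, and the min-max normalisation are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch: turn a 1-D Raman spectrum into a 2-D CWT scale map.
# Assumptions (not from the paper): PyWavelets, a Morlet wavelet,
# 64 scales, and min-max normalisation of the resulting scalogram.
import numpy as np
import pywt

def spectrum_to_scale_map(spectrum: np.ndarray, num_scales: int = 64) -> np.ndarray:
    """Return a (num_scales, len(spectrum)) scale map suitable as a 2-D model input."""
    scales = np.arange(1, num_scales + 1)
    coeffs, _freqs = pywt.cwt(spectrum, scales, wavelet="morl")
    scale_map = np.abs(coeffs)
    # Min-max normalise so the map behaves like an image-style input.
    return (scale_map - scale_map.min()) / (scale_map.max() - scale_map.min() + 1e-12)

# Example on a synthetic spectrum: two Gaussian "peaks" plus additive white noise.
x = np.linspace(0.0, 1.0, 1024)
spectrum = np.exp(-((x - 0.3) / 0.01) ** 2) + 0.5 * np.exp(-((x - 0.7) / 0.02) ** 2)
spectrum += 0.05 * np.random.randn(x.size)
print(spectrum_to_scale_map(spectrum).shape)  # (64, 1024)
```

The abstract names two architectural elements explicitly: a global average pooling layer and a sigmoid-activated multi-label output. The Keras sketch below wires those two elements onto a placeholder convolutional trunk; it is not the paper's MLDNN, and the layer sizes, the four-label output, and the binary cross-entropy loss are assumptions.

```python
# Minimal sketch of a multi-label head: global average pooling + sigmoid output.
# This is NOT the paper's MLDNN; the convolutional trunk and label count are placeholders.
import tensorflow as tf

NUM_LABELS = 4  # hypothetical: one label per mixture component

model = tf.keras.Sequential([
    tf.keras.Input(shape=(64, 1024, 1)),                  # CWT scale map as a 1-channel image
    tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu", padding="same"),
    tf.keras.layers.GlobalAveragePooling2D(),              # named in the abstract
    tf.keras.layers.Dense(NUM_LABELS, activation="sigmoid"),  # sigmoid multi-label output
])

# Binary cross-entropy treats each label independently, as is usual for multi-label tasks.
model.compile(
    optimizer="adam",
    loss="binary_crossentropy",
    metrics=[tf.keras.metrics.AUC(multi_label=True, num_labels=NUM_LABELS)],
)
```

Finally, the metrics quoted in the abstract (average precision, macro/micro F1, Hamming loss, one-error, coverage, and ranking loss) can all be computed with scikit-learn, except one-error, which is written out by hand below. The 0.5 decision threshold and the toy labels are assumptions.

```python
# Minimal sketch of the multi-label metrics quoted in the abstract, using scikit-learn.
# The 0.5 decision threshold and the toy data are assumptions.
import numpy as np
from sklearn.metrics import (average_precision_score, coverage_error, f1_score,
                             hamming_loss, label_ranking_loss)

def one_error(y_true: np.ndarray, y_score: np.ndarray) -> float:
    """Fraction of samples whose top-ranked label is not one of the true labels."""
    top = np.argmax(y_score, axis=1)
    return float(np.mean(y_true[np.arange(len(y_true)), top] == 0))

y_true = np.array([[1, 0, 1, 0], [0, 1, 0, 0], [1, 1, 0, 1]])
y_score = np.array([[0.9, 0.1, 0.8, 0.2], [0.2, 0.7, 0.1, 0.3], [0.8, 0.6, 0.4, 0.9]])
y_pred = (y_score >= 0.5).astype(int)

print("Hamming loss :", hamming_loss(y_true, y_pred))
print("One-error    :", one_error(y_true, y_score))
print("Coverage     :", coverage_error(y_true, y_score))
print("Ranking loss :", label_ranking_loss(y_true, y_score))
print("Avg precision:", average_precision_score(y_true, y_score))
print("F1 macro     :", f1_score(y_true, y_pred, average="macro"))
print("F1 micro     :", f1_score(y_true, y_pred, average="micro"))
```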