Author: |
Stephan Algermissen, Max Hörnlein |
Language: |
English |
Year of publication: |
2021 |
Subject: |
|
Source: |
Applied Mechanics, Vol 2, Iss 2, Pp 257-273 (2021) |
Document type: |
article |
ISSN: |
2673-3161 |
DOI: |
10.3390/applmech2020016 |
Description: |
Human gait is highly individual and may serve as a biometric to identify people in camera recordings. Comparable results can be achieved using the acoustic signature of human footstep sounds. Compared to visual systems, this acoustic approach requires less installation space and allows the use of cost-efficient microphones. In this paper, a method for person identification based on footstep sounds is proposed. First, step sounds are isolated from microphone recordings and separated into 500 ms samples. The samples are transformed with a sliding window into mel-frequency cepstral coefficients (MFCC). The result is represented as an image that serves as input to a convolutional neural network (CNN). The dataset for training and validating the CNN was recorded with five subjects in the acoustic lab of DLR. These experiments yielded a total of 1125 steps. The validation of the CNN reveals a minimum F1-score of 0.94 across all five classes and an accuracy of 0.98. To verify the functionality of the proposed CNN, the Grad-CAM method is applied to visualize the basis of its decisions. Subsequently, two challenges for practical implementations, noise and different footwear, are discussed using experimental data. |
Database: |
Directory of Open Access Journals |
External link: |
|
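The feature-extraction step summarized in the description (a 500 ms footstep sample transformed with a sliding window into MFCCs and treated as an image for a CNN) can be sketched as below. This is a minimal textbook-style MFCC computation in numpy/scipy, not the paper's implementation; the sampling rate, window length, hop size, and filter count are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dct

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    """Triangular mel filters mapped onto the FFT bins."""
    mel_points = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_points) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(n_filters):
        left, center, right = bins[i], bins[i + 1], bins[i + 2]
        for k in range(left, center):
            fb[i, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fb[i, k] = (right - k) / max(right - center, 1)
    return fb

def mfcc_image(signal, sr, n_mfcc=13, n_fft=512, hop=256, n_filters=26):
    """Sliding-window MFCCs: one column per window position,
    yielding a 2-D array a CNN can consume like an image."""
    n_frames = 1 + (len(signal) - n_fft) // hop
    window = np.hanning(n_fft)
    frames = np.stack([signal[i * hop:i * hop + n_fft] * window
                       for i in range(n_frames)])
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2         # power spectrum
    mel_energy = power @ mel_filterbank(n_filters, n_fft, sr).T
    log_mel = np.log(mel_energy + 1e-10)                    # avoid log(0)
    # The DCT decorrelates the log-mel energies; keep the first
    # n_mfcc coefficients of each frame.
    return dct(log_mel, type=2, norm="ortho", axis=1)[:, :n_mfcc].T

sr = 16000                                                  # assumed sampling rate (Hz)
sample = np.random.default_rng(0).standard_normal(sr // 2)  # stand-in for one 500 ms step
image = mfcc_image(sample, sr)
print(image.shape)                                          # (n_mfcc, window positions)
```

With these assumed parameters, a 500 ms sample at 16 kHz produces a 13 x 30 coefficient array; in the paper this kind of 2-D representation is the input image to the CNN classifier.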