Sound Can Help Us See More Clearly
| Author: | Yongsheng Li, Tengfei Tu, Hua Zhang, Jishuai Li, Zhengping Jin, Qiaoyan Wen |
|---|---|
| Language: | English |
| Year of publication: | 2022 |
| Subject: | Chemical technology (TP1-1185); Computing Methodologies: Image Processing and Computer Vision; two-stream network; Biochemistry; Atomic and Molecular Physics and Optics; Article; computer vision; Analytical Chemistry; Pattern Recognition, Automated; Sound; sound texture; Neural Networks, Computer; Electrical and Electronic Engineering; Instrumentation |
| Source: | Sensors (Basel, Switzerland), Vol. 22, Iss. 2, p. 599 (2022) |
| ISSN: | 1424-8220 |
| Description: | In the field of video action classification, existing network frameworks often use only video frames as input. When the object involved in an action does not appear prominently in the frame, such networks cannot classify it accurately. We introduce a new neural network structure that uses sound to assist with such tasks. The raw sound wave is converted into a sound texture that serves as the network input. Furthermore, to exploit the rich modal information (images and sound) in video, we designed and used a two-stream framework. In this work, we hypothesize that sound data can help solve action recognition tasks. To demonstrate this, we designed a neural network based on sound texture to perform video action classification. We then fused this network with a deep neural network that uses consecutive video frames, constructing a two-stream network called A-IN. Finally, on the Kinetics dataset, we compared the proposed A-IN against an image-only network. The experimental results show that the recognition accuracy of the two-stream model that uses sound features is 7.6% higher than that of the network using video frames alone. This proves that rational use of the rich information in video can improve classification performance. (Illustrative code sketches of this design follow the record.) |
| Database: | OpenAIRE |
| External link: | |
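The abstract says the raw sound wave is converted into a "sound texture" before being fed to the audio stream, but the record does not specify the exact transform. Below is a minimal sketch assuming a log-mel spectrogram as a stand-in for that representation; the function name, sample rate, and FFT parameters are illustrative, not taken from the paper.

```python
import librosa
import numpy as np

def waveform_to_texture(path, sr=16000, n_mels=64):
    """Load a mono waveform and convert it to a 2-D time-frequency map
    that a CNN stream can consume like an image (a stand-in for the
    paper's sound-texture input)."""
    y, sr = librosa.load(path, sr=sr, mono=True)
    # Mel-scaled power spectrogram: rows are mel bands, columns are time frames.
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=1024,
                                         hop_length=512, n_mels=n_mels)
    # Log compression, as is common for audio inputs to neural networks.
    return librosa.power_to_db(mel, ref=np.max)
```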
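The record also describes fusing a sound-texture network with a frame-based network into a two-stream model (A-IN). Here is a minimal PyTorch sketch of that idea under stated assumptions: the backbones, feature sizes, late-fusion-by-concatenation choice, and the 400-class output (Kinetics-400) are hypothetical, since the record gives no architectural details.

```python
import torch
import torch.nn as nn

def conv_stream(in_ch):
    """A small CNN backbone; stands in for whatever backbone A-IN actually uses."""
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),  # (B, 64, 1, 1) regardless of input size
        nn.Flatten(),             # (B, 64)
    )

class TwoStreamNet(nn.Module):
    """Late-fusion two-stream classifier: one stream over video frames,
    one over sound-texture maps, features concatenated before the head."""
    def __init__(self, num_classes=400):  # 400 assumes Kinetics-400
        super().__init__()
        self.frame_stream = conv_stream(in_ch=3)  # RGB frame input
        self.sound_stream = conv_stream(in_ch=1)  # sound-texture input
        self.head = nn.Linear(64 + 64, num_classes)

    def forward(self, frames, textures):
        f = self.frame_stream(frames)    # (B, 64)
        s = self.sound_stream(textures)  # (B, 64)
        return self.head(torch.cat([f, s], dim=1))

# Usage with dummy tensors: a batch of RGB frames and sound textures.
model = TwoStreamNet()
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 1, 64, 128))
print(logits.shape)  # torch.Size([2, 400])
```

Concatenating pooled per-stream features is one common fusion strategy; the paper may instead average class scores or fuse at an intermediate layer, which this sketch does not attempt to reproduce.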