Popis: |
Driver motion recognition is a key factor in ensuring the safety of driving systems. This paper presents a novel system for learning and predicting driver motions, along with N-DriverMotion, a newly collected event-based (720 × 720) dataset for training a neuromorphic vision system. The system comprises an event-based camera that generates a driver motion dataset representing spike inputs and efficient spiking neural networks (SNNs) that are effective in training on and predicting the driver's gestures. The event dataset consists of 13 driver motion categories classified by direction (front, side), illumination (bright, moderate, dark), and participant. A novel, optimized four-layer convolutional spiking neural network (CSNN) was trained directly on the event streams without any time-consuming preprocessing, which enables efficient adaptation to energy- and resource-constrained on-device SNNs for real-time inference on high-resolution event-based streams. Compared with recent gesture recognition systems that adopt neural networks for vision processing, the proposed neuromorphic vision system achieves a competitive accuracy of 94.04% on the 13-class classification task and 97.24% on classifying unexpected abnormal driver motions with the CSNN architecture. Additionally, when deployed on Intel Loihi 2 neuromorphic chips, the model achieved an energy-delay product (EDP) 20,721 times lower than that of a non-edge GPU and 541 times lower than that of an edge-purpose GPU. Our proposed CSNN and the dataset can be used to develop safer and more efficient driver-monitoring systems for autonomous vehicles or edge devices requiring an efficient neural network architecture.
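
As a rough illustration only: the description above does not specify the layer configuration, training framework, or event encoding, so the sketch below shows one way a four-layer CSNN (three spiking convolutional layers plus a spiking linear readout) could consume 720 × 720 event frames and emit per-class spike counts for the 13 motion categories. The snnTorch library, the channel widths, the leak factor beta, and the two-channel ON/OFF event-frame encoding are assumptions for this sketch, not details taken from the paper.

    import torch
    import torch.nn as nn
    import snntorch as snn
    from snntorch import surrogate

    class FourLayerCSNN(nn.Module):
        """Illustrative four-layer CSNN (3 spiking conv layers + spiking readout).
        Layer sizes and hyperparameters are assumptions, not the authors' design."""

        def __init__(self, num_classes=13, beta=0.9):
            super().__init__()
            grad = surrogate.fast_sigmoid()
            # 2 input channels: ON/OFF event polarities (a common DVS convention)
            self.conv1 = nn.Conv2d(2, 8, kernel_size=3, stride=2, padding=1)    # 720 -> 360
            self.lif1 = snn.Leaky(beta=beta, spike_grad=grad)
            self.conv2 = nn.Conv2d(8, 16, kernel_size=3, stride=2, padding=1)   # 360 -> 180
            self.lif2 = snn.Leaky(beta=beta, spike_grad=grad)
            self.conv3 = nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1)  # 180 -> 90
            self.lif3 = snn.Leaky(beta=beta, spike_grad=grad)
            self.pool = nn.AdaptiveAvgPool2d(4)                                 # 90 -> 4
            self.fc = nn.Linear(32 * 4 * 4, num_classes)
            self.lif_out = snn.Leaky(beta=beta, spike_grad=grad)

        def forward(self, x):
            # x: [T, B, 2, 720, 720] event frames binned over T time steps
            mem1 = self.lif1.init_leaky()
            mem2 = self.lif2.init_leaky()
            mem3 = self.lif3.init_leaky()
            mem4 = self.lif_out.init_leaky()
            spike_count = 0
            for t in range(x.shape[0]):
                spk1, mem1 = self.lif1(self.conv1(x[t]), mem1)
                spk2, mem2 = self.lif2(self.conv2(spk1), mem2)
                spk3, mem3 = self.lif3(self.conv3(spk2), mem3)
                cur4 = self.fc(self.pool(spk3).flatten(1))
                spk4, mem4 = self.lif_out(cur4, mem4)
                spike_count = spike_count + spk4
            return spike_count  # argmax over classes gives the predicted motion

In a typical directly trained setup of this kind, the summed output spikes would be passed to a rate-based loss such as cross-entropy, with gradients flowing through the surrogate spike function; whether the paper uses this particular training scheme is not stated in the description above.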