Real-Time Gesture Interface Based on Event-Driven Processing From Stereo Silicon Retinas
Authors: Chang-Woo Shin, Jun Haeng Lee, Hyunsurk Ryu, Michael Pfeiffer, Paul K. J. Park, Byung Chang Kang, Tobi Delbruck
Year of Publication: 2014
Subject: Silicon; Computer Networks and Communications; Computer Science; Feature Vector; Image Processing and Computer Vision; Retina; Pattern Recognition, Automated; Computer Systems; Artificial Intelligence; Humans; Computer Vision; Hidden Markov Model; Spiking Neural Network; Gestures; Event (Computing); Computer Science Applications; Neuromorphic Engineering; Gesture Recognition; Brain-Computer Interfaces; Pattern Recognition (Psychology); Photic Stimulation; Software
Source: IEEE Transactions on Neural Networks and Learning Systems, vol. 25, pp. 2250–2263
ISSN: 2162-237X (print), 2162-2388 (electronic)
DOI: 10.1109/TNNLS.2014.2308551
Description: We propose a real-time hand gesture interface that combines a stereo pair of biologically inspired, event-based dynamic vision sensor (DVS) silicon retinas with neuromorphic event-driven postprocessing. Compared with conventional vision or 3-D sensors, DVSs output asynchronous and sparse events in response to motion, which eliminates the need to extract movements from sequences of video frames and allows significantly faster and more energy-efficient processing. In addition, the rate of input events depends on the observed movements and thus provides an additional cue for solving the gesture spotting problem, i.e., finding the onsets and offsets of gestures. We propose a postprocessing framework based on spiking neural networks that processes the events received from the DVSs in real time and provides an architecture for future implementation in neuromorphic hardware devices. The motion trajectories of moving hands are detected by spatiotemporally correlating the stereoscopically verged asynchronous events from the DVSs using leaky integrate-and-fire (LIF) neurons. Adaptive thresholds of the LIF neurons segment the trajectories, which are then translated into discrete and finite feature vectors. The feature vectors are classified with hidden Markov models, using a separate Gaussian mixture model to spot irrelevant transition gestures. The disparity information from stereo vision is used to adapt the LIF neuron parameters so that recognition is invariant to the user's distance from the sensor, and it also helps to filter out movements in the background of the user. Furthermore, exploiting the high dynamic range of DVSs allows gesture recognition over a 60-dB range of scene illuminance. The system achieves recognition rates well over 90% for naïve users under a variety of conditions with static and dynamic backgrounds. (A minimal code sketch of the event-driven LIF stage follows this record.)
Database: OpenAIRE
External Link:
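
The abstract's core mechanism is an event-driven grid of LIF neurons with adaptive thresholds. Below is a minimal sketch, not the authors' implementation: it shows one way a grid of leaky integrate-and-fire neurons could spatiotemporally correlate asynchronous DVS events and use an adaptive firing threshold to segment motion trajectories. The `Event` layout, the single shared threshold, and all constants are illustrative assumptions.

```python
# Minimal sketch (assumptions throughout): per-pixel LIF neurons driven by
# asynchronous DVS events, with one shared adaptive threshold for brevity.
import math
from dataclasses import dataclass

@dataclass
class Event:
    x: int      # pixel column of the DVS event
    y: int      # pixel row of the DVS event
    t: float    # timestamp in seconds

class LIFGrid:
    def __init__(self, width, height, tau=0.02, v_th0=1.0,
                 th_gain=0.05, th_decay=0.5):
        self.tau = tau            # membrane leak time constant (s)
        self.v = [[0.0] * width for _ in range(height)]       # membrane potentials
        self.last_t = [[0.0] * width for _ in range(height)]  # per-pixel update times
        self.v_th = v_th0         # adaptive threshold (shared here for brevity)
        self.v_th0 = v_th0        # resting threshold value
        self.th_gain = th_gain    # threshold increment per output spike
        self.th_decay = th_decay  # threshold relaxation rate (1/s)
        self.t_prev = 0.0         # time of the most recent event, for threshold decay

    def process(self, ev: Event, weight=0.3):
        """Integrate one DVS event; return (x, y) if the neuron fires."""
        # Event-driven update: work happens only when an event arrives,
        # applying the exponential leak accumulated since the last event.
        dt = ev.t - self.last_t[ev.y][ev.x]
        self.v[ev.y][ev.x] = self.v[ev.y][ev.x] * math.exp(-dt / self.tau) + weight
        self.last_t[ev.y][ev.x] = ev.t

        # Threshold relaxes toward its resting value between events, so
        # sparse input lowers the bar and dense input raises it.
        gap = ev.t - self.t_prev
        self.v_th = self.v_th0 + (self.v_th - self.v_th0) * math.exp(-self.th_decay * gap)
        self.t_prev = ev.t

        if self.v[ev.y][ev.x] >= self.v_th:
            self.v[ev.y][ev.x] = 0.0   # reset membrane after spiking
            self.v_th += self.th_gain  # adapt: busier scenes need a higher bar
            return (ev.x, ev.y)        # output spike = one trajectory sample
        return None
```

In the paper, the resulting output-spike trajectories are quantized into discrete feature vectors and classified with hidden Markov models, while a separate Gaussian mixture model rejects irrelevant transition movements between gestures.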