Learning to Detect Event Sequences in Surveillance Streams at Very Low Frame Rate.

Author: Lombardi, Paolo; Versino, Cristina
Source: Machine Learning for Vision-based Motion Analysis; 2011, p117-144, 28p
Abstract: Some camera surveillance systems are designed to be autonomous, both in terms of energy and of storage. Autonomy allows operation in environments where wiring cameras for power and data transmission is not feasible. In these contexts, operating cameras unattended over long periods requires choosing a low frame rate that matches the speed of the process under supervision while minimizing energy and storage usage. The result of surveillance is a large stream of images acquired sparsely over time, with limited visual continuity from one frame to the next. Reviewing these images to detect events of interest requires techniques that do not assume objects can be traced by visual similarity. When the process surveyed shows recurrent patterns of events, as is often the case in industrial settings, other possibilities open up. Since images are time-stamped, techniques that use temporal data can help detect events. This contribution presents an image review tool that combines a scene change detector (SCD) with a temporal filter. The temporal filter learns to recognize relevant SCD events by their temporal distribution in the image stream. Learning is supported by image annotations provided by end-users during past reviews. The concept is tested on a benchmark of real surveillance images stemming from a nuclear safeguards context. Experimental results show that the combined SCD-temporal filter significantly reduces the workload necessary to detect safeguards-relevant events in large image streams. [ABSTRACT FROM AUTHOR]
Database: Complementary Index
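
The abstract describes the pipeline only at a high level. The following is a minimal sketch of the idea, not the authors' implementation: it assumes frame differencing as the SCD and a logistic-regression temporal filter over simple timing features; all function names, features, and thresholds are illustrative assumptions.

```python
# Illustrative sketch (hypothetical): SCD by frame differencing, followed by a
# temporal filter trained on past end-user annotations to keep only the SCD
# events predicted to be relevant.
import numpy as np
from sklearn.linear_model import LogisticRegression

def scd_events(frames, timestamps, threshold=25.0):
    """Flag frames whose mean absolute difference from the previous frame
    exceeds `threshold`; return the timestamps of flagged frames."""
    events = []
    for prev, curr, t in zip(frames, frames[1:], timestamps[1:]):
        if np.mean(np.abs(curr.astype(float) - prev.astype(float))) > threshold:
            events.append(t)
    return events

def temporal_features(event_times):
    """Describe each SCD event by simple timing features (assumed here):
    seconds since the previous event and time of day."""
    feats, prev = [], None
    for t in event_times:
        gap = t - prev if prev is not None else 0.0
        feats.append([gap, t % 86400.0])
        prev = t
    return np.array(feats)

def train_temporal_filter(event_times, labels):
    """Fit the temporal filter from past reviews, where `labels` marks which
    SCD events the end-users annotated as relevant (1) or not (0)."""
    clf = LogisticRegression()
    clf.fit(temporal_features(event_times), labels)
    return clf

def review(clf, event_times):
    """Keep only the SCD events the temporal filter predicts as relevant,
    reducing the number of images a reviewer must examine."""
    keep = clf.predict(temporal_features(event_times)).astype(bool)
    return [t for t, k in zip(event_times, keep) if k]
```

In use, `scd_events` would run over the time-stamped image stream, `train_temporal_filter` would be fit on events annotated during past reviews, and `review` would filter new events; the feature set and classifier are placeholders for whatever temporal model the chapter actually develops.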