Insights into Batch Selection for Event-Camera Motion Estimation.

Authors: Valerdi JL; Bartolozzi C; Glover A (all: Event-Driven Perception for Robotics, Istituto Italiano di Tecnologia, 16163 Genova, Italy).
Language: English
Source: Sensors (Basel, Switzerland) [Sensors (Basel)] 2023 Apr 03; Vol. 23 (7). Date of Electronic Publication: 2023 Apr 03.
DOI: 10.3390/s23073699
Abstract: Event cameras measure scene changes with high temporal resolution, making them well suited for visual motion estimation. The activation of pixels results in an asynchronous stream of digital data (events) that rolls continuously over time, without the discrete temporal boundaries typical of frame-based cameras (where a data packet or frame is emitted at a fixed temporal rate). As such, it is not trivial to define a priori how to group/accumulate events in a way that is sufficient for computation, and the suitable number of events can vary greatly across environments, motion patterns, and tasks. In this paper, we use neural networks for rotational motion estimation as a scenario to investigate how to appropriately select event batches to populate input tensors. Our results show that batch selection has a large impact on performance: training should be performed on a wide variety of batches, regardless of the batch selection method; a simple fixed-time window is a good choice for inference compared to fixed-count batches, and it performs comparably to more complex methods. Our initial hypothesis that a minimal number of events is required to estimate motion (as in contrast maximization) does not hold when estimating motion with a neural network.
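To make the two batch-selection strategies mentioned in the abstract concrete, the sketch below groups an event stream either by a fixed event count or by a fixed time window. This is a minimal illustration under stated assumptions, not the authors' implementation: the (x, y, timestamp, polarity) event layout, the function names, and the use of NumPy are all assumptions for the example.

```python
import numpy as np

def fixed_count_batches(events, count):
    """Group an event stream into batches of a fixed number of events.

    events: array of shape (N, 4) with assumed columns (x, y, timestamp, polarity),
    sorted by timestamp. The last batch may contain fewer than `count` events.
    """
    return [events[i:i + count] for i in range(0, len(events), count)]

def fixed_time_batches(events, window):
    """Group an event stream into batches spanning a fixed time window.

    Timestamps are assumed to be in the same unit as `window` (e.g. seconds).
    Batches therefore contain a variable number of events depending on scene
    activity, which is the trade-off discussed in the abstract.
    """
    timestamps = events[:, 2]
    start, end = timestamps[0], timestamps[-1]
    # Find the index boundaries of each temporal slice in the sorted stream.
    edges = np.searchsorted(timestamps, np.arange(start, end, window))
    edges = np.append(edges, len(events))
    return [events[a:b] for a, b in zip(edges[:-1], edges[1:]) if b > a]
```

Each batch returned by either function could then be accumulated into an input tensor (for example, an event-count or time-surface image) before being passed to the network; that accumulation step is task-specific and is not shown here.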
Database: MEDLINE