Local scene flow by tracking in intensity and depth

Authors: Julian Quiroga, Frédéric Devernay, James L. Crowley
Contributors: Perception, recognition and integration for observation of activity (PRIMA), Inria Grenoble - Rhône-Alpes, Institut National de Recherche en Informatique et en Automatique (Inria)-Université Joseph Fourier - Grenoble 1 (UJF)-Institut National Polytechnique de Grenoble (INPG)-Centre National de la Recherche Scientifique (CNRS); Departamento de Electrónica, Pontificia Universidad Javeriana (PUJ)
Language: English
Year of publication: 2014
Subjects:
Motion analysis
Computer science
Optical flow
Computing Methodologies: Image Processing and Computer Vision
02 engineering and technology
3D motion estimation
Tracking (particle physics)
01 natural sciences
010104 statistics & probability
Image warping
Motion estimation
0202 electrical engineering, electronic engineering, information engineering
Media Technology
Computer vision
0101 mathematics
Electrical and Electronic Engineering
Computing Methodologies: Computer Graphics
Pixel
business.industry
[INFO.INFO-CV] Computer Science [cs]/Computer Vision and Pattern Recognition [cs.CV]
Image tracking
Brightness consistency
Motion vector
Depth data
Motion field
Flow (mathematics)
Locally-rigid motion
Signal Processing
020201 artificial intelligence & image processing
Scene flow
Computer Vision and Pattern Recognition
Artificial intelligence
business
Source: Journal of Visual Communication and Image Representation
Journal of Visual Communication and Image Representation, Elsevier, 2014, 25 (1), pp. 98-107. ⟨10.1016/j.jvcir.2013.03.018⟩
ISSN: 1047-3203, 1095-9076
DOI: 10.1016/j.jvcir.2013.03.018
Description: Highlights: We propose a method to compute local scene flow by tracking in intensity and depth. We propose a pixel motion model that constrains the 3D motion vector in 2D. We extend the Lucas-Kanade framework to work with intensity and depth data. Through several experiments we demonstrate the validity of our method. We simultaneously solve for the 2D tracking and for the local scene flow.

The scene flow describes the motion of each 3D point between two time steps. With the arrival of new depth sensors, such as the Microsoft Kinect, it is now possible to compute scene flow with a single camera, with promising repercussions in a wide range of computer vision scenarios. We propose a novel method to compute a local scene flow by tracking in a Lucas-Kanade framework. Scene flow is estimated using a pair of aligned intensity and depth images, but rather than computing a dense scene flow as in most previous methods, we obtain a set of 3D motion vectors by tracking surface patches. Assuming local 3D rigidity of the scene, we propose a rigid translation flow model that allows solving directly for the scene flow by constraining the 3D motion field in both intensity and depth data. In our experiments we achieve very encouraging results. Since this approach solves simultaneously for the 2D tracking and for the scene flow, it can be used for motion analysis in existing 2D-tracking-based methods or to define scene flow descriptors.
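Illustrative formulation (a sketch under stated assumptions, not taken from the record itself): under the locally-rigid assumption described above, a Lucas-Kanade-style estimator over an aligned intensity/depth pair could minimize, for each surface patch W and a local 3D translation V = (V_x, V_y, V_z), an energy of the form

E(V) = \sum_{x \in W} \left[ I_1(w(x; V)) - I_0(x) \right]^2 + \lambda \sum_{x \in W} \left[ Z_1(w(x; V)) - \left( Z_0(x) + V_z \right) \right]^2

where I_0, I_1 are the intensity images, Z_0, Z_1 the aligned depth maps, and \lambda weights depth consistency against brightness consistency. Assuming a pinhole camera with focal length f and principal point c, the 2D warp induced by the 3D translation is

w(x; V) = x + \frac{f\,(V_x, V_y) - (x - c)\,V_z}{Z_0(x) + V_z}

The symbols f, c, \lambda, W and this exact energy are illustrative assumptions; they only sketch how a rigid translation flow model can constrain the 3D motion in both intensity and depth within a Lucas-Kanade iteration, and do not reproduce the paper's precise formulation.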
Database: OpenAIRE