Real-time 3D human objects rendering based on multiple camera details
| Author | Enkhtogtokh Togootogtokh, Shwu-Huey Yen, Wei-Chun Chang, W. G. C. W. Kumara, Timothy K. Shih, Hui-Huang Hsu |
|---|---|
| Year of publication | 2016 |
| Subject | Computer Networks and Communications; Computer Science; Point Cloud; Software Engineering; Virtual Reality; Rendering (Computer Graphics); Computer Graphics; Hardware and Architecture; Computer Graphics (Images); Electrical Engineering, Electronic Engineering, Information Engineering; Media Technology; RGB Color Model; Artificial Intelligence & Image Processing; Computer Vision; Artificial Intelligence; Software; ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION; ComputingMethodologies_COMPUTERGRAPHICS |
| Source | Multimedia Tools and Applications, 76:11687-11713 |
| ISSN | 1573-7721; 1380-7501 |
| Description | 3D model construction techniques based on RGB-D information have been attracting great attention from researchers around the world in recent decades. The Microsoft Kinect RGB-D sensor is widely used in many research fields, such as computer vision, computer graphics, and human-computer interaction, because it provides both color and depth information. This paper presents our research findings on calibrating information from several Kinects in order to construct a 3D model of a human object and to render the texture captured by the RGB camera. We used multiple Kinect sensors interconnected in a network. The high-bit-rate streams captured at each Kinect are first sent to a centralized PC for processing; this can even be extended to a remote PC over the Internet. The main contributions of this work are the calibration of the multiple Kinects, the proper alignment of the point clouds they generate, and the generation of the 3D shape of the human object. Experimental results demonstrate that the proposed method provides a better 3D model of the captured human object. A minimal sketch of the multi-sensor point-cloud alignment step described here is given after this record. |
| Database | OpenAIRE |
| External link | |
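
To make the alignment step in the description concrete, the following is a minimal sketch of how point clouds captured by several calibrated Kinects could be fused into a single human model. This is not the authors' implementation: it assumes the Open3D library, placeholder file names (`kinect_0.ply`, `T_1_to_0.npy`, ...) and placeholder voxel/ICP parameters, and it uses point-to-point ICP refinement as a stand-in for the paper's own calibration and alignment procedure.

```python
# Minimal sketch (not the authors' implementation): fuse point clouds captured
# by several Kinects into one human model. Assumes the Open3D library;
# file names, extrinsic matrices, and parameters are illustrative placeholders.
import copy

import numpy as np
import open3d as o3d

# One point cloud per Kinect, plus a rough 4x4 extrinsic matrix mapping each
# sensor's frame into the frame of the first (reference) Kinect.
CLOUD_FILES = ["kinect_0.ply", "kinect_1.ply", "kinect_2.ply"]
EXTRINSICS = [np.eye(4), np.load("T_1_to_0.npy"), np.load("T_2_to_0.npy")]

VOXEL_SIZE = 0.01    # 1 cm downsampling; placeholder value
ICP_DISTANCE = 0.03  # max correspondence distance for ICP refinement; placeholder


def load_cloud(path):
    """Read a point cloud and downsample it to keep registration tractable."""
    cloud = o3d.io.read_point_cloud(path)
    return cloud.voxel_down_sample(VOXEL_SIZE)


reference = load_cloud(CLOUD_FILES[0])
merged = copy.deepcopy(reference)  # keep the reference fixed while accumulating

for path, init_transform in zip(CLOUD_FILES[1:], EXTRINSICS[1:]):
    source = load_cloud(path)
    # Start from the calibrated extrinsics, then refine with point-to-point ICP
    # so that residual calibration error does not smear the fused model.
    result = o3d.pipelines.registration.registration_icp(
        source, reference, ICP_DISTANCE, init_transform,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    source.transform(result.transformation)
    merged += source

# The fused cloud approximates the full-body shape seen by all sensors and can
# then be meshed and textured using the color (RGB) streams.
o3d.io.write_point_cloud("merged_human.ply", merged)
```

Refining pre-computed extrinsics with ICP is a common way to absorb residual calibration error before the fused cloud is meshed and textured; the paper's texture rendering and real-time streaming stages are outside the scope of this sketch.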