Comparison of 3D Point Cloud Processing and CNN Prediction Based on RGBD Images for Bionic-eye’s Navigation

Authors: Wen-Nung Lie, Ananya Kuasakunrungroj, Toshiaki Kondo, Sitapa Rujikietgumjorn, Hirohiko Kaneko
Year of publication: 2019
Subject:
Source: 2019 4th International Conference on Information Technology (InCIT).
DOI: 10.1109/incit.2019.8912066
Description: The Bionic Eye is a device intended to restore vision for the blind, developed on the basis of knowledge about visual pathway stimulation. However, because of the hardware limitations of bionic eye stimulation, the achievable stimulation patterns are low-resolution images. To make these low-resolution images useful for the blind, image processing is one of the major challenges in bionic eye development. For real-life applications, RGB-D images can be applied to enhance object detection for a bionic eye. Moreover, the depth information can be used to generate a danger map that guides the blind user along a safe pathway and away from obstacles that might be harmful. This paper focuses on comparing the danger map results of several RGB-D processing methods for bionic eye walkway navigation. The methods under comparison include 3D point cloud processing and four CNN (convolutional neural network) semantic segmentation models operating on RGB-D images. Images from the SUN RGB-D scene understanding benchmark suite are used as inputs in this experiment. The results show that each method has its own advantages; however, the convolutional neural networks appear to be significantly better in accuracy, precision, and recall than the other methods.
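The record does not spell out how a danger map is derived from depth. The following is only a minimal illustrative sketch of the general idea described in the abstract, assuming naive depth thresholding pooled down to a low-resolution stimulation grid; the thresholds, grid size, and function name are hypothetical and do not reflect the paper's actual point cloud or CNN pipelines.

```python
import numpy as np

def naive_danger_map(depth_m: np.ndarray,
                     near_thresh: float = 1.0,
                     mid_thresh: float = 2.5,
                     grid_shape: tuple = (20, 20)) -> np.ndarray:
    """Illustrative (assumed) danger map: classify each pixel by distance,
    then pool to a coarse grid matching a low-resolution stimulation array.

    depth_m     : HxW depth image in meters (0 or NaN = invalid reading).
    near_thresh : distances below this are 'high danger' (value 2).
    mid_thresh  : distances below this are 'moderate danger' (value 1).
    grid_shape  : resolution of the simulated stimulation pattern.
    """
    # Treat invalid readings as infinitely far away (i.e., not dangerous).
    depth = np.where(np.isfinite(depth_m) & (depth_m > 0), depth_m, np.inf)

    # Per-pixel danger levels: 2 = near obstacle, 1 = mid-range, 0 = clear.
    danger = np.zeros(depth.shape, dtype=np.uint8)
    danger[depth < mid_thresh] = 1
    danger[depth < near_thresh] = 2

    # Max-pool the per-pixel labels into the coarse grid so that any
    # dangerous pixel inside a cell marks the whole cell as dangerous.
    h, w = danger.shape
    gh, gw = grid_shape
    ys = np.linspace(0, h, gh + 1, dtype=int)
    xs = np.linspace(0, w, gw + 1, dtype=int)
    grid = np.zeros(grid_shape, dtype=np.uint8)
    for i in range(gh):
        for j in range(gw):
            cell = danger[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
            grid[i, j] = cell.max() if cell.size else 0
    return grid

# Example with a synthetic depth image (meters).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    depth = rng.uniform(0.5, 5.0, size=(480, 640))
    print(naive_danger_map(depth))
```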
Database: OpenAIRE