SIFT implementation based on GPU
Author: | Xiao-feng Wei, Ze-xun Geng, Chen Shen, Chao Jiang |
Year of publication: | 2013 |
Subject: | Computer science, Feature vector, OpenGL, Image processing and computer vision, Scale-invariant feature transform, Image processing, Rendering (computer graphics), Computer graphics, Computer vision, Central processing unit, Shading language, Artificial intelligence |
Source: | SPIE Proceedings |
ISSN: | 0277-786X |
DOI: | 10.1117/12.2031661 |
Description: | Image matching is a core research topic in digital photogrammetry and computer vision. SIFT (Scale-Invariant Feature Transform) is a feature matching algorithm based on local invariant features, proposed by Lowe in 1999. SIFT features are invariant to image rotation and scaling, and partially invariant to changes in 3D camera viewpoint and illumination. They are well localized in both the spatial and frequency domains, reducing the probability of disruption by occlusion, clutter, or noise. The algorithm is therefore widely used in image matching and in 3D reconstruction from stereo images. Traditional SIFT implementations and optimizations generally target the CPU. Because of the large number of extracted features (even a scene with only a few objects can yield many SIFT features), the high dimensionality of the feature vector (usually 128 dimensions), and the overall complexity of the algorithm, SIFT runs slowly on the CPU and is hard-pressed to meet real-time requirements. The programmable graphics processing unit (PGPU) found in current computers is commonly used as a dedicated device for image processing. Recent development experience shows that a high-performance GPU can achieve roughly ten times the single-precision floating-point throughput of a contemporary high-performance desktop CPU, and its memory bandwidth can be up to five times that of a desktop platform of the same period. For the same computing power, a GPU-based system should also cost less and consume less power than a CPU-based system. Moreover, because graphics rendering and image processing are inherently parallel, GPU acceleration is an efficient solution for algorithms with real-time requirements. In this paper, we implement the SIFT algorithm in the OpenGL shading language and compare the results with a CPU implementation. Experiments demonstrate that the efficiency of the GPU-based SIFT algorithm is significantly improved. |
Database: | OpenAIRE |
External link: |
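The record does not include the authors' shader code. As a minimal, hypothetical sketch of how one stage of SIFT maps onto the OpenGL shading language the abstract mentions, the fragment shader below computes a difference-of-Gaussians (DoG) response between two pre-blurred scale levels stored in textures; sampler names and the version are assumptions, and a pass-through vertex shader plus a host-side render-to-texture setup are taken for granted.

```glsl
// Hypothetical GLSL 1.20 fragment shader: one DoG step of GPU-based SIFT.
// Sampler names (blurLevelA, blurLevelB) are illustrative, not from the paper.
#version 120
uniform sampler2D blurLevelA;   // image blurred with sigma_k
uniform sampler2D blurLevelB;   // image blurred with sigma_{k+1}

void main()
{
    vec2 uv = gl_TexCoord[0].xy;
    float a = texture2D(blurLevelA, uv).r;
    float b = texture2D(blurLevelB, uv).r;

    // DoG response; keypoint candidates would be found later as extrema
    // of this value across space and scale in a separate pass.
    gl_FragColor = vec4(b - a, 0.0, 0.0, 1.0);
}
```

Rendering a full-screen quad with this shader, with the output bound to a framebuffer texture, produces one DoG layer per pass; repeating it per octave and scale builds the DoG pyramid that the extremum-detection and descriptor stages would then consume.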