Accelerate the Execution of Graph Processing Using GPU
Author: Sandip M. Walunj, Shweta Nitin Aher
Year of publication: 2018
Subject: distributed computing; computer science; programming complexity; graphics processing unit; software engineering; engineering and technology; parallel computing; CUDA; shared memory; pinned memory; electrical engineering, electronic engineering, information engineering; graph (abstract data type); Dijkstra's algorithm; Mathematics of Computing (Discrete Mathematics); data transmission
Source: Information and Communication Technology for Intelligent Systems (ISBN 9789811317415)
DOI: 10.1007/978-981-13-1742-2_13
Description: A graph data structure is a collection of vertices and edges; graphs are used to model objects in social networks and web graphs. In practice, many applications work with large-scale graphs that contain a wide range of vertices connected by billions of edges, and processing such graphs is challenging. The Graphics Processing Unit (GPU) is an electronic circuit used to increase performance, and it is paired with various graph algorithms to speed up the processing of large graphs. Even so, large graphs remain difficult to process because of their millions of vertices and edges and the irregularity of their structure. Pregel and Medusa are programming frameworks that were developed to process large graphs: Pregel works in iterations and was designed to solve graph problems with parallel computing, while Medusa provides an API that eases programming and hides programming complexity. However, these systems suffer from irregular memory access and load imbalance. To simplify graph processing, the proposed system uses a GPU with shortest-path algorithms; SSSP, APSP, and BFS are used to handle large graphs. The proposed system uses the GPU's shared memory to achieve higher performance and lower computing time, which helps resolve the problem of irregular memory access. To minimize data-transfer time between the CPU and the GPU, the system uses pinned memory and batches many small transfers into a single transfer (see the sketch following this record).
Database: OpenAIRE
External link:
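The description above names two optimizations, staging data in the GPU's shared memory and batching small CPU-to-GPU copies through pinned memory, but the record contains no code. The following is a minimal, illustrative CUDA sketch of those two ideas only, not the authors' implementation: it runs a Bellman-Ford-style SSSP relaxation in which each block loads a tile of edges into shared memory, and the edge list sits in a single pinned host buffer so it reaches the device in one transfer. All names (relaxEdges, TILE, the toy graph) are this sketch's own assumptions.

```cuda
// Illustrative sketch only (not from the paper): shared-memory edge tiles plus a
// batched transfer out of pinned host memory for a Bellman-Ford-style SSSP pass.
#include <cstdio>
#include <climits>
#include <cuda_runtime.h>

#define TILE 256  // edges staged per block in shared memory

struct Edge { int src, dst, w; };

// Each block stages one tile of edges in shared memory, then relaxes them.
__global__ void relaxEdges(const Edge* edges, int numEdges, int* dist) {
    __shared__ Edge tile[TILE];                 // shared memory cuts irregular global reads
    int base = blockIdx.x * TILE;
    int tid  = threadIdx.x;
    if (base + tid < numEdges)
        tile[tid] = edges[base + tid];          // coalesced load into shared memory
    __syncthreads();

    if (base + tid < numEdges) {
        Edge e  = tile[tid];
        int  du = dist[e.src];
        if (du != INT_MAX)
            atomicMin(&dist[e.dst], du + e.w);  // relax edge src -> dst
    }
}

int main() {
    // Toy graph: 4 vertices, 5 edges, source vertex 0.
    const int V = 4, E = 5;
    Edge hostEdges[E] = {{0,1,4},{0,2,1},{2,1,2},{1,3,5},{2,3,8}};

    // Pinned (page-locked) host buffers: faster DMA, and the whole edge list is
    // batched into ONE cudaMemcpy instead of many small host-to-device copies.
    Edge* pinnedEdges; int* pinnedDist;
    cudaMallocHost((void**)&pinnedEdges, E * sizeof(Edge));
    cudaMallocHost((void**)&pinnedDist,  V * sizeof(int));
    for (int i = 0; i < E; ++i) pinnedEdges[i] = hostEdges[i];
    for (int v = 0; v < V; ++v) pinnedDist[v] = (v == 0) ? 0 : INT_MAX;

    Edge* dEdges; int* dDist;
    cudaMalloc((void**)&dEdges, E * sizeof(Edge));
    cudaMalloc((void**)&dDist,  V * sizeof(int));
    cudaMemcpy(dEdges, pinnedEdges, E * sizeof(Edge), cudaMemcpyHostToDevice);
    cudaMemcpy(dDist,  pinnedDist,  V * sizeof(int),  cudaMemcpyHostToDevice);

    // Bellman-Ford converges in at most V-1 relaxation rounds.
    int blocks = (E + TILE - 1) / TILE;
    for (int round = 0; round < V - 1; ++round)
        relaxEdges<<<blocks, TILE>>>(dEdges, E, dDist);

    cudaMemcpy(pinnedDist, dDist, V * sizeof(int), cudaMemcpyDeviceToHost);
    for (int v = 0; v < V; ++v)
        printf("dist[%d] = %d\n", v, pinnedDist[v]);

    cudaFree(dEdges); cudaFree(dDist);
    cudaFreeHost(pinnedEdges); cudaFreeHost(pinnedDist);
    return 0;
}
```

A fuller version would presumably use cudaMemcpyAsync on CUDA streams so the batched, pinned-memory transfers can overlap kernel execution, in line with the abstract's emphasis on reducing CPU-to-GPU transfer time.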