Parallelization of the Array Method Using OpenMP

Author: Apolinar Velarde Martínez
Year of publication: 2021
Source: Advances in Soft Computing ISBN: 9783030898199
MICAI (2)
DOI: 10.1007/978-3-030-89820-5_24
Description: Shared memory programming and distributed memory programming are the most prominent ways of parallelizing applications that require long processing times and large amounts of storage in High Performance Computing (HPC) systems. Parallel applications can be represented as Parallel Task Graphs (PTGs) using Directed Acyclic Graphs (DAGs). The scheduling of PTGs in HPC systems is an NP-complete combinatorial problem that demands large amounts of storage and long processing times; heuristic methods implemented in sequential programming languages have been proposed to address it. In the open access paper Scheduling in Heterogeneous Distributed Computing Systems Based on Internal Structure of Parallel Tasks Graphs with Meta-Heuristics, the Array Method is presented. This method optimizes the use of Processing Elements (PEs) in an HPC system and improves response times in scheduling and resource mapping through the Univariate Marginal Distribution Algorithm (UMDA); it exploits the internal characteristics of PTGs to schedule tasks. The method was programmed sequentially in the C language, then analyzed and tested using algorithms that generate synthetic workloads as well as DAGs of real applications. Considering the great benefits of parallel software, this research work presents the Array Method using parallel programming with OpenMP. The experimental results show that the parallel implementation accelerates response times compared to the sequential one when evaluated on three metrics: waiting time, makespan, and quality of assignments.
Database: OpenAIRE