Skewness-Based Partitioning in SpatialHadoop
Author: Sara Migliorini, Ahmed Eldawy, Alberto Belussi
Language: English
Year of publication: 2020
Subject: Computer science; Big data; SpatialHadoop; MapReduce; Regular grid partitioning; Quadtree; Grid; Skewness; skewed data; Spatial analysis; Heuristic (computer science); Data mining; Geography, Planning and Development; Earth and Planetary Sciences (miscellaneous); Computers in Earth Sciences; lcsh:Geography (General) (lcsh:G1-922)
Source: ISPRS International Journal of Geo-Information, Vol. 9, Iss. 4, Article 201 (2020)
ISSN: 2220-9964
DOI: 10.3390/ijgi9040201
Description: In recent years, several extensions of the Hadoop system have been proposed for dealing with spatial data. SpatialHadoop belongs to this group of projects and includes MapReduce implementations of spatial operators, such as range queries and spatial join. The MapReduce paradigm is based on the fundamental principle that a task can be parallelized by partitioning data into chunks, performing the same operation on each of them (map phase), and then combining the partial results at the end (reduce phase). Thus, the applied partitioning technique can tremendously affect the performance of a parallel execution, since it is the key to obtaining balanced map tasks and exploiting the available parallelism as much as possible. For uniformly distributed datasets, this goal is easily achieved by partitioning the geometries of the input dataset with a regular grid covering the whole reference space; conversely, for skewed datasets this might not be the right choice, and other techniques have to be applied. For instance, SpatialHadoop can also produce a global index by means of a Quadtree-based grid or an R-tree-based grid, which are, in turn, more expensive index structures to build. This paper proposes a technique based on a box-counting function and a heuristic, rooted in theoretical properties and experimental observations, for detecting the degree of skewness of an input spatial dataset and then deciding which partitioning technique to apply, in order to improve as much as possible the performance of subsequent operations. Experiments on both synthetic and real datasets confirm the effectiveness of the proposed approach. (A minimal illustrative sketch of this idea follows the record below.)
Database: OpenAIRE
External link:
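
The Description above mentions a box-counting function combined with a heuristic for estimating how skewed a spatial dataset is and then choosing a partitioning technique accordingly. The Python sketch below is a minimal, hypothetical illustration of that general idea, not the algorithm from the paper: the use of the q = 0 box-counting exponent, the grid cell sizes, the 1.8 threshold, and the regular-grid-versus-Quadtree decision rule are all illustrative assumptions.

```python
# Minimal sketch (illustrative only, not the paper's algorithm): estimate the
# skewness of a 2D point dataset with a box-counting function and use a simple
# threshold to choose between a regular grid and a Quadtree-based partitioning.
# Cell sizes, the exponent used (q = 0), and the 1.8 threshold are assumptions.
import numpy as np


def box_count(points, cell_size, q=0):
    """Box-counting function BC^q(r): sum over grid cells of (points per cell)^q.
    For q = 0 this is simply the number of non-empty cells of side r."""
    cells = np.floor(points / cell_size).astype(np.int64)
    _, counts = np.unique(cells, axis=0, return_counts=True)
    if q == 0:
        return len(counts)  # number of non-empty cells
    return float(np.sum(counts.astype(float) ** q))


def box_counting_exponent(points, sizes, q=0):
    """Estimate the power-law exponent E_q as the slope of log BC^q(r) vs. log r."""
    log_r = np.log(sizes)
    log_bc = np.log([box_count(points, r, q) for r in sizes])
    slope, _intercept = np.polyfit(log_r, log_bc, 1)
    return slope


def choose_partitioner(points, sizes=(0.02, 0.04, 0.08, 0.16), threshold=1.8):
    """Toy decision rule: if |E_0| is close to the embedding dimension (2 for
    planar data), the dataset looks roughly uniform and a regular grid should
    give balanced partitions; otherwise fall back to a Quadtree-based grid."""
    e0 = abs(box_counting_exponent(np.asarray(points, dtype=float), sizes, q=0))
    return "regular grid" if e0 >= threshold else "quadtree"


if __name__ == "__main__":
    rng = np.random.default_rng(42)
    # Uniform points in the unit square: effective dimension close to 2.
    uniform = rng.random((50_000, 2))
    # Skewed, road-like data: points concentrated along a thin diagonal strip,
    # whose box-counting exponent is much closer to 1 than to 2.
    t = rng.random((50_000, 1))
    skewed = np.clip(np.hstack([t, t]) + rng.normal(0.0, 0.005, (50_000, 2)),
                     0.0, 0.999)
    print("uniform ->", choose_partitioner(uniform))  # typically: regular grid
    print("skewed  ->", choose_partitioner(skewed))   # typically: quadtree
```

In SpatialHadoop itself the chosen label would then map onto one of its existing partitioners (e.g., grid-based, Quadtree-based or R-tree-based global indexes); that wiring is omitted here.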