PROVABLY CONSISTENT DISTRIBUTED DELAUNAY TRIANGULATION

Authors: Murat Yirci, Pooran Memari, Mathieu Brédif, Laurent Caraffa
Contributors: Laboratoire sciences et technologies de l'information géographique (LaSTIG), École des Ingénieurs de la Ville de Paris (EIVP), École nationale des sciences géographiques (ENSG), Institut National de l'Information Géographique et Forestière (IGN), Université Gustave Eiffel, Laboratoire d'informatique de l'École polytechnique [Palaiseau] (LIX), Centre National de la Recherche Scientifique (CNRS), École polytechnique (X)
Year of publication: 2020
Source: ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. V-2-2020, pp. 195-202 (2020)
ISPRS Congress, Aug 2020, Nice, France, ⟨10.5194/isprs-annals-V-2-2020-195-2020⟩
ISSN: 2194-9050
DOI: 10.5194/isprs-annals-v-2-2020-195-2020
Abstract: This paper deals with the distributed computation of Delaunay triangulations of massive point sets, motivated mainly by the needs of a scalable out-of-core surface reconstruction workflow for massive urban LIDAR datasets. Such data often correspond to a huge point cloud represented as a set of tiles containing relatively homogeneous numbers of points. This is the input of our algorithm, which naturally partitions the data across multiple processing elements. The distributed computation and the communication between processing elements are orchestrated efficiently through a decentralized model that represents, manages and locally constructs the triangulation corresponding to each tile. Initially inspired by the star splaying approach, we review the Tile & Merge algorithm for computing distributed Delaunay triangulations on the cloud, provide a theoretical proof of correctness of this algorithm, and analyse the performance of our Spark implementation in terms of speedup and strong scaling on both synthetic and real-world datasets. An HPC implementation (e.g. using MPI), left for future work, would benefit from a more efficient message-passing paradigm but would lose the robustness and failure resilience of our Spark approach.
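
To make the tiling idea in the abstract concrete, here is a minimal sketch in Python (NumPy/SciPy): it partitions a 2D point set into grid tiles and triangulates each tile independently, which is only the naive starting point. The paper's actual contribution, exchanging points between tiles (in the spirit of star splaying) until every local triangulation provably agrees with the global Delaunay triangulation, executed as Spark tasks, is not reproduced here; the function name tile_points and the tile size are hypothetical choices for illustration.

    # Illustrative sketch only -- not the paper's Tile & Merge algorithm.
    import numpy as np
    from scipy.spatial import Delaunay

    def tile_points(points, tile_size):
        """Group 2D points by the (ix, iy) grid tile that contains them."""
        tiles = {}
        for p in points:
            key = (int(p[0] // tile_size), int(p[1] // tile_size))
            tiles.setdefault(key, []).append(p)
        return {k: np.asarray(v) for k, v in tiles.items()}

    rng = np.random.default_rng(0)
    points = rng.random((10_000, 2))             # synthetic point cloud in [0,1)^2
    tiles = tile_points(points, tile_size=0.25)  # 4x4 grid of tiles

    # Each tile is triangulated independently; in the distributed setting each
    # of these calls would run on its own processing element (e.g. one Spark
    # task per tile). Consistency across tile boundaries is NOT handled here.
    local_dt = {k: Delaunay(pts) for k, pts in tiles.items() if len(pts) >= 3}
    print(sum(t.simplices.shape[0] for t in local_dt.values()), "local triangles")

Near tile boundaries the local triangulations above generally disagree with the global Delaunay triangulation; resolving exactly that inconsistency, with a proof that the point-exchange process terminates in a consistent state, is what the reviewed Tile & Merge algorithm addresses.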
Database: OpenAIRE