Author:
Lee A. Barford, Rui Wu, Frederick C. Harris Jr., A. Grant Schissler, Xiang Li
Year of publication:
2019
Subject:

Source:
16th International Conference on Information Technology-New Generations (ITNG 2019) ISBN: 9783030140694
DOI:
10.1007/978-3-030-14070-0_46
Description:
Many complex real-world systems can be represented as correlated high-dimensional vectors (up to 20,501 in this paper). While univariate analysis is simpler, it does not account for correlations between variables. This omission often misleads researchers by producing results based on unrealistic assumptions. As the generation of large correlated data sets is time-consuming and resource heavy, we propose a graphics processing unit (GPU) accelerated version of the established NORmal To Anything (NORTA) algorithm. NORTA involves many independent and parallelizable operations, sparking our interest to deploy a Compute Unified Device Architecture (CUDA) implementation for use on Nvidia GPUs. NORTA begins by simulating independent standard normal vectors and transforms them into correlated vectors with arbitrary marginal distributions (heterogeneous random variables). In our benchmark studies using an Nvidia Tesla card, the speedup obtained over a sequential NORTA coded in R (R-NORTA) peaks at 19.6× for 2000 simulated random vectors with dimension 5000. Moreover, the speedup obtained for GPU-NORTA over a commonly used R package for multivariate simulation (the copula package) was 2093× for 2000 simulated random vectors with dimension 20,501. Our study serves as a preliminary proof of concept with opportunities for further optimization, implementation, and additional features.
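A minimal NumPy/SciPy sketch of the NORTA transform the abstract describes (simulate independent standard normals, induce correlation, then push each coordinate through an arbitrary marginal). The `norta` helper and the example marginals are illustrative assumptions, not the paper's code; full NORTA also pre-adjusts the input correlation matrix so the *output* correlations hit a target, a step this sketch omits:

```python
import numpy as np
from scipy.stats import norm, expon, gamma

def norta(corr, marginal_ppfs, n, rng=None):
    """Draw n correlated vectors via the NORTA transform.

    corr          -- correlation matrix for the underlying normals
    marginal_ppfs -- one inverse-CDF callable per dimension
    """
    rng = np.random.default_rng(rng)
    d = corr.shape[0]
    L = np.linalg.cholesky(corr)       # factor the correlation matrix
    z = rng.standard_normal((n, d))    # independent standard normal vectors
    x = z @ L.T                        # correlated standard normals
    u = norm.cdf(x)                    # map to uniforms via Phi
    # apply each marginal's inverse CDF column by column
    return np.column_stack([ppf(u[:, i]) for i, ppf in enumerate(marginal_ppfs)])

# Example: two correlated variables with exponential and gamma marginals
corr = np.array([[1.0, 0.7],
                 [0.7, 1.0]])
samples = norta(corr, [expon(scale=2.0).ppf, gamma(a=3.0).ppf], n=5000, rng=0)
```

Each column-wise inverse-CDF evaluation is independent of the others, which is the kind of embarrassingly parallel structure the abstract exploits on the GPU.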
Database:
OpenAIRE
External link:
