Author:
Bent, John; Faibish, Sorin; Ahrens, Jim; Grider, Gary; Patchett, John; Tzelnic, Percy; Woodring, Jon
Source:
2012 IEEE 28th Symposium on Mass Storage Systems & Technologies (MSST); 1/1/2012, p1-5, 5p
Abstract:
In the petascale era, the storage stack used by the extreme-scale high-performance computing community is fairly homogeneous across sites. On the compute edge of the stack, file system clients or IO forwarding services direct IO over an interconnect network to a relatively small set of IO nodes. These nodes forward the requests over a secondary storage network to a spindle-based parallel file system. Unfortunately, this architecture will become unviable in the exascale era. As the density growth of disks continues to outpace increases in their rotational speeds, disks are becoming increasingly cost-effective for capacity but decreasingly so for bandwidth. Fortunately, new storage media such as solid-state devices are filling this gap: although not cost-effective for capacity, they are for performance. This suggests that the storage stack at exascale will incorporate solid-state storage between the compute nodes and the parallel file system. There are three natural places in which to position this new storage layer: within the compute nodes, the IO nodes, or the parallel file system. In this paper, we argue that the IO nodes are the appropriate location for HPC workloads and show results from a prototype system that we have built accordingly. Running a pipeline of computational simulation and visualization, we show that our prototype system reduces total time to completion by up to 30%. [ABSTRACT FROM PUBLISHER]
Database:
Complementary Index
External link: