Author:
Kovatch P; Genetics and Genomic Sciences, Icahn School of Medicine at Mount Sinai, New York, USA., Gai L; Office of the Dean, Icahn School of Medicine at Mount Sinai, New York, USA., Cho HM; Office of the Dean, Icahn School of Medicine at Mount Sinai, New York, USA., Fluder E; Office of the Dean, Icahn School of Medicine at Mount Sinai, New York, USA., Jiang D; Office of the Dean, Icahn School of Medicine at Mount Sinai, New York, USA.
Language:
English
Source:
IEEE International Symposium on Parallel & Distributed Processing, Workshops and PhD Forum [IEEE Int Symp Parallel Distrib Process Workshops Phd Forum] 2020 May; Vol. 2020, pp. 183-192. Date of Electronic Publication: 2020 Jul 28.
DOI:
10.1109/ipdpsw50202.2020.00040
Abstract:
The productivity of computational biologists is limited by the speed of their workflows and subsequent overall job throughput. Because most biomedical researchers focus on better understanding scientific phenomena rather than on developing and optimizing code, a computing and data system implemented in an ad hoc or non-optimized manner can impede the progress of scientific discovery. In our experience, most computational life-science applications do not leverage the full capabilities of high-performance computing, so tuning a system for these applications is especially critical. To optimize a system effectively, systems staff must understand the effects of the applications on the system. Effective stewardship of the system includes an analysis of the impact of the applications on the compute cores, file system, resource manager, and queuing policies. The resulting improved system design, together with the enactment of a sustainability plan, helps to enable a long-term resource for productive computational and data science. We present a case study of a typical biomedical computational workload at a leading academic medical center supporting over $100 million per year in computational biology research. Over the past eight years, our high-performance computing system has enabled over 900 biomedical publications in four major areas: genetics and population analysis, gene expression, machine learning, and structural and chemical biology. We have upgraded the system several times in response to trends, actual usage, and user feedback. Major components crucial to this evolution include scheduling structure and policies, memory size, compute type and speed, parallel file system capabilities, and deployment of cloud technologies. We evolved a 70-teraflop machine into a 1.4-petaflop machine in seven years and grew our user base nearly 10-fold. For long-term stability and sustainability, we established a chargeback fee structure. Our overarching guiding principle for each progression has been to increase scientific throughput and enable enhanced scientific fidelity with minimal impact on existing user workflows or code. This highly constrained system optimization has presented unique challenges, leading us to adopt new approaches to provide constructive pathways forward. We share our practical strategies resulting from our ongoing growth and assessments.
Database:
MEDLINE
External link:
|