Ambitious Data Science Can Be Painless
Authors: | David L. Donoho, Hatef Monajemi, Percy Liang, Eric Jonas, Riccardo Murri, Victoria Stodden |
---|---|
Year: | 2019 |
Subject: |
FOS: Computer and information sciences; Computer science; Cloud computing; Cloud resources; Data science; Experimental work; Software; Documentation; Computer Science - Distributed, Parallel, and Cluster Computing (cs.DC) |
DOI: | 10.48550/arxiv.1901.08705 |
Description: | Modern data science research can involve massive computational experimentation; an ambitious PhD student in a computational field may run experiments consuming several million CPU hours. Traditional computing practices, in which researchers use laptops or shared campus-resident resources, are inadequate for experiments at the massive scale and varied scope that we now see in data science. Modern cloud computing, on the other hand, promises seemingly unlimited computational resources that can be custom configured, and seems to offer a powerful new venue for ambitious data-driven science. By exploiting the cloud fully, researchers can expand the amount of work completed in a fixed amount of time by several orders of magnitude. As potentially powerful as cloud-based experimentation may be in the abstract, it has not yet become a standard option for researchers in many academic disciplines. The prospect of actually conducting massive computational experiments in today's cloud systems confronts the potential user with daunting challenges. Leading considerations include: (i) the seeming complexity of today's cloud computing interfaces, (ii) the difficulty of executing an overwhelmingly large number of jobs, and (iii) the difficulty of monitoring and combining a massive collection of separate results. Starting a massive experiment 'bare-handed' therefore seems highly problematic and prone to rapid 'researcher burnout'. New software stacks are emerging that render massive cloud experiments relatively painless. Such stacks simplify experimentation by systematizing experiment definition, automating the distribution and management of tasks, and allowing easy harvesting of results and documentation. In this article, we discuss several painless computing stacks that abstract away the difficulties of massive experimentation, thereby allowing a proliferation of ambitious experiments for scientific discovery. Comment: Submitted to Harvard Data Science Review |
Database: | OpenAIRE |
External link: |
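The abstract describes a common pattern behind the painless computing stacks: systematize the experiment definition, distribute the tasks, then harvest the separate results. The following is a minimal sketch of that pattern scaled down to a single machine using only the Python standard library; the experiment function and parameter grid are illustrative placeholders, not taken from the paper, and the stacks discussed in the article apply the same pattern across thousands of cloud nodes.

```python
# Sketch of the "define, distribute, harvest" experiment pattern,
# scaled down to one machine with the standard library. The trial
# function and parameter grid are hypothetical placeholders.
from concurrent.futures import ProcessPoolExecutor
from itertools import product

def run_trial(params):
    """One unit of experimental work (placeholder computation)."""
    n, p = params
    # Stand-in for a real simulation: a deterministic score per grid cell.
    score = (n * p) % 7
    return {"n": n, "p": p, "score": score}

def main():
    # 1. Systematize the experiment definition as a parameter grid.
    grid = list(product(range(1, 4), range(1, 4)))  # 9 (n, p) cells
    # 2. Distribute the trials across worker processes.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(run_trial, grid))
    # 3. Harvest the separate results into one table for analysis.
    return {(r["n"], r["p"]): r["score"] for r in results}

if __name__ == "__main__":
    print(main())
```

In a cloud stack, step 2 would submit each cell as an independent job to remote machines and step 3 would collect outputs from remote storage, but the structure of the experiment script stays the same.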