Author:
Ahady Dolatsara H; Graduate School of Management, Clark University, Worcester, Massachusetts, USA., Chen YJ; Department of Mathematics, University of Dayton, Dayton, Ohio, USA., Leonard RD; Farmer School of Business, Miami University, Oxford, Ohio, USA., Megahed FM; Farmer School of Business, Miami University, Oxford, Ohio, USA., Jones-Farmer LA; Farmer School of Business, Miami University, Oxford, Ohio, USA.
Language:
English
Source:
Big data [Big Data] 2023 Jun; Vol. 11 (3), pp. 199-214. Date of Electronic Publication: 2021 Oct 05. |
DOI:
10.1089/big.2021.0067 |
Abstract:
Although confirmatory modeling has dominated much of applied research in medical, business, and behavioral sciences, modeling large data sets with the goal of accurate prediction has become more widely accepted. The current practice for fitting predictive models is guided by heuristic-based modeling frameworks that lead researchers to make a series of often isolated decisions regarding data preparation and cleaning that may result in substandard predictive performance. In this article, we use an experimental design to evaluate the impact of six factors related to data preparation and model selection (techniques for numerical imputation, categorical imputation, encoding, subsampling for unbalanced data, feature selection, and machine learning algorithm) and their interactions on the predictive accuracy of models applied to a large, publicly available heart transplantation database. Our factorial experiment includes 10,800 models evaluated on 5 independent test partitions of the data. Results confirm that some decisions made early in the modeling process interact with later decisions to affect predictive performance; therefore, the current practice of making these decisions independently can negatively affect predictive outcomes. A key result of this case study is to highlight the need for improved rigor in applied predictive research. By using the scientific method to inform predictive modeling, we can work toward a framework for applied predictive modeling and a standard for reproducibility in predictive research.
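The sketch below is not the authors' code; it is a minimal Python illustration of the kind of factorial evaluation the abstract describes, reduced to three assumed factors (numerical imputation, feature selection, and learning algorithm) on synthetic data. All factor levels, the data, and the AUC metric are assumptions chosen for illustration; the study itself crosses six factors on a heart transplantation database.

```python
# Minimal sketch (assumed, not from the article): evaluate every combination of
# pipeline choices on the same cross-validation partitions, so that main effects
# and interactions between preparation steps and models can be compared directly.
from itertools import product

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold
from sklearn.pipeline import Pipeline

# Synthetic stand-in for a large, imbalanced clinical data set (assumption).
X, y = make_classification(n_samples=5000, n_features=30,
                           weights=[0.85, 0.15], random_state=0)
rng = np.random.default_rng(0)
X[rng.random(X.shape) < 0.05] = np.nan  # inject missingness so imputation matters

# Factor levels (assumed): numerical imputation, feature selection, algorithm.
imputers = {"mean": SimpleImputer(strategy="mean"),
            "median": SimpleImputer(strategy="median")}
selectors = {"all": "passthrough",
             "k10": SelectKBest(f_classif, k=10)}
learners = {"logit": LogisticRegression(max_iter=1000),
            "rf": RandomForestClassifier(n_estimators=200, random_state=0)}

# Fixed 5-fold partitions shared by every factor combination.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

for (imp_name, imp), (sel_name, sel), (lrn_name, lrn) in product(
        imputers.items(), selectors.items(), learners.items()):
    aucs = []
    for train_idx, test_idx in cv.split(X, y):
        pipe = Pipeline([("impute", imp), ("select", sel), ("model", lrn)])
        pipe.fit(X[train_idx], y[train_idx])
        scores = pipe.predict_proba(X[test_idx])[:, 1]
        aucs.append(roc_auc_score(y[test_idx], scores))
    print(f"{imp_name:>6} | {sel_name:>4} | {lrn_name:>5} | mean AUC = {np.mean(aucs):.3f}")
```

Because each combination is scored on identical test partitions, differences in mean AUC across the grid reflect the joint effect of the preparation and modeling choices rather than sampling noise, which is the point the abstract makes about early decisions interacting with later ones.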
Database:
MEDLINE |
External link:
|