Automated Learning of Interpretable Models with Quantified Uncertainty
Author: G.F. Bomarito, P.E. Leser, N.C.M. Strauss, K.M. Garbrecht, J.D. Hochhalter
Year: 2022
Subject: FOS: Computer and information sciences; Machine Learning (cs.LG); Neural and Evolutionary Computing (cs.NE); Mechanics of Materials; Mechanical Engineering; Computational Mechanics; General Physics and Astronomy; Computer Science Applications
DOI: 10.48550/arxiv.2205.01626
Description: Interpretability and uncertainty quantification in machine learning can justify decisions, promote scientific discovery, and lead to a better understanding of model behavior. Symbolic regression provides inherently interpretable machine learning, but relatively little work has focused on applying symbolic regression to noisy data and on the accompanying need to quantify uncertainty. A new Bayesian framework for genetic-programming-based symbolic regression (GPSR) is introduced that uses model evidence (i.e., marginal likelihood) to formulate replacement probability during the selection phase of evolution. Model parameter uncertainty is quantified automatically, enabling probabilistic predictions with each equation produced by the GPSR algorithm. Model evidence is also quantified in this process, and its use is shown to increase interpretability, improve robustness to noise, and reduce overfitting when compared with a conventional GPSR implementation in both numerical and physical experiments.
Database: OpenAIRE
External link:
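
The description above outlines the core mechanism: approximate each candidate equation's model evidence and use the evidence ratio to decide whether a child equation replaces its parent during selection. Below is a minimal, hypothetical Python sketch of that idea. The evidence is approximated here with Laplace's method under an assumed Gaussian likelihood and an isotropic Gaussian prior; the paper's actual estimator, priors, and replacement rule may differ, and every function name and parameter below (`log_evidence_laplace`, `replace_parent`, `prior_var`, the `min(1, ratio)` rule) is an illustrative assumption, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

def log_evidence_laplace(model, theta0, x, y, noise_std, prior_var=10.0):
    """Laplace approximation of the log model evidence, log p(y | model).

    Assumes `model(theta, x)` returns predictions, Gaussian noise with a
    known standard deviation `noise_std`, and an isotropic Gaussian prior
    on the parameters with variance `prior_var` (all sketch assumptions).
    """
    d = len(theta0)

    def neg_log_post(theta):
        resid = y - model(theta, x)
        log_lik = (-0.5 * np.sum(resid ** 2) / noise_std ** 2
                   - 0.5 * len(y) * np.log(2.0 * np.pi * noise_std ** 2))
        log_prior = (-0.5 * np.sum(theta ** 2) / prior_var
                     - 0.5 * d * np.log(2.0 * np.pi * prior_var))
        return -(log_lik + log_prior)

    fit = minimize(neg_log_post, theta0, method="BFGS")
    # Laplace: log p(y) ~= -neg_log_post(theta*) + (d/2) log(2 pi)
    #          + (1/2) log |H^{-1}|, where H is the Hessian of neg_log_post
    # at the mode; BFGS's inverse-Hessian estimate stands in for H^{-1}.
    _, logdet = np.linalg.slogdet(fit.hess_inv)
    return -fit.fun + 0.5 * d * np.log(2.0 * np.pi) + 0.5 * logdet

def replace_parent(rng, log_ev_parent, log_ev_child):
    """Child replaces parent with probability min(1, evidence ratio)."""
    if log_ev_child >= log_ev_parent:
        return True
    return rng.random() < np.exp(log_ev_child - log_ev_parent)

# Toy usage: a needlessly complex child competes against a simpler parent
# on noisy linear data.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + rng.normal(scale=0.1, size=x.size)

def linear(theta, x): return theta[0] * x                    # parent: a*x
def cubic(theta, x): return theta[0] * x + theta[1] * x ** 3  # child: a*x + b*x^3

ev_lin = log_evidence_laplace(linear, np.zeros(1), x, y, noise_std=0.1)
ev_cub = log_evidence_laplace(cubic, np.zeros(2), x, y, noise_std=0.1)
print(replace_parent(rng, ev_lin, ev_cub))
```

Because the evidence marginalizes over parameters, the extra parameter of the cubic child buys little likelihood but pays an Occam penalty, so on this data the child should rarely replace the parent. That built-in complexity penalty is the mechanism the description credits for the improved robustness to noise and reduced overfitting relative to conventional fitness-based GPSR selection.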