Showing 1 - 10 of 304 for search: '"Pernot, Pascal"'
Author:
Pernot, Pascal
This short study presents an opportunistic approach to a (more) reliable validation method for prediction uncertainty average calibration. Considering that variance-based calibration metrics (ZMS, NLL, RCE...) are quite sensitive to the presence of […]
External link:
http://arxiv.org/abs/2408.13089
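A quick reference for the variance-based statistics named in this record; a minimal sketch assuming arrays of errors E and predicted standard uncertainties uE (the variable names and the Gaussian NLL form are illustrative, not taken from the paper):

```python
import numpy as np

def zms(E, uE):
    """Mean squared z-score <(E/uE)^2>; ~1 for average calibration."""
    return np.mean((E / uE) ** 2)

def nll(E, uE):
    """Gaussian negative log-likelihood, averaged over the points."""
    return 0.5 * np.mean(np.log(2 * np.pi * uE**2) + (E / uE) ** 2)

def rce(E, uE):
    """Relative calibration error (RMV - RMSE) / RMV; ~0 when calibrated."""
    rmv = np.sqrt(np.mean(uE**2))
    rmse = np.sqrt(np.mean(E**2))
    return (rmv - rmse) / rmv
```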
Author:
Pernot, Pascal
Some popular Machine Learning Uncertainty Quantification (ML-UQ) calibration statistics do not have predefined reference values and are mostly used in comparative studies. In consequence, calibration is almost never validated and the diagnostic is […]
External link:
http://arxiv.org/abs/2403.00423
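Reference values for such statistics can in principle be generated by Monte Carlo under the hypothesis of perfect calibration; a hedged sketch of that idea (not necessarily the paper's exact protocol), reusing for instance the zms function sketched above:

```python
import numpy as np

def simulated_reference(stat, uE, n_draws=1000, seed=None):
    """Distribution of a calibration statistic under perfect calibration,
    obtained by drawing synthetic errors E ~ N(0, uE) and recomputing
    the statistic for each draw."""
    rng = np.random.default_rng(seed)
    return np.array([stat(rng.normal(0.0, uE), uE) for _ in range(n_draws)])
```

An observed value of the statistic can then be compared to the percentiles of the simulated reference distribution.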
Author:
Pernot, Pascal
Average calibration of the (variance-based) prediction uncertainties of machine learning regression tasks can be tested in two ways: one is to estimate the calibration error (CE) as the difference between the mean squared error (MSE) and the mean variance […]
External link:
http://arxiv.org/abs/2402.10043
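A minimal sketch of the CE estimator as described in this snippet (variable names are illustrative):

```python
import numpy as np

def calibration_error(E, uE):
    """CE as the difference between the mean squared error and the mean
    variance; ~0 for average calibration."""
    return np.mean(E**2) - np.mean(uE**2)
```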
Author:
Pernot, Pascal
Binwise Variance Scaling (BVS) has recently been proposed as a post hoc recalibration method for prediction uncertainties of machine learning regression problems that is capable of more efficient corrections than uniform variance (or temperature) scaling […]
External link:
http://arxiv.org/abs/2310.11978
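A sketch of the core BVS idea, assuming equal-count bins over the sorted uncertainties (the paper studies more refined variants; recalibrating unseen data would additionally require storing bin edges):

```python
import numpy as np

def bvs_fit(E, uE, n_bins=10):
    """Binwise Variance Scaling sketch: split the points into equal-count
    bins of increasing predicted uncertainty and estimate one scaling
    factor per bin, so that the scaled uncertainties match the RMS error
    observed in that bin."""
    bins = np.array_split(np.argsort(uE), n_bins)
    s = np.array([np.sqrt(np.mean(E[idx] ** 2) / np.mean(uE[idx] ** 2))
                  for idx in bins])
    return bins, s

def bvs_apply(uE, bins, s):
    """Apply the per-bin factors to a copy of the uncertainties."""
    u_scaled = np.array(uE, dtype=float)
    for idx, sb in zip(bins, s):
        u_scaled[idx] *= sb
    return u_scaled
```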
Author:
Pernot, Pascal
Published in:
APL Mach. Learn. 1:046121 (2023)
Reliable uncertainty quantification (UQ) in machine learning (ML) regression tasks is becoming the focus of many studies in materials and chemical science. It is now well understood that average calibration is insufficient, and most studies implement […]
External link:
http://arxiv.org/abs/2309.06240
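Going beyond average calibration typically means testing it locally; a minimal sketch of such a check, assuming a binned ZMS over the uncertainty range (binning scheme and names are illustrative, not the paper's):

```python
import numpy as np

def local_zms(E, uE, n_bins=10):
    """Local calibration sketch: the mean squared z-score should be ~1
    not only globally but within each bin of predicted uncertainty."""
    z2 = (E / uE) ** 2
    bins = np.array_split(np.argsort(uE), n_bins)
    return np.array([z2[idx].mean() for idx in bins])
```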
Author:
Pernot, Pascal
Post hoc recalibration of prediction uncertainties of machine learning regression problems by isotonic regression might present a problem for bin-based calibration error statistics (e.g. ENCE). Isotonic regression often produces stratified uncertainties […]
External link:
http://arxiv.org/abs/2306.05180
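A sketch of the kind of recalibration at stake, assuming the common variant that fits a monotonic map from predicted uncertainty to observed absolute error on a calibration set (the paper's exact setup may differ). Because scikit-learn's IsotonicRegression fits a piecewise-constant function, the recalibrated uncertainties come out stratified into tied values, which is what disturbs uncertainty-based binning:

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

def isotonic_recalibrate(E_cal, uE_cal, uE_test):
    """Fit uE -> |E| on a calibration set, then map the test
    uncertainties through the resulting piecewise-constant function."""
    iso = IsotonicRegression(out_of_bounds="clip")
    iso.fit(uE_cal, np.abs(E_cal))
    return iso.predict(uE_test)
```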
Author:
Pernot, Pascal
The Expected Normalized Calibration Error (ENCE) is a popular calibration statistic used in Machine Learning to assess the quality of prediction uncertainties for regression problems. Estimation of the ENCE is based on the binning of calibration data […]
External link:
http://arxiv.org/abs/2305.11905
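A minimal sketch of the binned ENCE estimator, assuming equal-count bins (the dependence on the binning scheme is precisely what is at issue here):

```python
import numpy as np

def ence(E, uE, n_bins=10):
    """ENCE sketch: bin the points by predicted uncertainty, compare the
    root mean variance (RMV) to the RMSE within each bin, and average
    the relative discrepancies."""
    terms = []
    for idx in np.array_split(np.argsort(uE), n_bins):
        rmv = np.sqrt(np.mean(uE[idx] ** 2))
        rmse = np.sqrt(np.mean(E[idx] ** 2))
        terms.append(abs(rmv - rmse) / rmv)
    return np.mean(terms)
```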
Author:
Pernot, Pascal
The practice of uncertainty quantification (UQ) validation, notably in machine learning for the physico-chemical sciences, rests on several graphical methods (scatter plots, calibration curves, reliability diagrams and confidence curves) which […]
External link:
http://arxiv.org/abs/2303.07170
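As an illustration of one of the graphical methods listed above, a reliability-diagram sketch plotting binned RMSE against binned RMV (equal-count binning assumed; a calibrated model falls on the identity line):

```python
import numpy as np
import matplotlib.pyplot as plt

def reliability_diagram(E, uE, n_bins=10):
    """Per-bin RMSE against per-bin root mean variance (RMV); deviations
    from the identity line flag miscalibration as a function of the
    predicted uncertainty."""
    bins = np.array_split(np.argsort(uE), n_bins)
    rmv = np.array([np.sqrt(np.mean(uE[idx] ** 2)) for idx in bins])
    rmse = np.array([np.sqrt(np.mean(E[idx] ** 2)) for idx in bins])
    plt.plot(rmv, rmse, "o-", label="data")
    plt.plot(rmv, rmv, "k--", label="identity")
    plt.xlabel("RMV")
    plt.ylabel("RMSE")
    plt.legend()
    plt.show()
```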
Author:
Pernot, Pascal, Berthet, Jean-Paul
We review the alternative proposals introduced recently in the literature to update the standard formula to estimate the uncertainty on the mean of repeated measurements, and we compare their performances on synthetic examples with normal and non-normal […]
External link:
http://arxiv.org/abs/2209.04180
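For context, the standard formula that the review takes as its baseline, as a sketch:

```python
import numpy as np

def mean_uncertainty(x):
    """Standard formula for the uncertainty on the mean of N repeated
    measurements: sample standard deviation divided by sqrt(N)."""
    x = np.asarray(x, dtype=float)
    return x.std(ddof=1) / np.sqrt(len(x))
```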
Author:
Pernot, Pascal
Confidence curves are used in uncertainty validation to assess how large uncertainties ($u_{E}$) are associated with large errors ($E$). An oracle curve is commonly used as a reference to estimate the quality of the tested datasets. The oracle is a […]
External link:
http://arxiv.org/abs/2206.15272
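A minimal sketch of a confidence curve and its oracle reference, under the common convention of tracking the MAE of the retained points as the highest-uncertainty points are discarded (conventions vary; names are illustrative):

```python
import numpy as np

def confidence_curve(E, uE, fractions=np.linspace(0.0, 0.99, 100)):
    """Confidence curve sketch: discard a growing fraction of the points
    with the largest uncertainty uE and track the MAE of the remainder;
    the oracle discards by |E| instead, giving the steepest possible
    decrease."""
    aE = np.abs(E)
    by_u = aE[np.argsort(uE)]  # |errors| ordered by increasing uncertainty
    by_e = np.sort(aE)         # oracle: ordered by increasing |error|
    n = len(aE)
    curve, oracle = [], []
    for f in fractions:
        k = max(1, int(round((1.0 - f) * n)))  # number of points kept
        curve.append(by_u[:k].mean())
        oracle.append(by_e[:k].mean())
    return np.array(curve), np.array(oracle)
```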