Generalization Error Minimization: A New Approach to Model Evaluation and Selection with an Application to Penalized Regression
Author: Ning Xu, Jian Hong, Timothy C. G. Fisher
Year of publication: 2016
Subject: FOS: Computer and information sciences; General Economics (econ.GN); Generalization; Model selection; Population; Estimator; Mathematics - Statistics Theory; Machine Learning (stat.ML); Statistics Theory (math.ST); Quantitative Finance - Economics; Cross-validation; FOS: Economics and business; Lasso (statistics); Statistics - Machine Learning; Sample size determination; FOS: Mathematics; Applied mathematics; Selection (genetic algorithm); Mathematics
Source: SSRN Electronic Journal.
ISSN: 1556-5068
DOI: 10.2139/ssrn.2854003
Description: We study model evaluation and model selection from the perspective of generalization ability (GA): the ability of a model to predict outcomes in new samples from the same population. We believe that GA is one way to formally address concerns about the external validity of a model. The GA of a model estimated on a sample can be measured by its empirical out-of-sample errors, called the generalization errors (GE). We derive upper bounds for the GE, which depend on sample sizes, model complexity and the distribution of the loss function. The upper bounds can be used to evaluate the GA of a model ex ante. We propose using generalization error minimization (GEM) as a framework for model selection. Using GEM, we are able to unify a large class of penalized regression estimators, including the lasso, ridge and bridge, under the same set of assumptions. We establish finite-sample and asymptotic properties (including $\mathcal{L}_2$-consistency) of the GEM estimator for both the $n \geqslant p$ and the $n < p$ cases. We also derive the $\mathcal{L}_2$-distance between the penalized and corresponding unpenalized regression estimates. In practice, GEM can be implemented by validation or cross-validation. We show that the GE bounds can be used for selecting the optimal number of folds in $K$-fold cross-validation. We propose a variant of $R^2$, the $GR^2$, as a measure of GA that considers both in-sample and out-of-sample goodness of fit. Simulations are used to demonstrate our key results. This paper is a theoretical generalization and extension of arXiv:1606.00142 and arXiv:1609.03344.
Database: OpenAIRE
External link:
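
The description above notes that GEM can be implemented in practice by validation or cross-validation: candidate models are compared on their empirical out-of-sample errors, and the one with the smallest estimated generalization error is selected. Below is a minimal sketch of this idea for the lasso, assuming scikit-learn and a fixed $K = 5$; the data-generating process, the penalty grid, and the choice of $K$ are illustrative assumptions, not the authors' implementation (the paper's GE bounds would guide the choice of $K$).

```python
# A minimal sketch of generalization error minimization (GEM) via K-fold
# cross-validation for the lasso. K = 5, the penalty grid, and the
# synthetic data below are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n, p = 100, 20
X = rng.standard_normal((n, p))
beta = np.concatenate([np.array([3.0, -2.0, 1.5]), np.zeros(p - 3)])
y = X @ beta + rng.standard_normal(n)

alphas = np.logspace(-3, 1, 30)  # candidate penalty levels
K = 5                            # number of folds (assumed here; the
                                 # paper's GE bounds can inform this choice)
kf = KFold(n_splits=K, shuffle=True, random_state=0)

def cv_generalization_error(alpha):
    """Average out-of-sample squared error across the K folds:
    an empirical estimate of the generalization error (GE)."""
    errors = []
    for train_idx, test_idx in kf.split(X):
        model = Lasso(alpha=alpha).fit(X[train_idx], y[train_idx])
        resid = y[test_idx] - model.predict(X[test_idx])
        errors.append(np.mean(resid ** 2))
    return np.mean(errors)

# GEM: select the candidate (here, the penalty level) with the
# smallest estimated generalization error.
ge = np.array([cv_generalization_error(a) for a in alphas])
best_alpha = alphas[np.argmin(ge)]
print(f"GEM-selected penalty: {best_alpha:.4f}, estimated GE: {ge.min():.4f}")
```

The same loop applies to any family of penalized estimators (ridge, bridge) by swapping the model class, which mirrors how GEM unifies these estimators under a single selection criterion.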