Empirical error-confidence curves for neural network and Gaussian classifiers
Author: Art B. Owen, David G. Stork, Gregory J. Wolff
Year of publication: 1996
Subject: Artificial neural network; Computer Networks and Communications; Gaussian; Normal distribution; Reproducibility of Results; Bayes' theorem; General Medicine; Linear discriminant analysis; Confidence interval; Naive Bayes classifier; Statistics; Bayes error rate; Computer Simulation; Neural Networks, Computer; Mathematics; Probability
Source: International Journal of Neural Systems, 7(3)
ISSN: 0129-0657
Description: "Error-confidence" measures the probability that the proportion of errors made by a classifier will be within ε of E_B, the optimal (Bayes) error. Probably Almost Bayes (PAB) theory attempts to quantify how this confidence increases with the number of training samples. We investigate the relationship empirically by comparing average error versus the number of training patterns (m) for linear and neural network classifiers. On Gaussian problems, the resulting error-confidence (EC) curves demonstrate that the PAB bounds are extremely conservative. Asymptotic statistics predicts a linear relationship between the logarithms of the average error and the number of training patterns. For low Bayes error rates we found excellent agreement between the prediction and the linear discriminant performance. At higher Bayes error rates we still found a linear relationship, but with a shallower slope than the predicted -1. When the underlying true model is a three-layer network, the EC curves show a greater dependence on classifier capacity, and the linear predictions no longer seem to hold.
Database: OpenAIRE
External link:
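
The description above defines the error-confidence measure and the predicted log-log slope of roughly -1 between average excess error and training-set size. Below is a minimal simulation sketch of that setup, not the authors' code: a two-class Gaussian problem with a plug-in nearest-mean (linear) classifier, a Monte Carlo estimate of the error-confidence at a tolerance ε, and a least-squares fit of the log-log slope. All parameters (dimension, class separation, ε, the values of m, the number of trials) are illustrative assumptions rather than settings from the paper.

```python
"""
Minimal sketch (not the authors' code): Monte Carlo estimate of an
error-confidence (EC) curve for a two-class Gaussian problem with a
plug-in nearest-mean (linear) classifier, plus a log-log slope check
of average excess error versus training-set size m.
All problem parameters below are illustrative assumptions.
"""
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

def Phi(x):
    # Standard normal CDF.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

d = 5                                   # input dimension (assumed)
delta = 2.0                             # distance between class means (assumed)
mu0 = np.zeros(d)
mu1 = np.zeros(d); mu1[0] = delta
bayes_error = Phi(-delta / 2.0)         # E_B for equal priors, identity covariance

eps = 0.01                              # tolerance around E_B (assumed)
trials = 2000                           # training sets drawn per value of m

def true_error(m0_hat, m1_hat):
    """Exact generalization error of the hyperplane midway between the
    estimated class means, computed from the known class-conditional Gaussians."""
    w = m1_hat - m0_hat
    mid = 0.5 * (m0_hat + m1_hat)
    norm_w = np.linalg.norm(w)
    if norm_w == 0.0:
        return 0.5
    e1 = Phi(np.dot(w, mid - mu1) / norm_w)   # class-1 points misclassified
    e0 = Phi(np.dot(w, mu0 - mid) / norm_w)   # class-0 points misclassified
    return 0.5 * (e0 + e1)

ms, avg_excess, confidence = [], [], []
for m in [16, 32, 64, 128, 256, 512]:          # training patterns per class
    errs = np.empty(trials)
    for t in range(trials):
        x0 = rng.normal(mu0, 1.0, size=(m, d))
        x1 = rng.normal(mu1, 1.0, size=(m, d))
        errs[t] = true_error(x0.mean(axis=0), x1.mean(axis=0))
    ms.append(m)
    avg_excess.append(errs.mean() - bayes_error)
    confidence.append(np.mean(errs <= bayes_error + eps))  # EC at this m

# Least-squares slope of log(average excess error) vs log(m);
# the asymptotic prediction discussed in the paper is a slope near -1.
slope = np.polyfit(np.log(ms), np.log(avg_excess), 1)[0]

for m, c, e in zip(ms, confidence, avg_excess):
    print(f"m={m:4d}  error-confidence={c:.3f}  avg excess error={e:.4f}")
print(f"fitted log-log slope: {slope:.2f}")
```

Since the plug-in classifier's excess error on such a problem shrinks roughly like O(d/m), the fitted slope should come out near -1, the regime in which the description above reports good agreement for low Bayes error rates; the PAB-style bounds themselves are not computed in this sketch.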