Description: |
A popular topic of debate among baseball fans is the prospective Hall of Fame status of current and recently retired players. A player's probability of enshrinement is likely to be affected by a large number of variables, and its estimation can be approached with machine learning methods. In particular, I consider the use of random forests for this purpose. A random forest may be treated as a black-box method for predicting the probability of Hall of Fame induction, but a number of parameters must be chosen before the forest can be grown. These parameters govern both the nuts and bolts of constructing the trees that make up the forest and the choice among possible predictor variables. For example, one candidate predictor is a measure of a player's tendency to produce seasons with many home runs, and there are multiple competing ways to quantify this. Furthermore, certain deterministic methods of searching the parameter space are partially undermined by the randomness underlying the forest's construction: by sheer luck, two forests grown with the same parameters may differ in quality of fit. Using simulated annealing, I move through the parameter space in a stochastic fashion, trying many forests and sometimes accepting a parameter set even though its fit appears worse than that of its predecessors. Since class probabilities derived from the terminal-node votes of a classification forest tend to be too moderate, the output of each candidate forest is fed into a logistic regression to produce the final probability estimates. From among four simulated annealing runs, the forest with the smallest mean squared error was selected, and analysis of the forests near it in its simulated annealing run indicates that its selection was probably not due to extraordinary "luck." Predictions performed using the out-of-bag samples correctly identify 75% of Baseball Writers' Association of America Hall of Fame selections, while misclassifying only 1% of non-selections. Results indicate a smaller mean squared error than a previous neural network approach, although the large number of forests tried and discarded raises concerns about overfitting in this case.
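The core loop described above — grow a forest for a candidate parameter set, recalibrate its out-of-bag vote fractions with a logistic regression, score the calibrated probabilities by mean squared error, and let simulated annealing decide whether to move — can be sketched briefly. The sketch below is illustrative only, assuming scikit-learn and synthetic data; the parameter space (`max_features`, `min_samples_leaf`), cooling schedule, and all settings are hypothetical stand-ins, not the choices made in the paper.

```python
import math
import random

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = random.Random(0)

# Synthetic stand-in for the player data: a rare positive class, mimicking
# the small fraction of players who are actually inducted.
X, y = make_classification(n_samples=400, n_features=10, weights=[0.9],
                           random_state=0)

def calibrated_oob_mse(params):
    """Grow a forest with the given tuning parameters, recalibrate its
    out-of-bag vote fractions with a logistic regression (raw vote
    fractions tend to be too moderate), and return the mean squared
    error of the calibrated probabilities (smaller is better)."""
    forest = RandomForestClassifier(
        n_estimators=300, oob_score=True,
        max_features=params["max_features"],
        min_samples_leaf=params["min_samples_leaf"],
        random_state=rng.randrange(2**31)).fit(X, y)
    votes = forest.oob_decision_function_[:, 1].reshape(-1, 1)
    p = LogisticRegression().fit(votes, y).predict_proba(votes)[:, 1]
    return float(np.mean((p - y) ** 2))

# Simulated annealing over the forest's tuning parameters: propose a
# neighboring parameter set, and accept it even when its fit looks worse,
# with a probability that shrinks as the temperature cools.
bounds = {"max_features": (1, X.shape[1]), "min_samples_leaf": (1, 20)}
params = {"max_features": 3, "min_samples_leaf": 5}
mse = calibrated_oob_mse(params)
best, best_mse = dict(params), mse
for step in range(50):
    temp = 0.01 * 0.95 ** step
    proposal = dict(params)
    key = rng.choice(sorted(proposal))
    lo, hi = bounds[key]
    proposal[key] = min(hi, max(lo, proposal[key] + rng.choice([-1, 1])))
    new_mse = calibrated_oob_mse(proposal)
    if new_mse < mse or rng.random() < math.exp((mse - new_mse) / temp):
        params, mse = proposal, new_mse
        if mse < best_mse:
            best, best_mse = dict(params), mse

print("best parameters:", best, "oob mse: %.4f" % best_mse)
```

The acceptance rule `exp((mse - new_mse) / temp)` is the standard Metropolis criterion: occasionally accepting an apparently worse forest is what protects the search against the run-to-run noise noted above, where two forests grown with identical parameters can differ in fit by chance.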