Information-theoretic analysis of stability and bias of learning algorithms

Authors: Yihong Wu, Matthew Tsao, Alexander Rakhlin, Maxim Raginsky, Aolin Xu
Year of publication: 2016
Source: ITW (IEEE Information Theory Workshop)
DOI: 10.1109/itw.2016.7606789
Description: Machine learning algorithms can be viewed as stochastic transformations that map training data to hypotheses. Following Bousquet and Elisseeff, we say that such an algorithm is stable if its output does not depend too much on any individual training example. Since stability is closely connected to the generalization capabilities of learning algorithms, it is of theoretical and practical interest to obtain sharp quantitative estimates on the generalization bias of machine learning algorithms in terms of their stability properties. We propose several information-theoretic measures of algorithmic stability and use them to upper-bound the generalization bias of learning algorithms. Our framework is complementary to the information-theoretic methodology developed recently by Russo and Zou.
Database: OpenAIRE
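A minimal sketch of the stability notion the abstract attributes to Bousquet and Elisseeff, not the paper's own information-theoretic measures: a toy learner (a shrunken empirical mean, an assumed example) whose output changes little when any single training example is replaced, so its worst-case loss change shrinks as the sample grows.

```python
import random

# Hedged illustration (not the paper's method): Bousquet-Elisseeff-style
# uniform stability for a toy learner, the shrunken empirical mean
#     A(S) = sum(S) / (n + lam),
# whose output moves by at most O(1/n) when one example in S is replaced.

def learn(sample, lam=1.0):
    """Shrunken empirical mean; lam > 0 is an illustrative regularizer."""
    return sum(sample) / (len(sample) + lam)

def loss(w, z):
    """Squared loss of hypothesis w on example z."""
    return (w - z) ** 2

def empirical_stability(sample, z_pool, lam=1.0):
    """Largest observed loss change when a single training example is
    swapped for another point from z_pool, probed at points in z_pool."""
    w = learn(sample, lam)
    worst = 0.0
    for i in range(len(sample)):
        for z_new in z_pool:
            swapped = sample[:i] + [z_new] + sample[i + 1:]
            w_i = learn(swapped, lam)
            for z in z_pool:
                worst = max(worst, abs(loss(w, z) - loss(w_i, z)))
    return worst

if __name__ == "__main__":
    random.seed(0)
    for n in (10, 100, 1000):
        sample = [random.random() for _ in range(n)]  # data in [0, 1]
        z_pool = [0.0, 0.5, 1.0]                      # probe points
        print(n, empirical_stability(sample, z_pool))
```

The printed stability estimate decays roughly like 1/n, which is the kind of quantitative stability property the abstract says can be converted into a bound on the generalization bias.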