Information-theoretic analysis of stability and bias of learning algorithms
Author: Yihong Wu, Matthew Tsao, Alexander Rakhlin, Maxim Raginsky, Aolin Xu
Year of publication: 2016
Subject: Computer science; Weighted Majority Algorithm; Active learning (machine learning); Algorithmic learning theory; Stability (learning theory); Online machine learning; Multi-task learning; Computational learning theory; Instance-based learning; Artificial intelligence
Source: ITW (IEEE Information Theory Workshop)
DOI: 10.1109/itw.2016.7606789
Description: Machine learning algorithms can be viewed as stochastic transformations that map training data to hypotheses. Following Bousquet and Elisseeff, we say that such an algorithm is stable if its output does not depend too much on any individual training example. Since stability is closely connected to the generalization capabilities of learning algorithms, it is of theoretical and practical interest to obtain sharp quantitative estimates of the generalization bias of machine learning algorithms in terms of their stability properties. We propose several information-theoretic measures of algorithmic stability and use them to upper-bound the generalization bias of learning algorithms. Our framework is complementary to the information-theoretic methodology developed recently by Russo and Zou.
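For orientation, a representative bound from this line of work relates the expected generalization bias to the mutual information between the training sample S = (Z_1, ..., Z_n) and the output hypothesis W. The version below is from the closely related follow-up work of Xu and Raginsky, stated here as an illustration rather than as the exact result of this paper: if the loss ℓ(w, Z) is σ-sub-Gaussian under the data distribution μ for every hypothesis w, then

```latex
\left|\,\mathbb{E}\big[L_\mu(W) - L_S(W)\big]\right|
\;\le\; \sqrt{\frac{2\sigma^{2}}{n}\, I(S;W)},
```

where L_μ(W) is the population risk and L_S(W) the empirical risk on S.

Such a bound can be sanity-checked numerically. The sketch below is a minimal Monte Carlo experiment, assuming a toy "noisy empirical mean" algorithm on Gaussian data and a squared loss truncated to [0, 1] (so the loss is 1/2-sub-Gaussian and I(S; W) has a closed form); none of these modeling choices come from the paper itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative constants, not taken from the paper.
n, sigma_z, sigma_w, trials = 20, 1.0, 0.3, 20_000

def loss(w, z):
    # Squared error truncated to [0, 1]; a [0, 1]-valued loss is 1/2-sub-Gaussian.
    return np.minimum((w - z) ** 2, 1.0)

# Noisy empirical mean: W = mean(S) + N(0, sigma_w^2). The added noise keeps
# I(S; W) finite (a deterministic continuous-output algorithm would give infinity).
S = rng.normal(0.0, sigma_z, size=(trials, n))
W = S.mean(axis=1) + rng.normal(0.0, sigma_w, size=trials)

train_risk = loss(W[:, None], S).mean(axis=1)
test_risk = loss(W[:, None], rng.normal(0.0, sigma_z, size=(trials, 200))).mean(axis=1)
gap = (test_risk - train_risk).mean()  # Monte Carlo estimate of the expected bias

# W depends on S only through mean(S) ~ N(0, sigma_z^2 / n), so I(S; W) reduces
# to the additive-Gaussian-channel formula (in nats, matching the bound above).
I_SW = 0.5 * np.log(1.0 + (sigma_z ** 2 / n) / sigma_w ** 2)
bound = np.sqrt(2 * 0.5 ** 2 * I_SW / n)

print(f"estimated generalization bias: {gap:+.4f}")
print(f"mutual-information bound:      {bound:.4f}")
```

With these constants the estimated bias should land comfortably below the bound, consistent with the theory.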
Database: OpenAIRE
External link: