Linear bandits with limited adaptivity and learning distributional optimal design
Author: | Yuan Zhou, Jiaqi Yang, Yufei Ruan |
---|---|
Year of publication: | 2021 |
Subject: |
Machine Learning (cs.LG); Machine Learning (stat.ML); Data Structures and Algorithms (cs.DS); optimal design; design of experiments; mathematical optimization; operations research; online learning; regret |
Source: | STOC |
Description: | Motivated by practical needs such as large-scale learning, we study the impact of adaptivity constraints on linear contextual bandits, a central problem in online active learning. We consider two popular limited-adaptivity models from the literature: batch learning and rare policy switches. We show that, when the context vectors are adversarially chosen in $d$-dimensional linear contextual bandits, the learner needs $O(d \log d \log T)$ policy switches to achieve the minimax-optimal regret, and this is optimal up to $\mathrm{poly}(\log d, \log \log T)$ factors; for stochastic context vectors, even in the more restricted batch learning model, only $O(\log \log T)$ batches are needed to achieve the optimal regret. Together with known results in the literature, our results give a complete picture of the adaptivity constraints in linear contextual bandits. Along the way, we propose the distributional optimal design, a natural extension of classical optimal experiment design, and provide a statistically and computationally efficient learning algorithm for the problem, which may be of independent interest. |
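The distributional optimal design proposed in the paper extends the classical G-optimal experiment design. As background only (this is not the paper's algorithm), a minimal sketch of the standard Frank-Wolfe iteration for G-optimal design, whose optimum by the Kiefer-Wolfowitz theorem has worst-case leverage exactly $d$; all names and parameters here are illustrative:

```python
import numpy as np

def g_optimal_design(X, iters=1000):
    """Frank-Wolfe iteration for classical G-optimal design over
    the rows of X (n actions in d dimensions). Returns a design
    distribution lam over the n actions."""
    n, d = X.shape
    lam = np.full(n, 1.0 / n)  # start from the uniform design
    for _ in range(iters):
        # Information matrix of the current design.
        A = X.T @ (lam[:, None] * X)
        # Leverage x^T A^{-1} x of every action under the design.
        lev = np.einsum('ij,jk,ik->i', X, np.linalg.inv(A), X)
        i = int(np.argmax(lev))            # worst-covered direction
        g = lev[i]                         # max leverage (>= d)
        # Exact line-search step toward the point mass on action i.
        gamma = (g / d - 1.0) / (g - 1.0)
        lam *= (1.0 - gamma)
        lam[i] += gamma
    return lam
```

At the optimum the maximum leverage approaches $d$, so pulling $O(d/\varepsilon^2)$ samples from the returned distribution estimates every action's reward to accuracy $\varepsilon$; the paper's distributional variant must additionally handle contexts drawn from an unknown distribution.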
Database: | OpenAIRE |
External link: |