Cheap Bandits
Author: Hanawal, Manjesh Kumar; Saligrama, Venkatesh; Valko, Michal; Munos, Rémi
Year of publication: 2015
Subject:
Document type: Working Paper
Description: We consider stochastic sequential learning problems where the learner can observe the \textit{average reward of several actions}. Such a setting is interesting in many applications involving monitoring and surveillance, where the set of actions to observe represents some (geographical) area. The importance of this setting is that in these applications it is actually \textit{cheaper} to observe the average reward of a group of actions rather than the reward of a single action. We show that when the reward is \textit{smooth} over a given graph representing the neighboring actions, we can maximize the cumulative reward of learning while \textit{minimizing the sensing cost}. In this paper we propose CheapUCB, an algorithm that matches the regret guarantees of the known algorithms for this setting and at the same time guarantees a linear cost gain over them. As a by-product of our analysis, we establish an $\Omega(\sqrt{dT})$ lower bound on the cumulative regret of spectral bandits for a class of graphs with effective dimension $d$.
Comment: To be presented at ICML 2015
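To make the "average reward of a group of actions" feedback model in the description concrete, here is a minimal sketch. It is not the paper's CheapUCB algorithm: the ring graph, the smooth reward function, the noise level, and the plain UCB1 index used below are all illustrative assumptions, chosen only to show a learner that senses a noisy group average (a node plus its neighbors) instead of a single action's reward.

```python
import numpy as np

# Hypothetical illustration only: a generic UCB1 loop in which each "arm" is a
# graph node together with its neighbors, and the learner observes a noisy
# *average* reward over that group. Not the paper's CheapUCB; graph, reward
# function, and noise level are made-up assumptions.

rng = np.random.default_rng(0)

# Small ring graph: node i is adjacent to i-1 and i+1 (indices mod n).
n = 10
adjacency = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

# A reward signal that is smooth over the ring (neighboring nodes have similar means).
true_means = np.cos(2 * np.pi * np.arange(n) / n)

def observe_group_average(node: int) -> float:
    """Noisy average reward of a node and its neighbors (the 'group' feedback)."""
    group = [node] + adjacency[node]
    return float(np.mean(true_means[group]) + 0.1 * rng.standard_normal())

# Plain UCB1 over the n groups.
T = 2000
counts = np.zeros(n)
sums = np.zeros(n)

for t in range(1, T + 1):
    if t <= n:  # play each group once to initialize
        arm = t - 1
    else:
        ucb = sums / counts + np.sqrt(2 * np.log(t) / counts)
        arm = int(np.argmax(ucb))
    reward = observe_group_average(arm)
    counts[arm] += 1
    sums[arm] += reward

print("most played group (node + neighbors):", int(np.argmax(counts)))
print("node with highest true mean:", int(np.argmax(true_means)))
```

Because the reward is smooth over the graph, the group average is close to the reward of its central node, which is why sensing groups can be informative while, per the paper's motivation, being cheaper than sensing individual actions.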
Database: arXiv
External link: