Bandit Learning with Switching Costs

Published June 28, 2016, 20:25
Consider the adversarial two-armed bandit problem in a setting where the player incurs a unit cost each time he switches actions. We prove that the player's T-round regret in this setting (i.e., his excess loss compared to the better of the two actions) grows as T^{2/3} (up to a logarithmic factor). In the corresponding full-information problem, the minimax regret is known to grow at the slower rate of T^{1/2}. The difference between these two rates shows that learning with bandit feedback (i.e., observing only the loss of the action the player chose, not that of the alternative) can be significantly harder than learning with full-information feedback. It also shows that, without switching costs, any regret-minimizing algorithm for the bandit problem must sometimes switch actions very frequently. The proof is based on an information-theoretic analysis of a loss process arising from a multi-scale random walk. (Joint work with Ofer Dekel, Jian Ding and Tomer Koren; to appear in STOC 2014, available at arxiv.org/abs/1310.2997.)
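
The T^{2/3} rate on the achievable side can be illustrated with the standard batching idea (this is not the algorithm from the paper; names and parameters below are illustrative): group the T rounds into blocks of length about T^{1/3}, hold one arm fixed within each block, and run Exp3 across blocks. The number of switches is then at most about T^{2/3}, and Exp3's regret over roughly T^{2/3} blocks, scaled by the block length, is also on the order of T^{2/3} up to logarithmic factors. A minimal Python sketch, assuming an oblivious adversary and losses in [0, 1]:

```python
import numpy as np

def batched_exp3(losses, block_len):
    """Illustrative batching sketch (not the paper's algorithm): Exp3 over blocks,
    one fixed arm per block, bandit feedback only.

    losses: (T, K) array of adversarial losses in [0, 1].
    block_len: number of rounds each chosen arm is held fixed.
    Returns the player's total loss and the number of switches.
    """
    T, K = losses.shape
    n_blocks = T // block_len
    eta = np.sqrt(2.0 * np.log(K) / (n_blocks * K))   # standard Exp3-style learning rate
    est_loss = np.zeros(K)                             # cumulative importance-weighted loss estimates
    rng = np.random.default_rng(0)
    total_loss, switches, prev_arm = 0.0, 0, None

    for b in range(n_blocks):
        probs = np.exp(-eta * (est_loss - est_loss.min()))
        probs /= probs.sum()
        arm = rng.choice(K, p=probs)
        if prev_arm is not None and arm != prev_arm:
            switches += 1
        prev_arm = arm
        # Average loss of the played arm over this block (the only feedback observed).
        block_loss = losses[b * block_len:(b + 1) * block_len, arm].mean()
        total_loss += block_loss * block_len
        # Importance-weighted estimate, credited only to the arm that was played.
        est_loss[arm] += block_loss / probs[arm]

    return total_loss, switches

# Block length ~ T^(1/3) balances the Exp3 regret against the number of switches.
T, K = 100_000, 2
block_len = max(1, round(T ** (1 / 3)))
losses = np.random.default_rng(1).random((T, K))
loss, switches = batched_exp3(losses, block_len)
best = losses[: (T // block_len) * block_len].sum(axis=0).min()
print(f"player loss {loss:.0f}, best arm {best:.0f}, switches {switches}")
```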
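
The "multi-scale random walk" behind the lower bound is only named in the abstract; the following is a rough, hedged sketch of what such a loss process can look like (my reading of the construction — the exact parent function, noise scale, gap, and clipping may differ from the paper). Each round t is linked to a parent rho(t) = t - 2^{delta(t)}, where 2^{delta(t)} is the largest power of two dividing t, and W_t adds fresh Gaussian noise to W_{rho(t)}, so every W_t is a sum of only O(log T) increments. One arm's loss follows the walk and the better arm's loss is shifted down by a small gap eps:

```python
import numpy as np

def multiscale_walk_losses(T, sigma, eps, seed=0):
    """Illustrative two-armed loss process driven by a multi-scale random walk.

    Round t is linked to parent rho(t) = t - 2**delta(t), where 2**delta(t) is the
    largest power of two dividing t; W[t] = W[rho(t)] + Gaussian noise, so each W[t]
    depends on only O(log T) increments. All parameters here are illustrative.
    """
    rng = np.random.default_rng(seed)
    W = np.zeros(T + 1)
    for t in range(1, T + 1):
        delta = (t & -t).bit_length() - 1        # number of trailing zeros of t
        parent = t - (1 << delta)                # rho(t) = t - 2**delta(t)
        W[t] = W[parent] + rng.normal(0.0, sigma)
    base = np.clip(W[1:] + 0.5, 0.0, 1.0)        # arm 0: the walk, centered at 1/2
    better = np.clip(base - eps, 0.0, 1.0)       # arm 1: same walk, shifted down by eps
    return np.stack([base, better], axis=1)      # shape (T, 2)

# Example: T rounds with a small per-round gap between the two arms.
losses = multiscale_walk_losses(T=1 << 14, sigma=0.01, eps=0.005)
print(losses.shape, losses.min(), losses.max())
```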