NIPS: Oral Session 7 - Odalric-Ambrym Maillard

Published August 18, 2016, 18:38
In Reinforcement Learning (RL), state-of-the-art algorithms require a large number of samples per state-action pair to estimate the transition kernel p. In many problems, a good approximation of p is not needed. For instance, if from one state-action pair (s,a) one can only transit to states with the same value, learning p(·|s,a) accurately is irrelevant (only its support matters). This paper aims at capturing such behavior by defining a novel hardness measure for Markov Decision Processes (MDPs) we call the distribution-norm. The distribution-norm w.r.t. a measure ν is defined on zero ν-mean functions f by the standard variation of f with respect to ν. We first provide a concentration inequality for the dual of the distribution-norm. This allows us to replace the generic but loose ||·||_1 concentration inequalities used in most previous analyses of RL algorithms, to benefit from this new hardness measure. We then show that several common RL benchmarks have low hardness when measured using the new norm. The distribution-norm captures finer properties than the number of states or the diameter and can be used to assess the difficulty of MDPs.
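The abstract states the definition only in words; the sketch below spells out one way to read it (the notation ||f||_ν, the dual norm, and the final bound are my own labels and a plausible reconstruction, not taken verbatim from the talk).

% Sketch, assuming the verbal definition above: for a probability measure \nu
% and a zero \nu-mean function f, the distribution-norm is the standard
% deviation of f under \nu, and the relevant error term is its dual.
\[
  \|f\|_{\nu} = \sqrt{\mathbb{E}_{\nu}\!\left[f^{2}\right]}
  \quad \text{for } \mathbb{E}_{\nu}[f] = 0,
  \qquad
  \|q\|_{\nu}^{*} = \sup_{\|f\|_{\nu} \le 1} \bigl|\langle q, f \rangle\bigr|.
\]
% Since p(.|s,a) and its estimate \hat p(.|s,a) are both probability
% distributions, pairing their difference with the value function V only
% "sees" the centered part of V, giving the duality bound
\[
  \bigl|\langle p(\cdot\mid s,a) - \hat p(\cdot\mid s,a),\, V \rangle\bigr|
  \;\le\;
  \|p(\cdot\mid s,a) - \hat p(\cdot\mid s,a)\|_{\nu}^{*}\,
  \|V - \mathbb{E}_{\nu}[V]\|_{\nu}.
\]

Under this reading, the bound is small whenever V is nearly constant on the support of p(·|s,a), which is exactly the situation described in the abstract where estimating p accurately in ||·||_1 is unnecessary.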