Microsoft Research
Published July 23, 2020, 18:15
Reinforcement Learning (RL) agents must learn to make useful decisions through action and observation alone. To be effective problem solvers, RL agents must efficiently explore vast environments, assign credit from delayed feedback, and generalize to new experiences, all while making use of limited data, computational resources, and perceptual bandwidth.
In this talk, I discuss the role that abstraction can play in overcoming these fundamental challenges of RL. I first introduce classes of state abstraction that induce a trade-off between optimality and the size of an agent's resulting abstract model, yielding a practical algorithm for learning useful and compact representations from an expert. Moreover, I show how these learned, simple representations can underlie efficient learning in complex environments. Second, I analyze the problem of searching for abstract actions that make planning more efficient. I present new computational complexity results proving that it is NP-hard to find the set of abstract actions that minimizes planning time, but show that this set can be approximated in polynomial time. I close by discussing a route to state-action abstractions that enjoy all of these same desirable properties. Collectively, these results establish a principled foundation for discovering abstractions that minimize the difficulty of high-quality learning and decision-making.
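To make the first idea concrete, here is a minimal sketch of one approximate state-abstraction class, assuming a tabular Q-function is already available: ground states whose Q-values agree within epsilon on every action are aggregated together. The function name, data layout, and epsilon threshold are illustrative assumptions, not the talk's implementation.

def abstract_states(Q, epsilon):
    # Aggregate ground states whose Q-values agree within epsilon on
    # every action. Q maps state -> {action: value}; returns a dict
    # mapping each ground state to an abstract cluster id.
    reps = []              # one representative ground state per cluster
    phi = {}               # the abstraction: ground state -> cluster id
    for s in Q:
        for i, rep in enumerate(reps):
            if all(abs(Q[s][a] - Q[rep][a]) <= epsilon for a in Q[s]):
                phi[s] = i
                break
        else:
            reps.append(s)
            phi[s] = len(reps) - 1
    return phi

# epsilon = 0 recovers exact aggregation (no optimality loss, largest
# model); larger epsilon shrinks the abstract model at the cost of a
# looser bound on the value lost by the abstract policy -- the trade-off
# described in the abstract above.
Q = {"s0": {"L": 0.90, "R": 0.10},
     "s1": {"L": 0.89, "R": 0.11},   # within 0.05 of s0 on both actions
     "s2": {"L": 0.10, "R": 0.95}}
print(abstract_states(Q, epsilon=0.05))   # {'s0': 0, 's1': 0, 's2': 1}

For the action-abstraction result, since choosing the option set that minimizes planning time is NP-hard, one natural polynomial-time stand-in is a greedy loop that repeatedly adds whichever candidate option most reduces planning cost. The sketch below measures planning cost as synchronous value-iteration sweeps on a toy deterministic chain MDP with "point options" that jump ahead; the chain setup and all names are illustrative assumptions, not the approximation algorithm from the talk.

import itertools

def vi_sweeps(n_states, options, gamma=0.95, tol=1e-6):
    # Count synchronous value-iteration sweeps until convergence on a
    # chain MDP: the primitive action moves one state right, and
    # reaching the final state yields reward 1. `options` is a set of
    # (src, dst) point options that jump from src directly to dst.
    V = [0.0] * n_states
    for sweep in itertools.count(1):
        delta, V_new = 0.0, V[:]
        for s in range(n_states - 1):
            succs = [s + 1] + [d for (src, d) in options if src == s]
            best = max((1.0 if s2 == n_states - 1 else 0.0) + gamma * V[s2]
                       for s2 in succs)
            delta = max(delta, abs(best - V[s]))
            V_new[s] = best
        V = V_new
        if delta < tol:
            return sweep

def greedy_options(n_states, candidates, k):
    # Greedily pick k options, each time adding the candidate that most
    # reduces the sweep count -- a polynomial-time heuristic standing in
    # for the exact (NP-hard) selection problem.
    chosen = set()
    for _ in range(k):
        best = min(sorted(candidates - chosen),
                   key=lambda o: vi_sweeps(n_states, chosen | {o}))
        chosen.add(best)
    return chosen

candidates = {(0, 5), (0, 9), (2, 7), (5, 9)}
print(greedy_options(10, candidates, k=2))   # e.g. {(2, 7), (5, 9)}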
See more at microsoft.com/en-us/research/v...