On The Hardness of Reinforcement Learning With Value-Function Approximation

Published June 19, 2019, 20:56
Value-function approximation methods that operate in batch mode are of foundational importance to reinforcement learning (RL). Finite-sample guarantees for these methods, which provide the theoretical backbone for empirical ("deep") RL today, crucially rely on strong representation assumptions, e.g., that the function class is closed under the Bellman update. Given that such assumptions are much stronger and less desirable than the ones needed for supervised learning (e.g., realizability), it is important to confirm the hardness of learning in their absence. Such a hardness result would also be a crucial piece of a bigger picture on the tractability of various RL settings. Unfortunately, while algorithm-specific lower bounds have existed for decades, the information-theoretic hardness remains a mystery. In this talk I will introduce the mathematical setup for studying value-function approximation, present our findings from the investigation of the hardness conjecture, and discuss connections to related results and open problems and their implications. Part of the talk is based on joint work with my student Jinglin Chen, accepted to ICML-19.
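For context, a standard formalization of the two representation assumptions contrasted above; the notation (function class \( \mathcal{F} \), Bellman optimality operator \( \mathcal{T} \), optimal action-value function \( Q^\star \)) follows common convention and is not taken from the talk itself:

\[
(\mathcal{T} f)(s,a) \;=\; R(s,a) \;+\; \gamma\, \mathbb{E}_{s' \sim P(\cdot \mid s,a)}\Big[\max_{a'} f(s',a')\Big]
\]
\[
\text{Realizability:}\quad Q^\star \in \mathcal{F}
\qquad\qquad
\text{Completeness (closed under Bellman update):}\quad \mathcal{T} f \in \mathcal{F} \;\; \forall f \in \mathcal{F}
\]

Realizability asks only that the single function \( Q^\star \) lie in the class, whereas completeness requires the class to map into itself under \( \mathcal{T} \), which is why it is regarded as the much stronger assumption in the abstract's framing.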

See more at microsoft.com/en-us/research/v...