Research talk: Safe reinforcement learning using advantage-based intervention

Published February 8, 2022, 18:06
Speaker: Nolan Wagener, Graduate Student, Georgia Tech

Many sequential decision problems involve finding a policy that maximizes total reward while obeying safety constraints. Although much recent research has focused on developing safe reinforcement learning (RL) algorithms that produce a safe policy after training, ensuring safety during training as well remains an open problem. A fundamental challenge is performing exploration while still satisfying constraints in an unknown Markov decision process (MDP). In this work, we address this problem for the chance-constrained setting. We propose a new algorithm, SAILR, that uses an intervention mechanism based on advantage functions to keep the agent safe throughout training and optimizes the agent's policy using off-the-shelf RL algorithms designed for unconstrained MDPs. Our method comes with strong guarantees on safety during both training and deployment (that is, after training and without the intervention mechanism) and on policy performance relative to the optimal safety-constrained policy. In our experiments, we show that SAILR violates constraints far less during training than standard safe RL and constrained MDP approaches and converges to a well-performing policy that can be deployed safely without intervention.
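The abstract describes the idea at a high level: an intervention mechanism watches the learner's proposed actions and steps in when an action looks unsafe according to an advantage estimate, while an ordinary unconstrained RL algorithm optimizes the policy. Below is a minimal Python sketch of such an advantage-based intervention gate. The names `backup_policy`, `safety_advantage`, and `threshold` are illustrative assumptions for this sketch, not SAILR's published interface; the talk defines the actual intervention rule via advantage functions of a safe backup policy.

```python
# Minimal sketch of an advantage-based intervention gate (not SAILR's code).
# Assumptions (hypothetical, for illustration only):
#   backup_policy(state)           -> a fallback action known to be safe
#   safety_advantage(state, action)-> estimated increase in safety risk of
#                                     `action` relative to the backup policy
class AdvantageInterventionWrapper:
    """Wraps an environment so that proposed actions whose estimated safety
    advantage exceeds a threshold are replaced by a backup action."""

    def __init__(self, env, backup_policy, safety_advantage, threshold=0.0):
        self.env = env
        self.backup_policy = backup_policy
        self.safety_advantage = safety_advantage
        self.threshold = threshold
        self._state = None

    def reset(self):
        self._state = self.env.reset()
        return self._state

    def step(self, action):
        # Intervene if the proposed action looks riskier than following
        # the backup policy by more than the allowed threshold.
        intervened = self.safety_advantage(self._state, action) > self.threshold
        if intervened:
            action = self.backup_policy(self._state)
        next_state, reward, done, info = self.env.step(action)
        info = dict(info, intervened=intervened)
        self._state = next_state
        return next_state, reward, done, info
```

In a training loop, the wrapped environment can be handed to an off-the-shelf RL algorithm unchanged; counting how often `intervened` appears in `info` gives a simple measure of how frequently the gate fires during training.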

Learn more about the 2021 Microsoft Research Summit: Aka.ms/researchsummit