Time discretization invariance in Machine Learning, applications to reinforcement learning...
Microsoft Research
Published September 4, 2019, 15:33
While computers are well equipped to deal with discrete flows of data, the real world often provides intrinsically continuous-time data sequences, e.g. visual and sensory streams, time series, or state variables in continuous control environments. Most algorithms, and notably machine learning approaches, require discretization of continuous-time data flows, introducing a notion of processing discretization timestep. Using smaller discretization timesteps usually provides more information to the processing algorithm and should normally be associated with better performance. However, many commonly used algorithms fail to follow this trend: their performance decreases with smaller discretization timesteps and drops dramatically as the discretization timestep approaches 0. In this talk, I will focus on the design of time discretization invariant algorithms, i.e. algorithms that work for any given time discretization, and notably remain viable for very small time discretizations. Such algorithms often rely on the design of a theoretical, inherently time-continuous, but intractable algorithm, which is then discretized. Algorithms that don't scale to small time discretizations typically don't admit such a time-continuous limit algorithm. The talk will focus on two specific applications, namely the design of Q-learning approaches robust to time discretization, and the analysis of time discretization invariant architectures for recurrent neural networks. Besides the practical benefits, I will show that time discretization invariant designs provide interesting theoretical insights and, for instance, lead to rethinking some widespread exploration strategies, or shed new light on the use of gating mechanisms in recurrent networks.
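As a rough illustration of the Q-learning point (not code from the talk), the sketch below shows why the naive one-step Q target loses its action dependence as the timestep dt goes to 0, and one commonly discussed discretization-aware remedy: keeping the learned advantage on an O(1) scale via Q(s, a) = V(s) + dt * A(s, a), with per-step reward r * dt and discount gamma ** dt. The function names and constants are illustrative assumptions.

```python
# Minimal sketch: naive Q targets collapse as dt -> 0, so the greedy argmax over
# actions becomes dominated by noise; a timestep-aware parameterization
# Q = V + dt * A keeps the action-dependent part on an O(1) scale.
# Names (naive_q_target, advantage_q) are illustrative, not from the talk.

def naive_q_target(q_next_max, reward_rate, dt, gamma):
    """One-step Q target with per-step reward r*dt and discount gamma**dt.
    As dt -> 0 this tends to q_next_max, independent of the action taken."""
    return reward_rate * dt + (gamma ** dt) * q_next_max

def advantage_q(v, advantage, dt):
    """Timestep-aware parameterization Q(s, a) = V(s) + dt * A(s, a).
    The learned advantage A stays O(1) in dt, so argmax_a A(s, a)
    remains meaningful even for very small timesteps."""
    return v + dt * advantage

if __name__ == "__main__":
    gamma, reward_rate, q_next_max = 0.99, 1.0, 10.0
    for dt in [1.0, 0.1, 0.01, 0.001]:
        target = naive_q_target(q_next_max, reward_rate, dt, gamma)
        # The action-dependent part of the target shrinks like O(dt):
        print(f"dt={dt:6.3f}  target={target:.4f}  "
              f"action-dependent gap={target - q_next_max:+.5f}")
```

Running this shows the gap between the target and the action-independent baseline shrinking roughly linearly with dt, which is the collapse the talk attributes to algorithms lacking a well-defined continuous-time limit.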
See more at microsoft.com/en-us/research/v...