Online Learning and Optimization from Continuous to Discrete Time

Published August 11, 2016, 8:16
Many discrete algorithms for convex optimization and online learning can be interpreted as discretizations of continuous-time processes. Perhaps the simplest and oldest example is gradient descent, which is the discretization of the ODE $\dot X = -\nabla f(X)$. Studying the continuous-time process offers many advantages: the analysis is often simple and elegant, it provides insights into the discrete process, and it can help streamline the design of algorithms (by performing the design in the continuous domain and then discretizing). In this talk, I will present two such examples.

In the first, I will show how some (stochastic) online learning algorithms can be obtained by discretizing an ODE on the simplex known as the replicator dynamics. I will review properties of the ODE, then give sufficient conditions for convergence of the discrete process by relating it to the solution trajectories of the ODE.

In the second example, I will show how we can design an ODE for accelerated first-order optimization of smooth convex functions. The continuous-time design relies on an inverse Lyapunov argument: we start from an energy function that encodes the constraints of the problem and the desired convergence rate, then design dynamics tailored to that energy function. By carefully discretizing the ODE, we obtain a family of accelerated algorithms with an optimal rate of convergence.
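As a concrete sketch of the discretization viewpoint (in generic notation, not necessarily the conventions used in the talk): applying a forward-Euler step of size $h > 0$ to the gradient-flow ODE recovers the gradient-descent update,

$$X(t+h) \approx X(t) + h\,\dot X(t) = X(t) - h\,\nabla f(X(t)) \quad\Longrightarrow\quad x_{k+1} = x_k - h\,\nabla f(x_k).$$

Similarly, in one common convention for the first example, the replicator dynamics on the simplex for a loss vector $\ell$ reads $\dot x_i = x_i\big(\langle \ell, x\rangle - \ell_i\big)$, and the multiplicative-weights (Hedge) update with step size $\eta$,

$$x_{k+1,i} = \frac{x_{k,i}\,e^{-\eta\,\ell_{k,i}}}{\sum_j x_{k,j}\,e^{-\eta\,\ell_{k,j}}} \approx x_{k,i} + \eta\,x_{k,i}\big(\langle \ell_k, x_k\rangle - \ell_{k,i}\big),$$

agrees to first order in $\eta$ with a forward-Euler step of that ODE; this is the flavor of correspondence developed in the first example.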