On Gradient-Based Optimization: Accelerated, Stochastic and Nonconvex

Published 29 May 2018, 3:18
Many new theoretical challenges have arisen in the area of gradient-based optimization for large-scale statistical data analysis, driven by the needs of applications and the opportunities provided by new hardware and software platforms. I discuss several recent, related results in this area: (1) a new framework for understanding Nesterov acceleration, obtained by taking a continuous-time, Lagrangian/Hamiltonian/symplectic perspective, (2) a discussion of how to escape saddle points efficiently in nonconvex optimization, and (3) the acceleration of Langevin diffusion.
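Topic (1), Nesterov acceleration, can be illustrated with a minimal sketch: plain gradient descent versus Nesterov's accelerated gradient method on an ill-conditioned quadratic. The test function, step size, and the textbook `k/(k+3)` momentum schedule below are illustrative choices, not details taken from the talk.

```python
import numpy as np

# Hedged sketch: plain gradient descent vs. Nesterov acceleration on
#   f(x) = 0.5 * x^T A x,  A = diag(1, 1000),
# an ill-conditioned quadratic with minimizer x* = 0.

A = np.diag([1.0, 1000.0])
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x
L = 1000.0                       # Lipschitz constant of the gradient
x0 = np.array([1.0, 1.0])

# Plain gradient descent: x_{k+1} = x_k - (1/L) grad f(x_k)
x = x0.copy()
for _ in range(200):
    x = x - grad(x) / L
gd_val = f(x)

# Nesterov's method: gradient step taken at an extrapolated point y_k
x, x_prev = x0.copy(), x0.copy()
for k in range(200):
    y = x + (k / (k + 3)) * (x - x_prev)   # momentum / extrapolation
    x_prev = x
    x = y - grad(y) / L
nag_val = f(x)

print(f"gradient descent: f = {gd_val:.3e}")
print(f"Nesterov:         f = {nag_val:.3e}")
```

The continuous-time view discussed in the talk interprets the momentum step as a discretization of a second-order ODE; in this sketch the accelerated iterate reaches a much smaller function value than plain gradient descent in the same number of iterations.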

See more at microsoft.com/en-us/research/v...
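Topic (2), escaping saddle points, is often explained with a perturbation argument: gradient descent started exactly at a saddle has zero gradient and stalls, while a small random kick lets it leave along the negative-curvature direction. A minimal illustrative sketch (the function, threshold, and noise scale are assumptions, not the talk's algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

# f(x, y) = 0.5*x^2 - 0.5*y^2 has a saddle point at the origin:
# positive curvature along x, negative curvature along y.
def grad(p):
    x, y = p
    return np.array([x, -y])

def descend(p, steps=100, lr=0.1, perturb=False):
    """Gradient descent; optionally add a tiny random kick when the
    gradient is (near) zero, i.e. at a stationary point."""
    p = p.copy()
    for _ in range(steps):
        g = grad(p)
        if perturb and np.linalg.norm(g) < 1e-6:
            p = p + 1e-3 * rng.standard_normal(2)  # random perturbation
        else:
            p = p - lr * g
    return p

start = np.zeros(2)                   # initialized exactly at the saddle
stuck = descend(start)                # zero gradient: never moves
escaped = descend(start, perturb=True)

print("plain GD ends at:     ", stuck)
print("perturbed GD ends at: ", escaped)
```

After the kick, the component along the negative-curvature direction grows geometrically, so the perturbed iterate moves far from the saddle while the unperturbed one stays at the origin.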