Tight Complexity Bounds for Composite Optimization

Published October 20, 2016, 20:26
We provide tight upper and lower bounds on the complexity of minimizing the average of $m$ convex functions using gradient and prox oracles for the component functions. We show a significant gap between the complexity of deterministic and randomized optimization. For smooth functions, we show that accelerated gradient descent and an accelerated variant of SVRG are optimal in the deterministic and randomized settings respectively, and that a gradient oracle alone suffices for the optimal rate. For non-smooth functions, access to a prox oracle reduces the complexity, and we present optimal methods based on smoothing AGD that improve over methods using only gradient accesses.
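For reference, the finite-sum problem discussed above can be written as below; the notation ($d$ for the dimension, $\eta$ for the prox parameter) is ours, not taken from the talk:

$$\min_{x \in \mathbb{R}^d} \; F(x) = \frac{1}{m} \sum_{i=1}^{m} f_i(x),$$

where each $f_i$ is convex and the oracle for component $i$ returns the gradient $\nabla f_i(x)$ and, in the prox-oracle model, the proximal point

$$\mathrm{prox}_{\eta f_i}(x) = \arg\min_{u} \left\{ f_i(u) + \tfrac{1}{2\eta} \| u - x \|^2 \right\}.$$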

See more on this video at microsoft.com/en-us/research/v...