Efficient Minimization of Risk Measures via Smoothing: Theory and Applications

Published 28 July 2016, 1:25
At the heart of most machine learning problems lies a regularized risk minimization problem. With the explosion of machine learning techniques and applications, it becomes imperative to devise faster algorithms for optimizing these objectives. We develop a novel smoothing strategy, motivated by Nesterov's accelerated gradient descent methods, which improves upon previous first-order algorithms and achieves the faster optimal convergence rate of $O(1/\sqrt{\epsilon})$ for various nonsmooth machine learning objectives, including binary linear SVMs and structured output prediction. Additionally, we show that such smoothing lets us attack multivariate measures such as the ROC score and the precision-recall break-even point, which are not additive over individual data points. We obtain orders-of-magnitude improvements in our experimental results. We will also show how these schemes can be used to obtain faster algorithms for certain computational geometry problems, such as finding minimum enclosing convex shapes.
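To make the idea concrete, here is a minimal sketch (not the authors' implementation) of the smoothing strategy for the binary linear SVM case: the nonsmooth hinge loss is replaced by a Nesterov-smoothed, Huber-like surrogate whose gradient is $1/\mu$-Lipschitz, and the resulting smooth regularized risk is minimized with Nesterov's accelerated gradient method. All function names, the quadratic prox-function, and the crude Lipschitz bound are illustrative assumptions.

```python
import numpy as np

def smoothed_hinge(u, mu):
    # Nesterov-smoothed hinge of the margin residual u = 1 - y * <w, x>.
    # Quadratic prox-function smoothing yields a Huber-like loss with
    # (1/mu)-Lipschitz gradient (assumed form for illustration).
    return np.where(u <= 0, 0.0,
                    np.where(u >= mu, u - 0.5 * mu, u ** 2 / (2.0 * mu)))

def smoothed_hinge_grad(u, mu):
    # Derivative w.r.t. u; equals the optimal dual variable, clipped to [0, 1].
    return np.clip(u / mu, 0.0, 1.0)

def accelerated_smoothed_svm(X, y, lam=1e-2, mu=1e-2, iters=500):
    # Minimize  lam/2 ||w||^2 + (1/n) sum_i smoothed_hinge(1 - y_i <w, x_i>)
    # with Nesterov's accelerated gradient method (FISTA-style momentum).
    n, d = X.shape
    # Crude but safe Lipschitz bound on the gradient of the smooth objective.
    L = lam + (X ** 2).sum() / (mu * n)
    w = np.zeros(d)
    z = w.copy()
    t = 1.0
    for _ in range(iters):
        u = 1.0 - y * (X @ z)
        grad = lam * z - (X.T @ (smoothed_hinge_grad(u, mu) * y)) / n
        w_next = z - grad / L
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = w_next + ((t - 1.0) / t_next) * (w_next - w)
        w, t = w_next, t_next
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 20))
    y = np.sign(X @ rng.standard_normal(20))
    w = accelerated_smoothed_svm(X, y)
    obj = 0.5 * 1e-2 * w @ w + smoothed_hinge(1.0 - y * (X @ w), 1e-2).mean()
    print("smoothed objective:", obj, "train accuracy:", np.mean(np.sign(X @ w) == y))
```

Under these assumptions the smoothed objective has an $O(1/\mu)$ gradient Lipschitz constant, so running accelerated gradient descent with a suitably chosen $\mu$ gives the $O(1/\sqrt{\epsilon})$ iteration count mentioned in the abstract, rather than the $O(1/\epsilon)$ of plain subgradient-style methods.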