Efficient Algorithms for High Dimensional Robust Learning

Published May 8, 2019, 22:46
We study high-dimensional estimation in a setting where an adversary is allowed to arbitrarily corrupt an $\varepsilon$-fraction of the samples. Such questions have a rich history spanning statistics, machine learning and theoretical computer science. Even in the most basic settings, for over fifty years the only known approaches were either computationally inefficient or lose dimension-dependent factors in their error guarantees. This raises the following question: is high-dimensional robust estimation even possible, algorithmically?

In this talk, we present the first computationally efficient algorithms with dimension-independent error guarantees for (1) robustly estimating the mean and covariance of a Gaussian, (2) robustly estimating the mean of a distribution with bounded second moment, and (3) robust stochastic optimization. In many cases, the error of our algorithms scales nearly optimally with the fraction of adversarially corrupted samples. Moreover, we develop a general recipe for detecting and correcting corruptions in high dimensions that may be applicable to many other problems. As a specific example, we demonstrate the first known defense against real-world watermarking attacks on neural networks.
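The detect-and-correct recipe can be illustrated with a simplified spectral "filter" for robust mean estimation. This is a hedged sketch, not the exact algorithm from the talk: it assumes inliers have covariance roughly `sigma2 * I`, and the function name, threshold constant, and removal rule are illustrative choices. The key idea it demonstrates is real, though: for corruptions to shift the empirical mean noticeably, they must inflate the empirical variance along some direction, so one can repeatedly find the top eigenvector of the empirical covariance and discard points that project too far along it.

```python
import numpy as np

def filtered_mean(X, eps, sigma2=1.0, max_iter=100):
    """Simplified spectral filter for robust mean estimation (a sketch,
    not the authors' exact algorithm). Assumes inliers have covariance
    about sigma2 * I; eps is the assumed corruption fraction."""
    X = np.asarray(X, dtype=float)
    for _ in range(max_iter):
        mu = X.mean(axis=0)
        Y = X - mu
        cov = Y.T @ Y / len(X)
        # Top eigenpair of the empirical covariance (eigh sorts ascending).
        vals, vecs = np.linalg.eigh(cov)
        lam, v = vals[-1], vecs[:, -1]
        # If no direction has variance much above the inlier level,
        # corruptions cannot have moved the mean far: stop and return it.
        # The constant 10 here is an illustrative slack factor.
        if lam <= sigma2 * (1 + 10 * eps):
            return mu
        # Score each point by its squared projection on the suspicious
        # direction, and drop the most extreme ones (a deterministic
        # proxy for the randomized filtering step).
        scores = (Y @ v) ** 2
        thresh = np.quantile(scores, 1 - eps / 2)
        keep = scores < thresh
        if keep.all():
            return mu
        X = X[keep]
    return X.mean(axis=0)
```

On data where an eps-fraction of points is shifted far from the true mean, the raw empirical mean moves by an amount growing with the dimension, while the filtered estimate stays close to the truth, which is the dimension-independent behavior the abstract refers to.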

Based on joint works with Ilias Diakonikolas, Gautam Kamath, Daniel Kane, Aleksander Madry, Ankur Moitra, Jacob Steinhardt, Alistair Stewart, and Brandon Tran.

See more at microsoft.com/en-us/research/v...