Microsoft Research
Published November 21, 2019, 1:22
The reliability of machine learning systems in the presence of adversarial noise has become a major field of study in recent years. As ML is used in increasingly security-sensitive applications and trained on increasingly unreliable data, the ability of learning algorithms to tolerate worst-case noise has become more and more important. In this talk, I'll survey a number of recent results in this area, both theoretical and applied, covering advances in robust statistics, data poisoning, and adversarial examples for neural networks. The overarching goal is to give provably robust algorithms for these problems that still perform well in practice.
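The data-poisoning setting mentioned above can be illustrated with a toy example (not from the talk; a minimal sketch of why robust estimators matter): a single adversarially chosen sample can move the empirical mean arbitrarily far, while a robust statistic such as the median barely changes.

```python
import statistics

# Clean samples drawn around a true mean near 0 (toy data, for illustration)
clean = [0.1, -0.2, 0.0, 0.15, -0.05, 0.05, -0.1, 0.2]

# An adversary poisons the dataset with one extreme value
poisoned = clean + [100.0]

mean_poisoned = statistics.mean(poisoned)      # dragged far from 0 by a single outlier
median_poisoned = statistics.median(poisoned)  # essentially unaffected

print(mean_poisoned, median_poisoned)
```

The mean of the poisoned sample exceeds 11, while the median stays near 0; robust statistics aims for estimators with this kind of worst-case tolerance, together with provable guarantees.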
Talk slides: microsoft.com/en-us/research/u...
See more on this and other talks at Microsoft Research: microsoft.com/en-us/research/v...