Microsoft Research
Published December 7, 2022, 2:43
Research Talk
Jun Zhu, Tsinghua University
Although deep learning methods have made significant progress on many tasks, it is widely recognized that current methods are vulnerable to adversarial noise. This weakness poses serious risks to safety-critical applications. In this talk, I will present recent progress on adversarial attack and defense for deep learning, including theory, algorithms, and benchmarks.
Learn more about the Responsible AI Workshop: microsoft.com/en-us/research/e...
This workshop was part of the Microsoft Research Summit 2022: microsoft.com/en-us/research/e...
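For readers unfamiliar with adversarial noise, the sketch below is a minimal illustration (not code from the talk) of one of the simplest adversarial attacks, the Fast Gradient Sign Method (FGSM), in PyTorch. The model, data, and epsilon value are placeholders chosen only to make the example self-contained.

```python
# Illustrative sketch of FGSM: perturb the input in the direction of the
# loss gradient's sign to increase the classifier's loss. Not from the talk.
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Return an adversarially perturbed copy of x (values clipped to [0, 1])."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step by epsilon in the sign of the gradient to maximize the loss.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    # Toy model and random data purely for demonstration.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    x = torch.rand(1, 1, 28, 28)   # fake "image"
    y = torch.tensor([3])          # fake label
    x_adv = fgsm_attack(model, x, y)
    print((x_adv - x).abs().max())  # perturbation is bounded by epsilon
```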