Explaining Decisions from Vision Models and Correcting them via Human Feedback

Published May 26, 2020
Deep networks have enabled unprecedented breakthroughs in a variety of computer vision tasks. While these models achieve superior performance, their increasing complexity and lack of decomposability into individually intuitive components make them hard to interpret. Consequently, when today's intelligent systems fail, they fail spectacularly and disgracefully, giving no warning or explanation.

Towards the goal of making deep networks interpretable, trustworthy, and unbiased, in my talk I will present my work on building algorithms that provide explanations for decisions emanating from deep networks in order to:
• understand/interpret why the model did what it did,
• correct unwanted biases learned by AI models, and
• encourage human-like reasoning in AI.
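To make the first goal concrete, here is a minimal sketch of gradient-based input attribution, one common family of techniques for explaining a model's decision. The toy linear-softmax "model," its weights, and the input are all made up for illustration; this is not the specific method presented in the talk.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over class scores.
    e = np.exp(z - z.max())
    return e / e.sum()

def predict(W, x):
    # Toy "model": a single linear layer followed by softmax.
    return softmax(W @ x)

def attribution(W, x, cls):
    # Gradient of the predicted class probability w.r.t. the input,
    # via the softmax Jacobian: p_cls * (W_cls - sum_k p_k * W_k).
    p = predict(W, x)
    grad = p[cls] * (W[cls] - p @ W)
    # Gradient-times-input highlights which input features most
    # drive the predicted class score.
    return grad * x

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 5))   # 3 classes, 5 input features (illustrative)
x = rng.normal(size=5)
cls = int(np.argmax(predict(W, x)))
attr = attribution(W, x, cls)
print("predicted class:", cls)
print("feature attributions:", attr.round(3))
```

For a convolutional vision model, the same idea is typically applied to intermediate feature maps rather than raw pixels, yielding a heatmap over the image showing which regions the model attended to for its decision.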

See more at microsoft.com/en-us/research/v...