Microsoft Research
Published May 26, 2020, 20:15
Deep networks have enabled unprecedented breakthroughs in a variety of computer vision tasks. While these models achieve superior performance, their increasing complexity and lack of decomposability into individually intuitive components make them hard to interpret. Consequently, when today's intelligent systems fail, they fail spectacularly and disgracefully, giving no warning or explanation.
Toward the goal of making deep networks interpretable, trustworthy, and unbiased, in my talk I will present my work on building algorithms that provide explanations for decisions made by deep networks in order to —
• understand/interpret why the model did what it did,
• correct unwanted biases learned by AI models, and
• encourage human-like reasoning in AI.
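As a minimal illustration of the first goal, and not the speaker's actual method, explanation algorithms of this kind often attribute a model's decision to its input features, for example by scoring each feature with gradient-times-input. For a toy linear model the gradient is just the weight vector, which makes the idea easy to see:

```python
import numpy as np

def explain_linear(w, x):
    """Gradient-times-input attribution for a linear model f(x) = w @ x.

    For a linear model, df/dx_i = w_i, so each feature's score is w_i * x_i.
    This is an illustrative sketch, not the method presented in the talk.
    """
    grad = w          # exact gradient of f with respect to x
    return grad * x   # per-feature contribution to the decision

# Hypothetical toy model and input, chosen only for illustration.
w = np.array([2.0, -1.0, 0.0])
x = np.array([1.0, 3.0, 5.0])
scores = explain_linear(w, x)
print(scores)  # → [ 2. -3.  0.]
```

A useful sanity check on this toy case: the attributions sum exactly to the model's output `w @ x`, so the explanation fully accounts for the decision. For deep nonlinear networks, gradient-based methods only approximate this property, which is part of what makes interpretability research challenging.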
See more at microsoft.com/en-us/research/v...