Interpretability in NLP: Moving Beyond Vision

Published November 4, 2019, 10:50
Deep neural network models have been extremely successful for natural language processing (NLP) applications in recent years, but one complaint they often face is their lack of interpretability. Meanwhile, the field of computer vision has charted its own path toward improving the interpretability of deep learning models, most notably through post-hoc interpretation methods such as saliency. In this talk, we investigate the possibility of deploying these interpretation methods in natural language processing applications. Our study covers common NLP applications such as language modeling and neural machine translation, and we stress the necessity of quantitative evaluation of interpretations alongside qualitative evaluation. We show that this adaptation is generally feasible, while also pointing out some shortcomings of current practice that may shed light on future research directions.
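
To make the idea of adapting saliency to NLP concrete, here is a minimal sketch (not from the talk) of one common variant: scoring each input token by the norm of the loss gradient with respect to its embedding. The toy LSTM language model, its dimensions, and the random inputs below are hypothetical and untrained; the snippet only illustrates the mechanics in PyTorch.

```python
# Sketch: gradient-based saliency over input tokens for a toy language model.
# Everything here (TinyLM, sizes, random tokens) is illustrative, not the
# speaker's actual setup.
import torch
import torch.nn as nn

vocab_size, embed_dim, hidden_dim = 100, 32, 64

class TinyLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, emb):
        h, _ = self.lstm(emb)
        return self.out(h)

model = TinyLM()
tokens = torch.randint(0, vocab_size, (1, 8))   # [batch, seq]
emb = model.embed(tokens)
emb.retain_grad()                               # keep gradients on this non-leaf tensor

logits = model(emb)
# Next-token prediction loss: predict token t+1 from positions up to t.
loss = nn.functional.cross_entropy(
    logits[:, :-1].reshape(-1, vocab_size), tokens[:, 1:].reshape(-1)
)
loss.backward()

# Saliency per input position: L2 norm of the loss gradient w.r.t. its embedding.
saliency = emb.grad.norm(dim=-1).squeeze(0)
print(saliency)
```

The talk's point about quantitative evaluation applies on top of such scores: rather than only inspecting heatmaps, one would measure, for example, how model predictions degrade when the highest-scoring tokens are removed.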

Talk slides: microsoft.com/en-us/research/u...

See more on this video at Microsoft Research: microsoft.com/en-us/research/v...