Microsoft Research
Published November 4, 2019, 10:50
Deep neural network models have been extremely successful for natural language processing (NLP) applications in recent years, but a common complaint is their lack of interpretability. The field of computer vision, on the other hand, has found its own way of improving the interpretability of deep learning models, most notably through post-hoc interpretation methods such as saliency. In this talk, we investigate the possibility of deploying these interpretation methods in natural language processing applications. Our study covers common NLP applications such as language modeling and neural machine translation, and we stress the necessity of quantitative evaluations of interpretations in addition to qualitative ones. We show that this adaptation is generally feasible, while also pointing out shortcomings of current practice that may shed light on future research directions.
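As a rough illustration of the post-hoc saliency methods the talk refers to, the sketch below computes gradient-times-input saliency for a toy bag-of-embeddings sentiment scorer. Everything here (the vocabulary, the embedding matrix `E`, the weights `w`, and the linear-plus-sigmoid model) is a hypothetical stand-in, not the talk's actual setup; the point is only the mechanic: score a token sequence, take the gradient of the score with respect to each token's embedding, and use the gradient's dot product with the embedding as that token's saliency.

```python
import numpy as np

# Hypothetical toy model: mean of token embeddings -> linear -> sigmoid.
# Because the model is this simple, the gradient is available in closed form.
rng = np.random.default_rng(0)
vocab = {"great": 0, "movie": 1, "terrible": 2}   # illustrative vocabulary
E = rng.normal(size=(3, 4))                       # token embeddings, dim 4
w = rng.normal(size=4)                            # classifier weights

def score(token_ids):
    """Positive-sentiment probability: sigmoid(w . mean(embeddings))."""
    x = E[token_ids].mean(axis=0)
    return 1.0 / (1.0 + np.exp(-w @ x))

def saliency(token_ids):
    """Gradient-times-input saliency per token.

    Chain rule: d score / d e_i = s * (1 - s) * w / n, the same vector for
    every token under mean pooling; dotting it with each token's own
    embedding (gradient x input) gives a per-token attribution.
    """
    s = score(token_ids)
    n = len(token_ids)
    g = s * (1.0 - s) * w / n
    return [float(E[t] @ g) for t in token_ids]

print(saliency([vocab["great"], vocab["movie"]]))
```

In a real NLP model one would obtain the gradient with automatic differentiation rather than by hand, but the attribution step is the same; the talk's point is that such scores must then be evaluated quantitatively, not just eyeballed.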
Talk slides: microsoft.com/en-us/research/u...
See more on this video at Microsoft Research: microsoft.com/en-us/research/v...