Visually Grounded Language Understanding and Generation

Published November 4, 2019, 12:04
In this talk, I will present our latest work on comprehending and generating visually grounded language. First, we will discuss the challenging task of learning visually grounded language. I will introduce how to pretrain task-agnostic visiolinguistic representations for a variety of vision-and-language tasks. In the second part of the talk, I will describe our recent work on image captioning that produces natural language explicitly grounded in entities that object detectors find in the image. At the end of the talk, I will briefly discuss ongoing efforts on vision-and-language multi-task learning and on generating goal-driven visual dialog without dialog data.

Talk slides: microsoft.com/en-us/research/u...

See more on this candidate talk at Microsoft Research: microsoft.com/en-us/research/v...