Visually Grounded Language Understanding and Generation

Published November 4, 2019, 12:04
In this talk, I will present our latest work on comprehending and generating visually grounded language. First, we will discuss the challenging task of learning visually grounded language. I will introduce how to pretrain task-agnostic visiolinguistic representations for a variety of vision-and-language tasks. In the second part of the talk, I will describe our recent work on image captioning that produces natural language explicitly grounded in entities found by object detectors in the image. At the end of the talk, I will briefly discuss ongoing efforts on vision-and-language multi-task learning and on generating goal-driven visual dialog without dialog data.

Talk slides: microsoft.com/en-us/research/u...

See more on this candidate talk at Microsoft Research: microsoft.com/en-us/research/v...