Microsoft Research
Published November 20, 2019, 19:56
To understand the world around us, an Artificial Intelligence (AI) system needs to be able to interpret and reason about both the world we see and the language we speak. In recent years, research at the intersection of vision, temporal reasoning, and language has attracted considerable attention. One of the major challenges is how to ensure proper grounding and perform reasoning across multiple modalities given the heterogeneity of the data, especially when supervision is weak or absent. For example: (1) In Vision-and-Language Navigation, how can the navigation agent identify which part of the instruction has been completed or is ongoing and which part is needed for the next action selection, and how can it decide which direction to go by finding the part of the instruction that corresponds to the observed images? (2) In visual understanding, how can we efficiently leverage object-level features for downstream tasks such as action recognition and visual captioning, and how can we detect interactions and relationships with no supervision or only weak supervision from classification labels or ground-truth image/video descriptions? (3) In visual captioning, how can we ensure that generated sentences are properly grounded without ground-truth grounding annotations?
In this talk, our goal is to leverage spatial, temporal, and language inputs for both visual and textual understanding. I will show (1) how to equip a sequence-to-sequence model with self-monitoring in order to develop a visual-textual co-grounded navigation agent that can follow human commands and backtrack when necessary, (2) how to efficiently achieve object-level, fine-grained video understanding for both human action recognition and video captioning, and (3) how to encourage visual captioning models to generate grounded descriptions without ground-truth grounding annotations via a novel cyclical training regimen that adds no extra computation during inference.
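To make the third point more concrete, below is a minimal, illustrative sketch of what a "decode, localize, reconstruct" cyclical training step could look like. This is not the speaker's actual implementation: the module names (AttnDecoder, ToyLocalizer, cyclical_step), dimensions, losses, and the attention-bias mechanism are all assumptions for illustration. The one property it is meant to mirror from the abstract is that the localizer and the reconstruction pass are used only at training time, so inference runs the plain decoder with no extra computation.

```python
# Hedged sketch of a cyclical "decode -> localize -> reconstruct" training step
# for grounded captioning. All names, shapes, and losses are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttnDecoder(nn.Module):
    """Toy attention-over-regions caption decoder (illustrative only)."""
    def __init__(self, vocab=1000, rdim=64, hdim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, hdim)
        self.cell = nn.GRUCell(hdim + rdim, hdim)
        self.query = nn.Linear(hdim, rdim)
        self.out = nn.Linear(hdim, vocab)

    def forward(self, regions, captions, region_bias=None):
        # regions: (B, R, rdim) object-level features; captions: (B, T) token ids
        B, T = captions.shape
        h = regions.new_zeros(B, self.cell.hidden_size)
        logits, alphas = [], []
        for t in range(T):
            scores = (regions * self.query(h).unsqueeze(1)).sum(-1)  # (B, R)
            if region_bias is not None:        # reconstruction pass only:
                scores = scores + region_bias  # push attention toward localized regions
            a = scores.softmax(-1)
            ctx = (a.unsqueeze(-1) * regions).sum(1)
            h = self.cell(torch.cat([self.embed(captions[:, t]), ctx], -1), h)
            logits.append(self.out(h))
            alphas.append(a)
        return torch.stack(logits, 1), torch.stack(alphas, 1)

class ToyLocalizer(nn.Module):
    """Scores regions against caption words; used only during training (illustrative)."""
    def __init__(self, rdim=64, hdim=64):
        super().__init__()
        self.proj = nn.Linear(hdim, rdim)

    def forward(self, regions, word_emb):
        q = self.proj(word_emb.mean(1))              # (B, rdim) pooled word query
        return (regions * q.unsqueeze(1)).sum(-1)    # (B, R) per-region scores

def cyclical_step(decoder, localizer, regions, captions, opt):
    """One training step: decode, localize the caption words, reconstruct."""
    opt.zero_grad()
    # 1) Decoding pass: ordinary captioning cross-entropy.
    logits, _ = decoder(regions, captions[:, :-1])
    ce = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                         captions[:, 1:].reshape(-1))
    # 2) Localization: map caption words to region scores.
    word_emb = decoder.embed(captions[:, 1:])
    region_scores = localizer(regions, word_emb)
    # 3) Reconstruction pass: same decoder, attention biased by the localizer,
    #    so grounding is encouraged without any grounding annotations.
    rec_logits, _ = decoder(regions, captions[:, :-1], region_bias=region_scores)
    rec_ce = F.cross_entropy(rec_logits.reshape(-1, rec_logits.size(-1)),
                             captions[:, 1:].reshape(-1))
    loss = ce + rec_ce
    loss.backward()
    opt.step()
    return loss.item()

if __name__ == "__main__":
    dec, loc = AttnDecoder(), ToyLocalizer()
    opt = torch.optim.Adam(list(dec.parameters()) + list(loc.parameters()), lr=1e-3)
    regions = torch.randn(2, 5, 64)             # 2 samples, 5 region features each
    captions = torch.randint(0, 1000, (2, 8))   # toy token ids
    print(cyclical_step(dec, loc, regions, captions, opt))
```

At test time only AttnDecoder.forward with region_bias=None would be called, which is one way to read the abstract's claim that the cyclical regimen adds no extra computation during inference.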
Talk slides: microsoft.com/en-us/research/u...
See more on this and other talks at Microsoft Research: microsoft.com/en-us/research/v...