Research talk: WebQA: Multihop and multimodal

Published February 8, 2022, 16:32
Speaker: Yonatan Bisk, Assistant Professor, Carnegie Mellon University

Web search is fundamentally multimodal and multihop. Often, even before asking a question, individuals go directly to image search to find answers. Further, we rarely find an answer in a single source, opting instead to aggregate information across sources and reason through its implications. Despite the frequency of this everyday experience, there is currently no unified question-answering benchmark that requires a single model to answer long-form natural language questions from both text and open-ended visual sources, as humans do. The researchers propose to bridge this gap between the natural language and computer vision communities with WebQA. They show that multihop text queries are difficult for a large-scale transformer model, and that existing multimodal transformers and visual representations do not perform well on open-domain visual queries. Their challenge to the community is to create a unified multimodal reasoning model that seamlessly transitions and reasons regardless of the source modality.
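To make the task format concrete, below is a minimal Python sketch of what one multihop, multimodal QA instance could look like: a long-form question whose answer requires aggregating an image source and a text source. The record layout, field names, and sample sources are hypothetical illustrations, not the actual WebQA release schema.

from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical record layout for a multihop, multimodal QA instance.
# Field names are illustrative only; this is not the official WebQA schema.

@dataclass
class Source:
    source_id: str
    modality: str                   # "text" or "image"
    content: str                    # snippet text, or a path/URL for an image
    caption: Optional[str] = None   # captions often accompany image sources

@dataclass
class WebQAExample:
    question: str
    answer: str                          # long-form natural language answer
    positive_sources: List[Source]       # sources that must be aggregated
    distractor_sources: List[Source] = field(default_factory=list)

example = WebQAExample(
    question="Are the Eiffel Tower and the Statue of Liberty made of the same material?",
    answer="No; the Eiffel Tower is wrought iron, while the Statue of Liberty's exterior skin is copper.",
    positive_sources=[
        Source("img_001", "image", "eiffel_tower.jpg", caption="The Eiffel Tower at dusk"),
        Source("txt_001", "text", "The Statue of Liberty's exterior skin is made of copper sheets."),
    ],
)

# Answering requires two hops across two modalities: identify the tower's
# material from the image source, the statue's from the text source, then compare.
print(example.question)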

Learn more about the 2021 Microsoft Research Summit: Aka.ms/researchsummit