Vision-and-Dialog Navigation

Published October 2, 2019, 16:36
Dialog-enabled smart assistants, which communicate via natural language and occupy human homes, have seen widespread adoption in recent years. These systems can communicate information, but do not manipulate objects or move themselves. By contrast, manipulation-capable and mobile robots are still largely deployed in industrial settings, but do not interact with human users. Dialog-enabled robots can bridge this gap, with natural language interfaces helping robots and non-experts collaborate to achieve their goals. In particular, navigation in unseen or dynamic environments to high-level goals (e.g., "Go to the room with a plant") can be facilitated by enabling navigation agents to ask questions in language, and to react to human clarifications on-the-fly. To study this challenge, we introduce Cooperative Vision-and-Dialog Navigation, an English language dataset situated in the Matterport Room-2-Room simulation environment.
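
To make the interaction pattern concrete, below is a minimal, hypothetical sketch of a dialog-navigation episode: a navigator moves toward a high-level goal, asks clarification questions along the way, and records the human's answers. This is illustrative only; the class and function names are assumptions and are not the CVDN dataset schema or API.

```python
# Illustrative sketch only: a toy dialog-navigation loop.
# All names here (DialogTurn, Episode, run_episode) are hypothetical.

from dataclasses import dataclass, field
from typing import List


@dataclass
class DialogTurn:
    speaker: str   # "navigator" or "oracle"
    utterance: str


@dataclass
class Episode:
    goal: str                                       # e.g. "Go to the room with a plant"
    dialog: List[DialogTurn] = field(default_factory=list)
    path: List[str] = field(default_factory=list)   # sequence of visited viewpoint ids


def run_episode(goal: str, max_steps: int = 5) -> Episode:
    """Toy loop: the navigator moves, asks when unsure,
    and the oracle (human) replies with a clarification."""
    episode = Episode(goal=goal)
    for step in range(max_steps):
        # Placeholder policy: pretend to move to a new viewpoint.
        episode.path.append(f"viewpoint_{step}")
        # Ask a clarification question every other step.
        if step % 2 == 0:
            episode.dialog.append(DialogTurn("navigator", "Should I go left or right?"))
            episode.dialog.append(DialogTurn("oracle", "Turn left toward the hallway."))
    return episode


if __name__ == "__main__":
    ep = run_episode("Go to the room with a plant")
    for turn in ep.dialog:
        print(f"{turn.speaker}: {turn.utterance}")
```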

See more at microsoft.com/en-us/research/v...