Multimodal Learning from Bespoke Data

Published June 22, 2016, 1:42
Big Data and Deep Learning have rightly received a lot of attention, since their combination has led to breakthroughs in performance on some very hard problems. But there are many learning problems for which people don't need even a hundred examples, much less a million – learning the rules of tic-tac-toe, for example, or teaching a new assistant how to fill out a form. Understanding how to learn rapidly from a small number of examples is crucial both for understanding human cognition and for creating software assistants that can function as collaborators rather than tools. This talk will describe how we have been using our Companion cognitive architecture to learn from small amounts of interaction, via natural language and sketching, about playing games and identifying sketched concepts. We rely heavily on analogical processing and qualitative representations, which enable our systems to learn from only around 10 examples – for some problems, several orders of magnitude fewer than traditional machine learning techniques require.