[SAIF 2020] Day 1: Towards Discovering Causal Representations - Yoshua Bengio | Samsung

Published November 11, 2020
Up to now, deep learning has focused on learning representations which are useful in many applications but differ from the kind of high-level representations humans communicate through natural language, representations that capture semantic (verbalizable) variables and their causal dependencies. Capturing causal structure is important for many reasons: (a) it allows an agent to make appropriate decisions (interventions) by having a good causal model of their effects, (b) it can lead to robustness with respect to changes in distribution (a major current limitation of state-of-the-art machine learning), and (c) it makes it easier to understand natural language (which refers to such causal concepts, the semantic variables named with words) and thus to interact more meaningfully with humans. Whereas causality research has focused on inference (e.g., how strong is the causal effect of A on B?) and, to a lesser extent, on causal discovery (is A a direct cause of B?), an important open question to which deep learning researchers can contribute is that of discovering causal representations, i.e., transformations from low-level sensory data to high-level representations of causal variables, where the high-level variables are not always labeled by humans. This must be done at the same time as learning the structure of the causal graph which links these variables, since both are generally unknown. This talk will report on early efforts towards these objectives, as part of a larger research programme aimed at expanding deep learning from system 1 (unconscious) processing to system 2 (conscious-level) processing of semantic variables.
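To make point (b) concrete, here is a minimal toy sketch (not from the talk; the variables, numbers, and helper functions are illustrative assumptions) in Python/NumPy. In a two-variable system where A causes B, the causal conditional P(B|A) stays invariant under an intervention that changes the marginal of A, while the anti-causal conditional P(A|B) shifts, so a model factored along the true causal direction transfers with little or no adaptation:

```python
# Toy illustration (assumption, not the talk's method): robustness of the
# causal factorization A -> B under an intervention on A.
import numpy as np

rng = np.random.default_rng(0)

def sample(n, p_a):
    """Toy SCM: A ~ Bernoulli(p_a), B = A flipped with probability 0.1."""
    a = rng.random(n) < p_a
    b = a ^ (rng.random(n) < 0.1)
    return a.astype(int), b.astype(int)

def fit_conditionals(a, b):
    """Estimate P(B=1|A=a) and P(A=1|B=b) from samples."""
    p_b_given_a = np.array([b[a == v].mean() for v in (0, 1)])
    p_a_given_b = np.array([a[b == v].mean() for v in (0, 1)])
    return p_b_given_a, p_a_given_b

# Training distribution, then an intervention that changes the marginal of A.
a_tr, b_tr = sample(100_000, p_a=0.2)
a_iv, b_iv = sample(100_000, p_a=0.9)   # intervention: set P(A=1) to 0.9

bga_tr, agb_tr = fit_conditionals(a_tr, b_tr)
bga_iv, agb_iv = fit_conditionals(a_iv, b_iv)

# The causal mechanism P(B|A) is (approximately) invariant across the shift;
# the anti-causal conditional P(A|B) changes, so a model using it must relearn.
print("shift in P(B|A):", np.abs(bga_tr - bga_iv).max())
print("shift in P(A|B):", np.abs(agb_tr - agb_iv).max())
```

Running this prints a near-zero shift for P(B|A) and a large shift for P(A|B); this asymmetry in how much adaptation is needed after an intervention is one kind of signal that can be used to discover which causal direction (and which representation) is the right one.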

#SAIF #SamsungAIForum

For more info, visit our page:
#SAIT (Samsung Advanced Institute of Technology): smsng.co/sait