Microsoft Research
Published 27 June 2022, 17:44
GFlowNets are instances of a larger family of approaches at the intersection of generative modeling and RL that can be used to train probabilistic inference functions in a way that is related to variational inference and opens many new doors, especially for brain-inspired AI. Instead of maximizing some objective (like expected return), these approaches seek to sample latent random variables from a distribution defined by an energy function, for example a posterior distribution (given past data, current observations, etc.). Recent work showed how GFlowNets can be used to sample a diversity of solutions in an active learning context. We will also discuss ongoing work exploring how to train such inference machinery to learn energy-based models, approximately marginalize over infinitely many variables, perform efficient Bayesian posterior inference, and incorporate inductive biases associated with conscious processing and reasoning in humans. These inductive biases include modular knowledge representation favoring systematic generalization; the causal nature of human thoughts, concepts, explanations, and plans; and the sparsity of dependencies captured by reusable relational or causal knowledge. Many open questions remain to develop these ideas, which will require many collaborating minds!
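To make the sampling idea concrete, here is a minimal sketch of a GFlowNet trained with the trajectory-balance objective (Malkin et al., 2022) on a toy problem. This is only an illustration of the general approach described above, not the specific models discussed in the talk; the toy energy function, network architecture, and hyperparameters are all invented for the example.

```python
# Minimal GFlowNet sketch (illustrative assumptions, not the talk's exact setup):
# states are binary strings built left to right, the reward is exp(-energy(x)),
# and the forward policy is trained with the trajectory-balance objective so that
# complete strings end up sampled roughly in proportion to exp(-energy(x)).
import torch
import torch.nn as nn

L = 6  # length of the binary strings we construct

def energy(x):
    # Toy energy: lower when the string has more 1s (purely illustrative).
    return -x.sum(dim=-1).float()

class ForwardPolicy(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(L, hidden), nn.ReLU(), nn.Linear(hidden, 2)
        )

    def forward(self, state):
        # state: the current prefix encoded as a length-L vector in {-1, 0, 1},
        # where 0 marks positions not yet chosen.
        return torch.log_softmax(self.net(state), dim=-1)

policy = ForwardPolicy()
log_Z = nn.Parameter(torch.zeros(()))  # learnable log partition function
opt = torch.optim.Adam(list(policy.parameters()) + [log_Z], lr=1e-3)

for step in range(2000):
    # Sample one trajectory: append one bit at a time.
    state = torch.zeros(L)
    log_pf = torch.zeros(())
    bits = []
    for t in range(L):
        logits = policy(state)
        a = torch.distributions.Categorical(logits=logits).sample()
        log_pf = log_pf + logits[a]          # accumulate log P_F of the trajectory
        bits.append(a)
        state = state.clone()
        state[t] = 1.0 if a == 1 else -1.0
    x = torch.stack(bits)
    log_reward = -energy(x)                  # R(x) = exp(-energy(x))

    # Trajectory balance: (log Z + sum_t log P_F - log R(x))^2.
    # The backward policy term drops out here because each prefix has a unique parent.
    loss = (log_Z + log_pf - log_reward) ** 2
    opt.zero_grad()
    loss.backward()
    opt.step()
```

After training, sampling trajectories from the learned forward policy yields complete strings with frequencies approximately proportional to exp(-energy(x)): the policy samples from the energy-defined distribution rather than collapsing onto a single maximizer, which is what enables the diversity of solutions mentioned above.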
Slides and video details: microsoft.com/en-us/research/v...
MSR-IISc AI Seminar Series: microsoft.com/en-us/research/e...