Microsoft Research
Published May 19, 2017, 3:52
An important problem in achieving general artificial intelligence is the data-efficient learning of representations suitable for causal reasoning, planning, and decision making. Learning such representations from unsupervised data is challenging and requires flexible models to discover the underlying manifold of high-dimensional data. Generative adversarial networks (GANs) are one such flexible family of distributions that has shown promise in unsupervised learning and supervised regression tasks. We show that the learning objectives of GANs are variational bounds on a divergence between two distributions, allowing us to extend the GAN objective to general f-divergences, including the Kullback-Leibler divergence. We call this more general principle variational divergence minimization. The generalization of GANs to f-divergences also allows us to treat GANs as a building block in standard machine learning problems. We demonstrate this by extending the variational Bayes inference procedure to the adversarial case, allowing us to use likelihood-free variational families and obtain more accurate posterior inferences. GANs are therefore promising both as a building block in larger systems and for solving the unsupervised learning problem.
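As a rough sketch of the bound referred to above, in the notation standard in the f-GAN literature (here f* denotes the convex conjugate of f and T a discriminator-like test function; these symbols are our labels, not taken from the video):

    D_f(P \,\|\, Q) \;\ge\; \sup_{T} \; \mathbb{E}_{x \sim P}\big[T(x)\big] \;-\; \mathbb{E}_{x \sim Q}\big[f^{*}(T(x))\big]

Choosing f(u) = u log u recovers the Kullback-Leibler divergence, while the original GAN objective corresponds to a particular f whose induced divergence is related to the Jensen-Shannon divergence. Training tightens the bound with respect to T while the generator, which defines Q, minimizes it.

A minimal sketch of this objective (the use of PyTorch and the function names are our assumptions, purely illustrative):

    import torch

    def f_gan_bound(T_real, T_fake, f_star):
        # Monte Carlo estimate of the variational lower bound
        #   E_{x~P}[T(x)] - E_{x~Q}[f*(T(x))]
        # T_real: test-function outputs on samples from the data distribution P
        # T_fake: test-function outputs on samples from the generator distribution Q
        return T_real.mean() - f_star(T_fake).mean()

    # For the KL divergence, f(u) = u * log(u), whose conjugate is f*(t) = exp(t - 1).
    kl_conjugate = lambda t: torch.exp(t - 1)

The test function ascends this bound and the generator descends it, which for the appropriate choice of f reduces to the familiar GAN minimax game.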
See more on this video at microsoft.com/en-us/research/v...