Nonlinear ICA using temporal structure: a principled framework for unsupervised deep learning

Published May 15, 2017, 23:20
Unsupervised learning, in particular learning general nonlinear representations, is one of the deepest problems in machine learning. Estimating latent quantities in a generative model provides a principled framework, and has been used successfully in the linear case, e.g. with independent component analysis (ICA) and sparse coding. However, extending ICA to the nonlinear case has proven extremely difficult: a straightforward extension is unidentifiable, i.e. it is not possible to recover the latent components that actually generated the data. Here, we show that this problem can be solved by using temporal structure. We formulate two generative models in which the data is an arbitrary but invertible nonlinear transformation of time series (components) which are statistically independent of each other. Drawing from the theory of linear ICA, we formulate two distinct classes of temporal structure of the components that enable identification, i.e. recovery of the original independent components. We show that in both cases, the actual learning can be performed by ordinary neural network training where only the input is defined in an unconventional manner, making software implementations trivial. We can rigorously prove that after such training, the units in the last hidden layer will give the original independent components. [With Hiroshi Morioka, published at NIPS 2016 and AISTATS 2017.]
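The "unconventional input" refers to how these methods turn an unsupervised problem into ordinary supervised training: the NIPS 2016 variant (time-contrastive learning, TCL) exploits nonstationarity by training a classifier to predict which temporal segment each observation came from, while the AISTATS 2017 variant (permutation-contrastive learning) exploits temporal dependencies. Below is a minimal sketch of the TCL idea only; the network sizes, segment length, and synthetic nonstationary data are illustrative assumptions, not the authors' exact setup.

```python
# Sketch of time-contrastive learning (TCL): segment a nonstationary
# time series, train an ordinary classifier to predict the segment
# index, and read the estimated components off the last hidden layer.
import numpy as np
import torch
import torch.nn as nn

n_comp, seg_len, n_seg = 4, 200, 32          # assumed dimensions
rng = np.random.default_rng(0)

# Toy nonstationary sources: variance changes from segment to segment.
scales = rng.uniform(0.3, 3.0, size=(n_seg, n_comp))
s = np.concatenate([sc * rng.standard_normal((seg_len, n_comp)) for sc in scales])

# Arbitrary invertible nonlinear mixing, a stand-in for the unknown f.
A1 = rng.standard_normal((n_comp, n_comp))
A2 = rng.standard_normal((n_comp, n_comp))
x = np.tanh(s @ A1) @ A2

# The "unconventional input": each observation is labeled by its segment index.
labels = np.repeat(np.arange(n_seg), seg_len)

feature = nn.Sequential(nn.Linear(n_comp, 64), nn.ReLU(),
                        nn.Linear(64, 64), nn.ReLU(),
                        nn.Linear(64, n_comp))   # last hidden layer -> components
head = nn.Linear(n_comp, n_seg)                  # classification head over segments

opt = torch.optim.Adam(list(feature.parameters()) + list(head.parameters()), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
X = torch.tensor(x, dtype=torch.float32)
y = torch.tensor(labels)

for step in range(2000):                         # ordinary supervised training
    opt.zero_grad()
    loss = loss_fn(head(feature(X)), y)
    loss.backward()
    opt.step()

components = feature(X).detach().numpy()         # estimates of the sources
```

In the full method, the theory guarantees the hidden representation equals the (component-wise transformed) sources only up to a linear transformation, so a final linear ICA step is applied to `components` to finish the separation; that step is omitted from this sketch.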

See more on this video at microsoft.com/en-us/research/v...