Unified Dimensionality Reduction: Formulation, Solution and Beyond

Published September 6, 2016, 16:28
In this talk, I will address the feature dimensionality reduction problem within a unified framework from three aspects:

1) Graph Embedding and Extensions: a unified framework for general dimensionality reduction. In the past decades, a large family of algorithms, supervised or unsupervised, stemming from statistics or from geometry, has been designed to provide different solutions to the problem of dimensionality reduction. Despite the different motivations of these algorithms, I present a general formulation known as graph embedding that unifies them in a common framework. Under graph embedding, each algorithm can be considered as the direct graph embedding, or a linear/kernel/tensor extension thereof, of a specific intrinsic graph characterizing a desired statistical or geometric property of a data set. Furthermore, the graph embedding framework can serve as a general platform for developing new dimensionality reduction algorithms, which is validated with an example algorithm called Marginal Fisher Analysis (MFA).

2) Trace Ratio: a unified solution for general dimensionality reduction. A large family of dimensionality reduction algorithms ends with solving a Trace Ratio problem of the form $\arg \max_{W} \mathrm{Tr}(W^T S_p W) / \mathrm{Tr}(W^T S_l W)$, which is generally transformed into the corresponding Ratio Trace form $\arg \max_{W} \mathrm{Tr}[\,(W^T S_l W)^{-1}(W^T S_p W)\,]$ to obtain a closed-form but inexact solution. I propose an efficient iterative procedure to solve the Trace Ratio problem directly. In each step, a Trace Difference problem $\arg \max_{W} \mathrm{Tr}[W^T (S_p - \lambda S_l) W]$ is solved, with $\lambda$ being the trace ratio value computed from the previous step. Convergence of the projection matrix W, as well as global optimality of the trace ratio value $\lambda$, is proven based on point-to-set map theory.
3) Element Rearrangement for Promoting Tensor Subspace Learning. I will introduce an algorithm that promotes tensor-based subspace learning by rearranging the positions of elements within a tensor. Monotonic convergence of the algorithm is proven using an auxiliary function analogous to the one used to prove convergence of the Expectation-Maximization algorithm.
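The iterative Trace Ratio procedure described in point 2) can be sketched as follows. This is a minimal NumPy sketch, assuming $S_l$ is positive definite so the denominator trace is nonzero; the function and variable names are illustrative, not from the talk. Each iteration solves the Trace Difference problem by taking the top eigenvectors of $S_p - \lambda S_l$, then updates $\lambda$ to the new trace ratio value.

```python
import numpy as np

def trace_ratio(Sp, Sl, d, n_iter=100, tol=1e-10):
    """Iteratively maximize Tr(W^T Sp W) / Tr(W^T Sl W) over
    orthonormal n-by-d projection matrices W (illustrative sketch)."""
    lam = 0.0
    W = None
    for _ in range(n_iter):
        # Trace Difference step: maximize Tr[W^T (Sp - lam*Sl) W].
        # The maximizer is the d eigenvectors of (Sp - lam*Sl)
        # with the largest eigenvalues.
        _, vecs = np.linalg.eigh(Sp - lam * Sl)   # eigenvalues ascending
        W = vecs[:, -d:]
        # Update lam to the trace ratio value achieved by the new W.
        new_lam = np.trace(W.T @ Sp @ W) / np.trace(W.T @ Sl @ W)
        if abs(new_lam - lam) < tol:
            break
        lam = new_lam
    return W, lam
```

For example, with $S_p = \mathrm{diag}(4, 1, 1)$, $S_l = I$, and $d = 1$, the procedure converges to $\lambda = 4$, the largest generalized eigenvalue, as expected; the monotone increase of $\lambda$ across iterations is what the point-to-set map analysis formalizes.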