Inference and Learning with Random Maximum A-Posteriori Perturbations

Published on July 28, 2016, 1:55
Learning and inference in complex models drive much of the research in machine learning applications, from computer vision and natural language processing to computational biology. The inference problem in such settings involves assessing the weights of possible structures, whether objects, parses, or molecular structures. Although it is often only feasible to find the most likely or maximum a-posteriori (MAP) assignment rather than to consider all possible assignments, MAP inference is limited when other assignments are also likely. A fully probabilistic treatment considers all possible alternative assignments, which requires summing over the assignments with their respective weights and is considerably harder (#P-hard vs. NP-hard). The main surprising result of our work is that MAP inference (maximization) can be used to approximate and bound this weighted counting. This leads to a new approximate inference framework based on MAP statistics, which therefore does not depend on pseudo-probabilities, in contrast to the current framework of Bethe approximations, which lacks statistical meaning. The approach excels in regimes where there are several, but not exponentially many, prominent assignments. This happens, for example, when observations carry strong signals (local evidence) but are also guided by strong consistency constraints (couplings).
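To make the connection between maximization and weighted counting concrete, here is a minimal sketch of the underlying identity (the Gumbel-max trick behind random MAP perturbations): when each assignment's score is perturbed by independent zero-mean Gumbel noise, the expected value of the perturbed MAP equals the log partition function. The toy model, the names theta and estimate_log_z, and the use of a fully enumerable score vector are illustrative assumptions, not the talk's actual experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unnormalized log-weights theta(x) for a toy model with a handful of assignments.
theta = np.array([2.0, 1.0, 0.5, -1.0, 0.0])

# Exact weighted counting: log Z = log sum_x exp(theta(x)) (intractable, #P-hard, in general).
log_z_exact = np.log(np.sum(np.exp(theta)))

def estimate_log_z(theta, num_samples=100_000):
    """Estimate log Z via repeated MAP (maximization) calls on randomly perturbed scores.

    With gamma(x) i.i.d. zero-mean Gumbel noise,
    E_gamma[ max_x (theta(x) + gamma(x)) ] = log sum_x exp(theta(x)).
    """
    # NumPy's Gumbel(loc=0) has mean equal to the Euler-Mascheroni constant,
    # so shift by -np.euler_gamma to obtain zero-mean perturbations.
    gumbel = rng.gumbel(loc=-np.euler_gamma, scale=1.0,
                        size=(num_samples, theta.size))
    return np.mean(np.max(theta + gumbel, axis=1))

print("exact log Z:            ", log_z_exact)
print("perturb-and-MAP estimate:", estimate_log_z(theta))
```

In this toy setting the sum over assignments is enumerated directly only to verify the estimate; the point of the framework is that in structured models the inner maximization can be handled by an efficient MAP solver even when the summation itself is intractable.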