Oral Session: Sampling from Probabilistic Submodular Models

Published June 22, 2016, 19:41
Submodular and supermodular functions have found wide applicability in machine learning, capturing notions such as diversity and regularity, respectively. These notions have deep consequences for optimization, and the problem of (approximately) optimizing submodular functions has received much attention. However, beyond optimization, these notions allow specifying expressive probabilistic models that can be used to quantify predictive uncertainty via marginal inference. Prominent, well-studied special cases include Ising models and determinantal point processes, but the general class of log-submodular and log-supermodular models is much richer and little studied. In this paper, we investigate the use of Markov chain Monte Carlo sampling to perform approximate inference in general log-submodular and log-supermodular models. In particular, we consider a simple Gibbs sampling procedure, and establish two sufficient conditions, the first guaranteeing polynomial-time mixing and the second fast, O(n log n), mixing. We also evaluate the efficiency of the Gibbs sampler on three examples of such models, and compare against a recently proposed variational approach.
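To make the sampling procedure concrete: the abstract describes a single-site Gibbs sampler over subsets S of a ground set of size n, for a model p(S) ∝ exp(F(S)) with F submodular (log-submodular) or supermodular (log-supermodular). Below is a minimal, illustrative Python sketch of such a sampler, not the authors' implementation; the names `gibbs_sample` and the toy coverage function `F` are hypothetical, chosen only to show the mechanics.

```python
import math
import random

def gibbs_sample(F, n, num_steps, seed=0):
    """Single-site Gibbs sampler for p(S) proportional to exp(F(S)),
    where S ranges over subsets of the ground set {0, ..., n-1} and
    F is the log-potential set function (submodular for a
    log-submodular model, supermodular for a log-supermodular one)."""
    rng = random.Random(seed)
    # Start from a uniformly random subset.
    S = {i for i in range(n) if rng.random() < 0.5}
    for _ in range(num_steps):
        i = rng.randrange(n)  # pick a coordinate uniformly at random
        S_in, S_out = S | {i}, S - {i}
        # Conditional probability that i is included, given the rest of S:
        #   p_in = exp(F(S_in)) / (exp(F(S_in)) + exp(F(S_out)))
        #        = sigmoid(F(S_in) - F(S_out)),
        # which depends only on the marginal gain of adding i.
        delta = F(S_in) - F(S_out)
        p_in = 1.0 / (1.0 + math.exp(-delta))
        S = S_in if rng.random() < p_in else S_out
    return S

# Toy log-submodular example (hypothetical): F is a coverage function,
# which is submodular, so p(S) favors subsets covering diverse items.
covers = {0: {"a", "b"}, 1: {"b", "c"}, 2: {"c", "d"}}
F = lambda S: float(len(set().union(*(covers[i] for i in S))))
print(gibbs_sample(F, n=3, num_steps=1000))
```

Each step resamples one element's membership from its exact conditional, so the chain leaves p(S) invariant; the paper's results concern how many such steps are needed for the chain to mix.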