Learning, Mixing, and Complexity: A Free Ride on the Second Law

Published June 22, 2016, 3:04
The principle of maximum entropy states that, among all hypothetical probability distributions that agree with the given data, the one of maximum entropy best represents the current state of knowledge. One might naturally expect this philosophy to yield a "simple" hypothesis, since it avoids making the hypothesis more informative than the data warrant. Viewing entropy as an additional resource to be optimized is an extremely powerful idea with a wide range of applications (and a correspondingly large array of names: boosting, entropy-regularized gradient descent, multiplicative weights update, log-Sobolev inequalities, Gibbs measures, etc.). I will focus specifically on the role of entropy maximization in encouraging simplicity, which has a number of surprising applications in discrete mathematics and the theory of computation. We'll see three instantiations of this principle: in additive number theory, functional analysis, and complexity theory. For the last application, it turns out that one needs to extend max-entropy to the setting of quantum information and von Neumann entropy. The philosophy and applications will be discussed at a high level suitable for a general scientific audience.
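
To make the principle concrete, here is one standard formulation over a finite domain X (a sketch for orientation, not taken from the talk itself; the constraint functions f_j and targets c_j are placeholders for however "agreeing with the data" is formalized in a given application):

\begin{align*}
  \max_{p} \quad & H(p) = -\sum_{x \in X} p(x) \log p(x) \\
  \text{subject to} \quad & \sum_{x \in X} p(x)\, f_j(x) = c_j, \qquad j = 1, \dots, m, \\
  & \sum_{x \in X} p(x) = 1, \qquad p(x) \ge 0.
\end{align*}

Lagrangian duality gives the maximizer the Gibbs (exponential-family) form

\[
  p^*(x) = \frac{1}{Z(\lambda)} \exp\Big(\sum_{j=1}^{m} \lambda_j f_j(x)\Big),
  \qquad
  Z(\lambda) = \sum_{x \in X} \exp\Big(\sum_{j=1}^{m} \lambda_j f_j(x)\Big),
\]

so the hypothesis is "simple" in the sense that it is pinned down by just the m multipliers \lambda_1, \dots, \lambda_m. In the quantum setting alluded to above, the Shannon entropy H(p) is replaced by the von Neumann entropy S(\rho) = -\mathrm{Tr}(\rho \log \rho) of a density matrix \rho, and the maximizer under linear constraints is again a Gibbs state.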