Priors for Deep Networks: Limit theorems, pitfalls, open questions

Published April 20, 2018, 23:15
Much research in Bayesian deep learning focuses on approximating the posterior; with some notable exceptions, the choice of prior receives far less attention. Along this second direction, we discuss recent work on central limit theorems for neural networks with more than one hidden layer, some thoughts on over-confident extrapolation, and the dangers of improper priors reported in the literature.
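The central limit theorems mentioned above generalize a classical observation, which a small numerical sketch can illustrate (this example is mine, not from the talk): with i.i.d. random weights scaled by 1/sqrt(width), the output of even a one-hidden-layer network at a fixed input becomes increasingly Gaussian as the layer widens.

```python
# Illustration (assumed setup, not from the talk): CLT behaviour of a wide
# one-hidden-layer network f(x) = (1/sqrt(width)) * v . tanh(W x)
# with i.i.d. standard normal weights W, v.
import numpy as np

rng = np.random.default_rng(0)

def random_net_output(width, x, n_draws=20000):
    """Sample f(x) over n_draws independent draws of the weights."""
    W = rng.normal(size=(n_draws, width, x.size))  # input-to-hidden weights
    v = rng.normal(size=(n_draws, width))          # hidden-to-output weights
    h = np.tanh(W @ x)                             # hidden activations
    return (v * h).sum(axis=1) / np.sqrt(width)

x = np.array([1.0, -0.5])
for width in (1, 10, 1000):
    f = random_net_output(width, x)
    # Excess kurtosis shrinks toward 0 (the Gaussian value) as width grows.
    kurt = ((f - f.mean()) ** 4).mean() / f.var() ** 2 - 3.0
    print(f"width={width:5d}  excess kurtosis={kurt:+.3f}")
```

For a single hidden layer this Gaussian limit is the well-known neural-network/Gaussian-process correspondence; the work discussed in the talk concerns what happens with more than one hidden layer.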

See more at microsoft.com/en-us/research/v...