PAC-Bayesian Machine Learning: Learning by Optimizing a Performance Guarantee

Published August 17, 2016, 3:35
The goal of machine learning algorithms is to produce predictors having the smallest possible risk (expected loss). Since the quantity to optimize (the risk) is defined only with respect to the data-generating distribution, and not with respect to the data itself, we still do not know exactly what should be optimized on the training data in order to produce a predictor having the smallest possible risk. A natural learning strategy, however, is to optimize a good guarantee on the risk, provided that such a guarantee can be computed efficiently from the available data. PAC-Bayes theory has recently emerged as a good framework for deriving such guarantees in the form of so-called risk bounds, which can be computed on the training data. In this talk, I will present several successes that we have obtained recently using this approach---which is to first derive a risk bound and then design a learning algorithm that finds a predictor having a minimal risk bound (and, consequently, the best performance guarantee).
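To make the "compute a risk bound, then minimize it" strategy concrete, here is a minimal sketch (not taken from the talk) based on the classical McAllester-style PAC-Bayes bound: with probability at least 1 − δ over a sample of size n, the risk of the Gibbs predictor drawn from posterior Q satisfies R(Q) ≤ r(Q) + sqrt((KL(Q‖P) + ln(2√n/δ)) / (2n)), where r(Q) is the empirical risk and P is the prior. The function names and the candidate values below are hypothetical, for illustration only.

```python
import math

def pac_bayes_bound(empirical_risk, kl_qp, n, delta=0.05):
    """McAllester-style PAC-Bayes risk bound.

    empirical_risk: empirical Gibbs risk r(Q) on the n training examples.
    kl_qp: KL divergence KL(Q || P) between posterior Q and prior P.
    n: number of training examples.
    delta: confidence parameter; the bound holds with probability >= 1 - delta.
    """
    complexity = (kl_qp + math.log(2.0 * math.sqrt(n) / delta)) / (2.0 * n)
    return empirical_risk + math.sqrt(complexity)

# Bound minimization as model selection: among candidate posteriors,
# each summarized by its (empirical risk, KL to the prior), pick the one
# with the smallest bound value -- hence the best performance guarantee.
candidates = [(0.05, 20.0), (0.12, 2.0), (0.20, 0.5)]
best = min(candidates, key=lambda c: pac_bayes_bound(c[0], c[1], n=2000))
```

Note the trade-off the bound encodes: a posterior with lower empirical risk but larger KL to the prior can still win if the sample is large enough, since the complexity term shrinks as O(1/√n).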