The Median Hypothesis

Published July 27, 2016, 23:25
ABSTRACT: A typical learning process begins with some prior beliefs about the target concept. These beliefs are refined using evidence, typically a sample. The evidence is used to compute a "posterior belief", which assigns a probability distribution over the hypothesis class. The Bayes optimal hypothesis is the average hypothesis, weighted by the posterior. However, this hypothesis is often prohibitively costly to compute. Therefore, there is a need for an algorithm to construct a hypothesis (either a single hypothesis or some combination), given the posterior belief. Several methods have been proposed for this problem: for example, Gibbs sampling, ensemble methods, and choosing the maximum a posteriori hypothesis. We propose a new method: choosing the median hypothesis. This method is close to the average Gibbs classifier and the Bayes optimal classifier in terms of accuracy, while having the same run-time efficiency during the generalization phase as the maximum a posteriori method. In this talk, we will define a measure of depth for hypotheses, from which we derive the median hypothesis. We present generalization bounds which leverage the PAC-Bayes analysis technique. We present an algorithm to approximate the median hypothesis, and we prove its correctness. Our definition of median is closely related to Tukey's median; in fact, our algorithm provides a polynomial approximation to the problem of finding the Tukey median. This is joint work with Chris J. C. Burges.
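To make the contrast between these prediction rules concrete, the sketch below is a toy, finite illustration, not the algorithm from the talk: a finite hypothesis class of ±1 predictions over a fixed set of points, a posterior over it, and four ways to predict from that posterior (Bayes optimal vote, Gibbs sampling, MAP, and a depth-based median). The depth used here, minimal posterior agreement over the points, is only in the spirit of the Tukey-style depth described in the abstract, and restricting the median search to the class itself is a simplifying assumption; all names in the code are hypothetical.

```python
# Toy illustration (assumed setup, not the authors' algorithm): a finite
# hypothesis class, a posterior over it, and four prediction rules.
import numpy as np

rng = np.random.default_rng(0)

# Each row of H is one hypothesis' +/-1 predictions on m evaluation points.
n_hypotheses, m_points = 50, 20
H = rng.choice([-1, 1], size=(n_hypotheses, m_points))

# Posterior belief over the hypothesis class (here: random, normalized).
posterior = rng.random(n_hypotheses)
posterior /= posterior.sum()

def bayes_optimal(H, posterior):
    """Posterior-weighted majority vote at every point (needs all of H at
    prediction time, which is what makes it costly in general)."""
    return np.sign(posterior @ H)

def gibbs(H, posterior, rng):
    """Draw a single hypothesis according to the posterior."""
    return H[rng.choice(len(H), p=posterior)]

def map_hypothesis(H, posterior):
    """The single most probable hypothesis under the posterior."""
    return H[int(np.argmax(posterior))]

def depth(h, H, posterior):
    """Worst-case posterior agreement with h: at each point, how much
    posterior mass predicts the same label as h? The depth is the minimum
    over points -- a simplified analogue of Tukey's halfspace depth."""
    agree = (H == h)          # (n_hypotheses, m_points) boolean
    mass = posterior @ agree  # posterior mass agreeing with h at each point
    return mass.min()

def median_hypothesis(H, posterior):
    """The deepest hypothesis in the class: a single hypothesis that no
    point lets the posterior 'outvote' by much. Like MAP, predicting with
    it costs a single hypothesis evaluation at generalization time."""
    depths = [depth(h, H, posterior) for h in H]
    return H[int(np.argmax(depths))]

bo = bayes_optimal(H, posterior)
med = median_hypothesis(H, posterior)
print("median agrees with the Bayes optimal vote on",
      (med == bo).mean(), "of the points")
```

Even in this crude setup, the trade-off the abstract describes is visible: the median is a single, cheap-to-evaluate hypothesis like the MAP choice, but it is selected to stay close to the posterior-weighted vote everywhere rather than only to have high posterior mass itself.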