Microsoft Research
Published 17 August 2016, 21:16
There are two related and complementary approaches to studying the theoretical foundations of learning. One is probabilistic in nature; the other is game-theoretic and adversarial. The first approach attempts to find a predictor with low expected loss, or risk, while the second attempts to minimize the regret on each individual sequence of loss functions. Regularization plays a central role in both approaches. Starting from a high-level overview of the two approaches and definitions of risk and regret, I will present examples of regularizers commonly used in single-task, multi-task, and multi-class learning. Then I will try to show how the concept of strong convexity allows us to obtain risk bounds, derive new algorithms and regret bounds, and understand the relationship between probabilistic and adversarial models of learning. (Joint work with John Duchi, Sham Kakade, Shai Shalev-Shwartz, Yoram Singer and Karthik Sridharan)
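As a rough illustration of the adversarial setting the abstract refers to (a sketch for intuition, not material from the talk itself), the snippet below runs online gradient descent on a sequence of strongly convex quadratic losses and measures regret against the best fixed comparator in hindsight. The loss family, step-size schedule, and random sequence are all choices made here for the example; strong convexity of the losses is what permits the 1/t step sizes.

```python
import numpy as np

# Illustrative sketch (not from the talk): online gradient descent on a
# sequence of quadratic losses f_t(w) = 0.5 * ||w - z_t||^2, each of which
# is 1-strongly convex. Regret compares the learner's cumulative loss to
# that of the best fixed point chosen in hindsight.

rng = np.random.default_rng(0)
T, d = 200, 5
zs = rng.normal(size=(T, d))  # the "adversary's" sequence (here: random)

def loss(w, z):
    return 0.5 * np.sum((w - z) ** 2)

def grad(w, z):
    return w - z

w = np.zeros(d)
learner_loss = 0.0
for t, z in enumerate(zs, start=1):
    learner_loss += loss(w, z)          # pay the loss revealed at round t
    w = w - (1.0 / t) * grad(w, z)      # 1/t steps exploit strong convexity

# For quadratic losses the best fixed comparator is the mean of the z_t.
w_star = zs.mean(axis=0)
best_loss = sum(loss(w_star, z) for z in zs)
regret = learner_loss - best_loss
print(regret / T)  # average regret; shrinks at an O(log T / T) rate here
```

With these quadratic losses the update reduces to "follow the leader" (each iterate is the running mean of the points seen so far), one of the simplest cases where strong convexity yields logarithmic regret.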