Risk, Regret and Regularization: A View through the Lens of Strong Convexity

Published August 17, 2016, 21:16
There are two related and complementary approaches to studying the theoretical foundations of learning. One is probabilistic in nature; the other is game-theoretic and adversarial. The first approach attempts to find a predictor with low expected loss, or risk, while the other attempts to minimize regret on individual sequences of loss functions. Regularization plays a central role in both approaches. Starting from a high-level overview of the two approaches and definitions of risk and regret, I will present examples of regularizers commonly used in single-task, multi-task, and multi-class learning. Then, I will try to show how the concept of strong convexity allows us to obtain risk bounds, derive new algorithms and regret bounds, and understand the relationship between probabilistic and adversarial models of learning. (Joint work with John Duchi, Sham Kakade, Shai Shalev-Shwartz, Yoram Singer and Karthik Sridharan)
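The regret notion mentioned in the abstract can be made concrete with a small numerical sketch. This is an illustration, not material from the talk: it runs one-dimensional online gradient descent on quadratic losses f_t(w) = ½(w − z_t)², which is equivalent to follow-the-regularized-leader with a strongly convex ℓ2 regularizer. Strong convexity of the losses, together with a step size decaying like 1/t, is the standard condition under which regret grows only logarithmically in the horizon T rather than linearly. All names (`ogd_regret`, the step-size schedule) are my own for this sketch.

```python
import numpy as np

def ogd_regret(zs):
    """1-d online gradient descent on losses f_t(w) = 0.5*(w - z_t)^2.

    Returns cumulative regret: total loss suffered minus the loss of
    the best fixed point in hindsight (the mean of the z_t).
    """
    w = 0.0
    total_loss = 0.0
    for t, z in enumerate(zs):
        total_loss += 0.5 * (w - z) ** 2   # suffer loss f_t(w_t)
        grad = w - z                        # gradient of f_t at w_t
        w = w - grad / (t + 1)              # step size ~ 1/t (strong convexity)
    w_star = np.mean(zs)                    # best fixed comparator in hindsight
    best_loss = sum(0.5 * (w_star - z) ** 2 for z in zs)
    return total_loss - best_loss

rng = np.random.default_rng(0)
T = 1000
zs = rng.normal(0.0, 1.0, size=T)
regret = ogd_regret(zs)
# Regret stays far below the linear-in-T worst case: O(log T) for
# strongly convex losses with a 1/t step size.
print(regret, T)
```

On this adversarial (individual-sequence) view, no distributional assumption on the z_t is needed; the bound holds for every sequence, which is exactly the contrast with the probabilistic risk framework the abstract draws.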