Research talk: Post-contextual-bandit inference

Published February 8, 2022, 17:19
Speaker: Nathan Kallus, Associate Professor, Cornell University

Contextual bandit algorithms are increasingly replacing non-adaptive A/B tests in e-commerce, healthcare, and policymaking because they can both improve outcomes for study participants and increase the chance of identifying good or even best policies. Nonetheless, to support credible inference on novel interventions at the end of the study, we still want to construct valid confidence intervals on average treatment effects, subgroup effects, or the value of new policies. The adaptive nature of the data collected by contextual bandit algorithms, however, makes this difficult: standard estimators are no longer asymptotically normally distributed and classic confidence intervals fail to provide correct coverage. While this has been addressed in non-contextual settings by using stabilized estimators, the contextual setting poses unique challenges that we tackle for the first time in this paper. We propose the Contextual Adaptive Doubly Robust (CADR) estimator, the first estimator for policy value that is asymptotically normal under contextual adaptive data collection. The main technical challenge in constructing CADR is designing adaptive and consistent conditional standard deviation estimators for stabilization. Extensive numerical experiments using 57 OpenML datasets demonstrate that confidence intervals based on CADR uniquely provide correct coverage.
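As a rough illustration of the stabilization idea (a schematic sketch under assumed notation, not necessarily the exact CADR construction): with contexts X_t, actions A_t drawn from the logged propensities e_t, outcomes Y_t, a fitted outcome model \hat\mu, and a target policy \pi, a variance-stabilized doubly robust estimate of the policy value can be written as

\[
\hat\Gamma_t = \hat\mu\bigl(X_t, \pi(X_t)\bigr) + \frac{\mathbf{1}\{A_t = \pi(X_t)\}}{e_t\bigl(\pi(X_t) \mid X_t\bigr)}\Bigl(Y_t - \hat\mu(X_t, A_t)\Bigr),
\qquad
\hat v = \frac{\sum_{t=1}^{T} \hat\sigma_t^{-1}\,\hat\Gamma_t}{\sum_{t=1}^{T} \hat\sigma_t^{-1}},
\]

where \hat\sigma_t is an estimate, built only from data observed before round t, of the conditional standard deviation of the score \hat\Gamma_t. Normalizing each score by such an adaptive standard deviation estimate is what restores asymptotic normality, and hence valid confidence intervals, despite the adaptive data collection.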

Learn more about the 2021 Microsoft Research Summit: Aka.ms/researchsummit