Research Talk: Enhancing the robustness of massive language models via invariant risk minimization

Published February 8, 2022, 17:19
Speaker: Robert West, Tenure-Track Assistant Professor, EPFL

Despite the dramatic recent progress in natural language processing (NLP) afforded by large pretrained language models, important limitations remain. A growing body of work demonstrates that such models are easily fooled by adversarial attacks and generalize poorly out of distribution, as they tend to learn spurious, non-causal correlations. This talk explores how to reduce the impact of spurious correlations in large language models using the so-called invariance principle, which states that only relationships invariant across training environments should be learned. It includes data showing that language models trained via invariant risk minimization (IRM), rather than traditional empirical risk minimization (ERM), achieve better out-of-distribution generalization.
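To make the contrast with ERM concrete, the IRM idea (in its practical IRMv1 form) adds a per-environment penalty to the usual pooled risk: the squared gradient of each environment's risk with respect to a scalar "dummy classifier" fixed at 1. The sketch below is a minimal illustration, not the talk's actual method: it assumes a linear model with squared loss, and all names (`irm_objective`, the toy environments) are hypothetical.

```python
import numpy as np

def irm_objective(w, envs, lam=1.0):
    """IRMv1-style objective for a linear model f(x) = x @ w with squared loss.

    envs: list of (X, y) pairs, one per training environment.
    The penalty for each environment is the squared gradient of its risk
    with respect to a scalar dummy multiplier s, evaluated at s = 1;
    it is near zero when w is simultaneously optimal in every environment.
    """
    total_risk, total_penalty = 0.0, 0.0
    for X, y in envs:
        pred = X @ w
        resid = pred - y
        risk = np.mean(resid ** 2)            # R_e(s * w) at s = 1
        grad_s = np.mean(2.0 * resid * pred)  # d R_e / d s at s = 1
        total_risk += risk
        total_penalty += grad_s ** 2
    return total_risk + lam * total_penalty

# Two toy environments where a spurious feature flips its correlation
# with the label, while the causal feature stays invariant:
rng = np.random.default_rng(0)
X1 = rng.normal(size=(200, 2)); y1 = X1[:, 0] + 0.9 * X1[:, 1]
X2 = rng.normal(size=(200, 2)); y2 = X2[:, 0] - 0.9 * X2[:, 1]
envs = [(X1, y1), (X2, y2)]

w_invariant = np.array([1.0, 0.0])  # uses only the causal feature
w_spurious = np.array([1.0, 0.9])   # also exploits the spurious feature
print(irm_objective(w_invariant, envs))
print(irm_objective(w_spurious, envs))
```

In this toy setup the spurious predictor achieves zero risk in the first environment but incurs a large penalty (and risk) in the second, so the invariant predictor scores lower under the IRM objective even though plain ERM on pooled data would favor exploiting the spurious feature.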

Learn more about the 2021 Microsoft Research Summit: Aka.ms/researchsummit