Research talk: Knowledgeable pre-trained language models

Published February 8, 2022, 16:29
Speaker: Zhiyuan Liu, Associate Professor, Tsinghua University

Over the past few years, large-scale pretrained models with billions of parameters have improved the state of the art in nearly every natural language processing (NLP) task. These models are fundamentally changing the research and development of NLP and of AI in general. Recently, researchers have been expanding such models beyond natural language text to include additional modalities, such as structured knowledge bases, images, and videos. Against this background, the talks in this session introduce the latest advances in pretrained models and discuss the future of this research frontier. Hear from Zhiyuan Liu, Tsinghua University, in the third of three talks on recent advances and applications of language model pretraining.

Learn more about the 2021 Microsoft Research Summit: Aka.ms/researchsummit