Microsoft Research
Published February 8, 2022, 16:29
Speaker: Zhiyuan Liu, Associate Professor, Tsinghua University
Over the past few years, large-scale pretrained models with billions of parameters have improved the state of the art in nearly every natural language processing (NLP) task. These models are fundamentally changing the research and development of NLP and of AI in general. Recently, researchers have been expanding such models beyond natural language text to include more modalities, such as structured knowledge bases, images, and videos. Against this background, the talks in this session introduce the latest advances in pretrained models and discuss the future of this research frontier. Hear from Zhiyuan Liu, Tsinghua University, in the third of three talks on recent advances and applications of language model pretraining.
Learn more about the 2021 Microsoft Research Summit: Aka.ms/researchsummit