Research talk: Transformer efficiency: From model compression to training acceleration

Published February 8, 2022, 15:48
Speaker: Yu Cheng, Principal Researcher, Microsoft Research Redmond

At Microsoft Research, we approach large-scale AI from many different perspectives, which include not only creating new, bigger models but also developing unique ways of optimizing AI models from training to deployment. One of the main challenges posed by larger AI models is that they are difficult to deploy in an affordable and sustainable way, and they still struggle to learn new concepts and tasks effectively. Join Microsoft researcher Yu Cheng for the first of three lightning talks in this series on efficient and adaptable large-scale AI. See the talks from Microsoft researchers Subho Mukherjee and Guoqing Zheng to learn more about the work Microsoft is doing to improve the efficiency of computation and data in large-scale AI models.

Learn more about the 2021 Microsoft Research Summit: Aka.ms/researchsummit