Microsoft Research
Published April 6, 2021, 20:10
Pretraining large neural language models, such as BERT, has led to impressive gains on many natural language processing (NLP) tasks. However, most pretraining efforts focus on general-domain corpora, such as newswire and web text. Biomedical text is very different from general-domain text, yet biomedical NLP has been relatively underexplored. A prevailing assumption is that even domain-specific pretraining can benefit by starting from general-domain language models.
In this webinar, Microsoft researchers Hoifung Poon, Senior Director of Biomedical NLP, and Jianfeng Gao, Distinguished Scientist, will challenge this assumption by showing that for domains with abundant unlabeled text, such as biomedicine, pretraining language models from scratch results in substantial gains over continual pretraining of general-domain language models.
You will begin by learning how biomedical text differs from general-domain text and why biomedical NLP poses substantial challenges not present in mainstream NLP. You will also learn about the two paradigms for domain-specific language model pretraining and see how pretraining from scratch significantly outperforms mixed-domain pretraining on a wide range of biomedical NLP tasks. Finally, you'll learn about BLURB, our comprehensive benchmark and leaderboard created specifically for biomedical NLP, and see how our biomedical language model, PubMedBERT, sets a new state of the art.
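One concrete difference between the two paradigms is the vocabulary: pretraining from scratch lets the model learn its WordPiece vocabulary from biomedical text, while continual pretraining inherits a general-domain vocabulary that fragments biomedical terms. Below is a minimal sketch of that effect, assuming the Hugging Face transformers library is installed; the PubMedBERT checkpoint ID is an assumption, not taken from this description.

from transformers import AutoTokenizer

# General-domain BERT vocabulary vs. an in-domain vocabulary learned
# from PubMed text (the PubMedBERT Hub ID below is an assumption).
general = AutoTokenizer.from_pretrained("bert-base-uncased")
biomed = AutoTokenizer.from_pretrained(
    "microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext"
)

term = "acetyltransferase"
print(general.tokenize(term))  # fragmented into several WordPiece subwords
print(biomed.tokenize(term))   # typically far fewer pieces in-domain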
Together, you'll explore:
■ How biomedical NLP differs from mainstream NLP
■ A shift in approach to pretraining language models for specialized domains
■ BLURB: a comprehensive benchmark and leaderboard for biomedical NLP
■ PubMedBERT: the state-of-the-art biomedical language model pretrained from scratch on biomedical text
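To make the last item concrete, here is a hedged sketch of querying PubMedBERT as a masked language model with the Hugging Face fill-mask pipeline; the Hub model ID is an assumption, since the description only links to the model page.

from transformers import pipeline

# A minimal sketch of masked-token prediction with PubMedBERT
# (the model ID is an assumption, not stated in this description).
fill = pipeline(
    "fill-mask",
    model="microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext",
)

# BERT-style models use the [MASK] token for the blanked-out position.
for pred in fill("aspirin inhibits [MASK] aggregation."):
    print(pred["token_str"], round(pred["score"], 3))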
Resource list:
■ BioMed NLP Group: microsoft.com/en-us/research/g...
■ Hanover (project page): microsoft.com/en-us/research/p...
■ Deep Learning (group page): microsoft.com/en-us/research/g...
■ BLURB (GitHub): microsoft.github.io/BLURB
■ PubMedBERT (GitHub): microsoft.github.io/BLURB/mode...
■ Domain-Specific Language Model Pretraining for Biomedical Natural Language Processing (paper): microsoft.com/en-us/research/p...
■ Hoifung Poon (profile page): microsoft.com/en-us/research/p...
■ Jianfeng Gao (profile page): microsoft.com/en-us/research/p...
*This on-demand webinar features a previously recorded Q&A session and open captioning.
This webinar originally aired on October 15, 2020.
Explore more Microsoft Research webinars: aka.ms/msrwebinars