Research talk: Differentially private fine-tuning of large language models

Published 27 October 2022, 15:40
We have come a long way in protecting privacy when training ML models, particularly large language models. We recently demonstrated that fine-tuning very large language models, such as GPT-3, with differentially private stochastic gradient descent (DP-SGD) is not only feasible but also yields very promising privacy-utility tradeoffs. In this talk, we highlight the challenges we have overcome over the past year and the opportunities our research enables for a range of product applications.
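To make the technique concrete, below is a minimal DP-SGD sketch in plain PyTorch: each example's gradient is clipped to a fixed L2 norm, Gaussian noise calibrated to that norm is added to the summed gradients, and only then does the optimizer step. This is an illustrative sketch under assumed hyperparameters; the function name dp_sgd_step and the defaults for clip_norm and noise_multiplier are hypothetical placeholders, not the implementation described in the talk.

```python
# Illustrative DP-SGD step in plain PyTorch (a sketch, not the talk's code).
# clip_norm and noise_multiplier are hypothetical placeholder hyperparameters.
import torch

def dp_sgd_step(model, loss_fn, batch_x, batch_y, optimizer,
                clip_norm=1.0, noise_multiplier=1.0):
    """One DP-SGD step: clip each example's gradient, then add Gaussian noise."""
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]

    # 1. Per-example gradients, each clipped to L2 norm <= clip_norm.
    for x, y in zip(batch_x, batch_y):
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = (clip_norm / (norm + 1e-12)).clamp(max=1.0)
        for s, g in zip(summed, grads):
            s.add_(g * scale)

    # 2. Add Gaussian noise calibrated to the clipping norm, average, and step.
    optimizer.zero_grad()
    for p, s in zip(params, summed):
        noise = torch.normal(0.0, noise_multiplier * clip_norm, size=p.shape)
        p.grad = (s + noise) / len(batch_x)
    optimizer.step()
```

In practice, per-example clipping is the expensive part at GPT-3 scale; efficient per-sample gradient computation is one of the engineering challenges the talk alludes to.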

#MSFTResearchSummit

See related sessions in this track: microsoft.com/en-us/research/v...

Learn more about the 2022 Microsoft Research Summit: microsoft.com/en-us/research/e...