Microsoft Research
Published May 24, 2021
Originally a discipline limited to academic circles, machine learning is now increasingly mainstream, being used in more visible and impactful ways. While this growing field presents huge opportunities, it also comes with unique challenges, particularly regarding fairness.
Nearly every stage of the machine learning pipeline—from task definition and dataset construction to testing and deployment—is vulnerable to biases that can cause a system to, at best, underserve users and, at worst, disadvantage already disadvantaged subpopulations.
In this webinar led by Microsoft researchers Jenn Wortman Vaughan and Hanna Wallach, 15-year veterans of the machine learning field, you'll learn how to make detecting and mitigating biases a first-order priority in your development and deployment of ML systems.
Together, you'll explore:
■ The main types of harm that can arise;
■ The subpopulations most likely to be affected;
■ The origins of these harms, strategies for mitigating them, and some recently developed software tools that can help (see the sketch after this list).
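As one concrete illustration of the kind of tooling the last bullet refers to: a minimal sketch of assessing a classifier's behavior across subpopulations, assuming the open-source Fairlearn library (the webinar does not name a specific tool, and the data below is purely illustrative).

```python
# A minimal sketch of disaggregated fairness assessment using Fairlearn's
# MetricFrame. The labels, predictions, and group memberships are toy data.
from fairlearn.metrics import MetricFrame, selection_rate
from sklearn.metrics import accuracy_score

# Toy outcomes from a binary classifier, with a sensitive feature
# (e.g., a demographic group) recorded for each example.
y_true    = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred    = [1, 0, 0, 1, 1, 0, 1, 1]
sensitive = ["A", "A", "A", "A", "B", "B", "B", "B"]

# MetricFrame computes each metric per group, so gaps between
# subpopulations stay visible instead of being averaged away.
mf = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=sensitive,
)

print(mf.overall)       # aggregate metrics over the whole dataset
print(mf.by_group)      # the same metrics broken out for groups A and B
print(mf.difference())  # largest between-group gap for each metric
```

Disaggregating metrics this way is a first assessment step; mitigation (for example, Fairlearn's reduction-based methods) would build on gaps surfaced here.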
𝗥𝗲𝘀𝗼𝘂𝗿𝗰𝗲 𝗹𝗶𝘀𝘁:
■ Machine Learning & AI | NYC (Research group) - microsoft.com/en-us/research/t...
■ FATE: Fairness, Accountability, Transparency, and Ethics in AI (Research group) - microsoft.com/en-us/research/t...
■ Transparency and Intelligibility Throughout the Machine Learning Life Cycle (webinar) - microsoft.com/en-us/research/v...
■ Fairness-related harms in AI systems: Examples, assessment, and mitigation (webinar) - microsoft.com/en-us/research/v...
■ Hanna Wallach (researcher profile) - microsoft.com/en-us/research/p...
■ Jennifer Wortman Vaughan (researcher profile) - microsoft.com/en-us/research/p...
*This on-demand webinar features a previously recorded Q&A session and open captioning.
This webinar originally aired on January 22, 2019
Explore more Microsoft Research webinars: aka.ms/msrwebinars