Microsoft Research
Published June 10, 2020, 18:40
Once a discipline confined to academic circles, machine learning is now increasingly mainstream and is being used in ever more visible and impactful ways. While this growing field presents huge opportunities, it also comes with unique challenges, particularly regarding fairness.
Nearly every stage of the machine learning pipeline—from task definition and dataset construction to testing and deployment—is vulnerable to biases that can cause a system to, at best, underserve users and, at worst, disadvantage already disadvantaged subpopulations.
In this webinar led by Microsoft researchers Jenn Wortman Vaughan and Hanna Wallach, 15-year veterans of the machine learning field, you'll learn how to make detecting and mitigating biases a first-order priority in your development and deployment of ML systems.
Together, you'll explore:
- the main types of harm that can arise;
- the subpopulations most likely to be affected;
- the origins of these harms and strategies for mitigating them;
- and some recently developed software tools to help (a brief illustration follows below).
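The description does not name the tools, but the kind of check they automate can be illustrated with a minimal Python sketch. The sketch below is not from the webinar: the data are synthetic and the metrics (per-group selection rate and accuracy) are assumptions chosen for illustration. It trains a classifier and compares how it behaves across two groups.

# Minimal sketch (illustrative, not from the webinar): comparing a model's
# selection rate and accuracy across two groups as a simple bias check.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic data: two features, a binary label, and a binary group attribute.
X = rng.normal(size=(1000, 2))
group = rng.integers(0, 2, size=1000)  # e.g., a demographic attribute
y = (X[:, 0] + 0.5 * group + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0)

model = LogisticRegression().fit(X_tr, y_tr)
pred = model.predict(X_te)

# Compare the fraction predicted positive (selection rate) and accuracy per group;
# a large gap on either metric is one signal of potential disparate impact.
for g in (0, 1):
    mask = g_te == g
    sel_rate = pred[mask].mean()
    acc = (pred[mask] == y_te[mask]).mean()
    print(f"group {g}: selection rate {sel_rate:.2f}, accuracy {acc:.2f}")

A large gap between groups would be a prompt to revisit the task definition, dataset, or model, the kinds of mitigation strategies the webinar discusses.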
See more at microsoft.com/en-us/research/v...