Microsoft Research
Published March 25, 2021, 20:35
AI has transformed modern life via previously unthinkable feats, from machines that can master the ancient board game Go and self-driving cars to developments we experience more routinely, such as virtual agents and personalized product recommendations. At the same time, these new opportunities have raised new challenges, most notably the potential for AI systems to cause fairness-related harms. Indeed, the fairness of AI systems is one of the key concerns facing society as AI continues to influence our lives in new ways.
In this webinar, Microsoft researchers Hanna Wallach and Miroslav Dudík will guide you through how AI systems can lead to a variety of fairness-related harms. They will then dive deeper into assessing and mitigating two specific types: allocation harms and quality-of-service harms. Allocation harms occur when AI systems allocate resources or opportunities in ways that can have significant negative impacts on people’s lives, often in high-stakes domains like education, employment, finance, and healthcare. Quality-of-service harms occur when AI systems, such as speech recognition or face detection systems, fail to provide a similar quality of service to different groups of people.
Together, you’ll explore:
■ Examples of fairness-related harms and where these harms originate
■ Assessment methods for allocation harms and quality-of-service harms
■ Unfairness mitigation algorithms, including when they can and can’t be used and what their advantages and disadvantages are
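The quality-of-service assessment described above boils down to evaluating a model's performance separately for each group and comparing the results (the kind of disaggregated analysis that tools like Fairlearn support). As a minimal illustrative sketch, not the speakers' method, here is per-group accuracy with hypothetical toy data:

```python
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Compute accuracy separately for each group (disaggregated evaluation)."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        total[g] += 1
        correct[g] += int(t == p)
    return {g: correct[g] / total[g] for g in total}

# Hypothetical toy data: true labels, model predictions, and a
# sensitive attribute splitting examples into groups "a" and "b".
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 1, 1, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

rates = accuracy_by_group(y_true, y_pred, groups)
gap = max(rates.values()) - min(rates.values())  # accuracy gap between groups
```

A large gap between the best- and worst-served groups is one signal of a quality-of-service harm; the webinar covers how to assess such gaps and when mitigation algorithms apply.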
Resource list:
■ Microsoft’s RAI resource center: microsoft.com/en-us/ai/respons...
■ Microsoft’s FATE research group: microsoft.com/en-us/research/t...
■ Fairlearn toolkit: fairlearn.github.io
■ Hanna Wallach (Researcher Profile): microsoft.com/en-us/research/p...
■ Miro Dudik (Researcher Profile): microsoft.com/en-us/research/p...
*This on-demand webinar features a previously recorded Q&A session and open captioning.
Explore more Microsoft Research webinars: aka.ms/msrwebinars