Lightning talks: Advances in fairness in AI: New directions

Published February 8, 2022, 16:45
Over the past few years, we’ve seen that artificial intelligence (AI) and machine learning (ML) provide us with new opportunities, but they also raise new challenges. Most notably, these challenges have highlighted the various ways in which AI systems can promote unfairness or reinforce existing societal stereotypes. While we can often recognize fairness-related harms in AI systems when we encounter them, there’s no one-size-fits-all definition of fairness that applies to all AI systems in all contexts. Additionally, there are many reasons why AI systems can behave unfairly. In this session, we discuss strategies for mitigating fairness-related harms and the research questions that arise when working on fairness in AI systems. We cover fairness in recommendation systems, show how checklists can support fairness across the AI lifecycle, and discuss research questions on the challenges of measuring computational harms and the trade-offs in choosing an appropriate fairness metric.
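To make the idea of metric trade-offs concrete, the short Python sketch below (an illustrative addition, not material from the session) computes two standard group-fairness measures, a demographic parity gap and a true-positive-rate gap, on a toy dataset. The helper names and the toy data are assumptions chosen only for illustration.

# Illustrative sketch: two common group-fairness metrics can tell different stories
# about the same classifier. Data and function names are made up for this example.
import numpy as np

def demographic_parity_difference(y_pred, group):
    # Gap in positive-prediction rates between the two groups.
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return abs(rates[0] - rates[1])

def true_positive_rate_difference(y_true, y_pred, group):
    # Gap in recall (true-positive rate) between the two groups.
    tprs = []
    for g in np.unique(group):
        mask = (group == g) & (y_true == 1)
        tprs.append(y_pred[mask].mean())
    return abs(tprs[0] - tprs[1])

# Toy example: the two groups have different base rates of positive labels.
y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0, 0, 0])
y_pred = np.array([1, 1, 0, 0, 0, 0, 1, 0, 0, 0])
group  = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

print("Demographic parity gap:", demographic_parity_difference(y_pred, group))
print("True-positive-rate gap:", true_positive_rate_difference(y_true, y_pred, group))

Because the two gaps generally differ, reducing one can leave the other unchanged or make it worse, which is one reason the appropriate metric depends on the system and its context.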

Introduction
Speaker: Amit Sharma, Senior Researcher, Microsoft Research India

Fairness via post-processing in web-scale recommender systems
Speaker: Kinjal Basu, Tech Lead for Responsible AI, LinkedIn

Designing checklists to support fairness in the AI lifecycle
Speaker: Michael Madaio, Postdoctoral Researcher, Microsoft Research NYC

Challenges to the discovery and measurement of computational harms
Speaker: Alexandra Olteanu, Principal Researcher, Microsoft Research Montréal

A fine balance: Individual-fairness and group-fairness
Speaker: Amit Deshpande, Senior Researcher, Microsoft Research India

Learn more about the 2021 Microsoft Research Summit: Aka.ms/researchsummit