Microsoft Research
Published February 8, 2022, 16:46
Speakers:
Hal Daumé III, Sr Principal Researcher, Microsoft Research NYC
Steven Bird, Professor, Charles Darwin University
Su Lin Blodgett, Postdoctoral Researcher, Microsoft Research Montréal
Margaret Mitchell, CEO & Research Scientist, Ethical AI LLC
Hanna Wallach, Partner Research Manager, Microsoft Research NYC
Language is one of the main ways in which people understand and construct the social world. Current language technologies can contribute positively to this process, by challenging existing power dynamics, or negatively, by reproducing or exacerbating existing social inequities. In this panel, we will discuss existing concerns and opportunities related to the fairness, accountability, transparency, and ethics (FATE) of language technologies and the data they ingest or generate. It is important to address these matters because language technologies might surface, replicate, exacerbate, or even cause a range of computational harms: from exposing offensive speech or reinforcing stereotypes, to more subtle issues, such as nudging users towards undesirable patterns of behavior or triggering memories of traumatic events. In this session, we will cover such critical questions as: How can we reliably measure fairness-related and other computational harms? Whose data is included in training a model, and who is excluded as a result? How do we better foresee potential computational harms from language technologies?
Learn more about the 2021 Microsoft Research Summit: Aka.ms/researchsummit