Microsoft Research
Published May 20, 2021, 17:49
Auditing natural language processing (NLP) systems for computational harms remains an elusive goal. Doing so, however, is critical given the proliferation of language technologies and applications enabled by increasingly powerful natural language generation and representation models. Computational harms arise not only from the content people produce, but also from how content is embedded, represented, and generated by large-scale, sophisticated language models. This webinar covers the challenges of locating and measuring potential harms that language technologies, and the data they ingest or generate, might surface, exacerbate, or cause. Such harms range from more overt issues, like surfacing offensive speech or reinforcing stereotypes, to more subtle ones, like nudging users toward undesirable patterns of behavior or triggering memories of traumatic events.
Join Microsoft researchers Su Lin Blodgett and Alexandra Olteanu, from the FATE Group at Microsoft Research Montréal, as they examine pitfalls in some state-of-the-art approaches to measuring computational harms in language technologies. For such measurements to be effective, it is important to clearly articulate both 1) the construct to be measured and 2) how the measurements operationalize that construct. The webinar also surveys approaches practitioners can take to proactively identify issues that might not be on their radar, and thus track and measure a wider range of issues more effectively.
Together, you'll explore:
■ Possible pitfalls when measuring computational harms in language technologies
■ Challenges to identifying what harms we should be measuring
■ Steps toward anticipating computational harms
Resource list:
■ A Critical Survey of “Bias” in NLP (Publication): microsoft.com/en-us/research/p...
■ When Are Search Completion Suggestions Problematic? (Publication): microsoft.com/en-us/research/p...
■ Social Data (Publication): microsoft.com/en-us/research/p...
■ Characterizing Problematic Email Reply Suggestions (Publication): microsoft.com/en-us/research/p...
■ Overcoming Failures of Imagination in AI Infused System Development and Deployment (Publication): microsoft.com/en-us/research/p...
■ Defining Bias with Su Lin Blodgett (Podcast): radicalai.org/bias-in-nlp
■ Language, Power and NLP (Podcast): open.spotify.com/episode/28fEQ...
■ Su Lin Blodgett (researcher profile): microsoft.com/en-us/research/p...
■ Alexandra Olteanu (researcher profile): microsoft.com/en-us/research/p...
*This on-demand webinar features a previously recorded Q&A session and open captioning.
This webinar originally aired on May 13, 2021
Explore more Microsoft Research webinars: aka.ms/msrwebinars