Panel: Content moderation beyond the ban: Reducing toxic, misleading, and low-quality content
Microsoft Research
Published February 8, 2022, 16:41
Speakers:
Tarleton Gillespie, Senior Principal Researcher, Microsoft Research New England
Zoe Darmé, Senior Manager, Google
Ryan Calo, Lane Powell and D. Wayne Gittinger Professor, University of Washington School of Law
Sarita Schoenebeck, Associate Professor, University of Michigan
Charlotte Willner, Executive Director, Trust & Safety Professional Association
Public debate about content moderation focuses almost exclusively on removal, such as what is deleted and who is suspended. But what about content that is identified as “borderline,” which almost—but not quite—violates the guidelines? Faced with an expanding sense of responsibility, many platform companies have started identifying this type of content, as well as content that may be toxic, misleading, or harmful in the aggregate. Rather than remove it, they can minimize its effect by taking some of the following approaches: reduce its visibility in recommendations, limit its discoverability in search, add labels or warnings, or provide fact-checks or additional context. Tarleton Gillespie (Senior Principal Researcher at Microsoft) and Zoe Darmé (Senior Manager of Search at Google) host a panel that includes Sarita Schoenebeck (Associate Professor, School of Information, University of Michigan), Ryan Calo (Professor of Law, University of Washington), and Charlotte Willner (Founding Executive Director of the Trust & Safety Professional Association).
Join us as they discuss these techniques and the questions they raise, such as: How is such content being identified? Are these approaches effective? How do users respond? How can platforms be transparent and accountable for such interventions? What are the ethical and practical implications of these approaches?
Learn more about the 2021 Microsoft Research Summit: Aka.ms/researchsummit