Microsoft Research
Published February 17, 2023, 15:49
The COVID-19 pandemic has brought us the first global social media infodemic. While fighting this infodemic is typically thought of in terms of factuality, the problem is much broader as malicious content includes not only "fake news", rumors, and conspiracy theories, but also hate speech, racism, xenophobia, panic, and mistrust in authorities, among others. Thus, we argue for the need for a holistic approach combining the perspectives of journalists, fact-checkers, policymakers, social media platforms, and society as a whole.
We further argue for the need to analyze entire news outlets, which can be done in advance; then, we can fact-check a news article before it is even written, by checking how trustworthy the outlet that publishes it is (which is what journalists actually do). We will show how this can be automated by drawing on a variety of information sources.
The infodemic is often described using terms such as "fake news", which mislead people into focusing exclusively on factuality and ignoring the other half of the problem: the potential malicious intent. We aim to bridge this gap by focusing on the detection of specific propaganda techniques in text, such as appeals to emotions, fear, or prejudices, and logical fallacies. This is the target of the ongoing SemEval-2023 Task 3, which focuses on multilingual aspects of the problem, covering English, French, German, Italian, Polish, and Russian. We further present extensions of this work to the automatic analysis of various types of harmful memes: from propaganda, to harmfulness and harm-target identification, to role labeling in terms of who is portrayed as hero, villain, or victim, and to generating natural-language explanations.
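To illustrate the framing, detecting propaganda techniques in text is naturally a multi-label problem: a single passage can employ several techniques at once. The toy sketch below uses simple keyword triggers; the technique labels and trigger words are hypothetical illustrations, not the actual SemEval-2023 Task 3 taxonomy or the systems discussed in the talk.

```python
# Toy multi-label tagger for propaganda techniques.
# Labels and trigger keywords are illustrative placeholders only.
TECHNIQUE_KEYWORDS = {
    "appeal_to_fear": {"catastrophe", "deadly", "threat"},
    "loaded_language": {"outrageous", "disgraceful"},
    "doubt": {"allegedly", "so-called"},
}

def tag_techniques(text: str) -> list[str]:
    """Return every technique whose trigger words appear in the text."""
    tokens = {t.strip(".,!?\"'").lower() for t in text.split()}
    return sorted(
        label
        for label, keywords in TECHNIQUE_KEYWORDS.items()
        if tokens & keywords  # non-empty intersection => label fires
    )

print(tag_techniques("The so-called experts hide a deadly threat!"))
# → ['appeal_to_fear', 'doubt']
```

Note that one sentence triggers two labels, which is why the task is multi-label rather than single-class; real systems replace the keyword lookup with trained classifiers.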
Learn more about MARI: microsoft.com/en-us/research/g...