
Evaluating the Effectiveness of Mitigation Policies Against Disinformation

By David J. Butts and Michael S. Murillo

Disinformation, misinformation, and fake news have proliferated in recent years. The development of powerful chatbots, such as ChatGPT, has further fueled concerns about the potential for information manipulation with malicious intent [7]. Disinformation campaigns impacted the 2016 U.S. presidential election [1, 5], contributed to vaccine hesitancy during the COVID-19 pandemic [2, 4, 9], and aided the rise of movements like QAnon [8]. Researchers have developed multiple methods to identify and combat disinformation, including disinformation tracking, bot detection, and credibility scoring. Nevertheless, the spread of malicious information remains a significant problem [6]. We have therefore explored the effectiveness of various mitigation strategies, such as content moderation, user education, and counter-campaigns, to combat disinformation [3].
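To make the idea of "evaluating a mitigation policy" concrete, the sketch below shows one simple way such a comparison could be set up: an SIR-style compartmental model of information spread in which a hypothetical mitigation parameter scales down the transmission rate. The model form, the simulate function, and all parameter values (beta, gamma, mitigation) are illustrative assumptions, not the model used in the study.

```python
# Illustrative sketch only: a simple SIR-style compartmental model of
# disinformation spread with a hypothetical "mitigation" factor that
# scales down the transmission rate (e.g., stronger content moderation).
# Model form and parameter values are assumptions, not the authors' model.

def simulate(beta, gamma, mitigation, days=100, dt=0.1, s0=0.99, i0=0.01):
    """Forward-Euler integration of dS/dt = -(1 - mitigation)*beta*S*I,
    dI/dt = (1 - mitigation)*beta*S*I - gamma*I, dR/dt = gamma*I."""
    s, i, r = s0, i0, 0.0
    peak_i = i
    for _ in range(int(days / dt)):
        new_adopters = (1.0 - mitigation) * beta * s * i  # susceptible users adopting the story
        recoveries = gamma * i                            # users who stop spreading it
        s -= new_adopters * dt
        i += (new_adopters - recoveries) * dt
        r += recoveries * dt
        peak_i = max(peak_i, i)
    return peak_i, r  # peak spreading fraction and cumulative reach

if __name__ == "__main__":
    # Sweep the mitigation strength and compare outcomes.
    for m in (0.0, 0.3, 0.6):
        peak, total = simulate(beta=0.4, gamma=0.1, mitigation=m)
        print(f"mitigation={m:.1f}: peak share={peak:.3f}, total reached={total:.3f}")
```

Sweeping the mitigation parameter in a toy model like this shows how a policy's effect can be summarized by a few outcome measures, such as the peak spreading fraction and the total reach of a false story.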
