When Do Misinformation Policies (Not) Work?

Teaser: Several policies have been proposed, by platforms and regulators alike, to combat the spread and influence of misinformation. We analyze how these policies affect social media users who update their beliefs in different ways: via sophisticated (Bayesian) inference or via naive (rule-of-thumb) updating. These commonly suggested policies have drastically different effects depending on the population, and many of them can backfire if not deployed with caution.

Abstract: We use a simple model to analyze several policies currently proposed in the public sphere to curb the influence of misinformation. We show that the efficacy of these policies crucially depends on the strategic sophistication and reasoning abilities of the population. We focus on the following policies: censorship, where news can be moderated by governments or social media platforms; content diversification, where agents are shown news representing different viewpoints or news that runs counter to their prevailing beliefs; accuracy nudging, where agents are encouraged to think about news more critically; and performance targets, where social media outlets try to regulate the amount of misinformation on their platforms. We show that policies that work well for naive agents can perform poorly or backfire completely for sophisticated agents, and vice versa. This highlights the importance of sophistication as a factor that regulators should consider when deploying policies to fight misinformation.
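
To make the Bayesian/naive distinction concrete, here is a minimal sketch in Python. The signal structure (a misinformation rate `m`, a source accuracy `q`) and the fixed-weight naive rule are illustrative assumptions of ours, not the paper's model:

```python
# Hypothetical sketch: how a Bayesian vs. a naive agent updates beliefs
# about a binary state after seeing a news item reporting "state = 1".
# The parameters m (misinformation rate) and q (source accuracy) are
# assumptions for illustration only.

def bayesian_update(prior: float, report: int, q: float, m: float) -> float:
    """Posterior that the state is 1, accounting for possible misinformation.

    With probability (1 - m) the item truthfully reports the state with
    accuracy q; with probability m it is misinformation reporting at random.
    """
    like_if_1 = (1 - m) * (q if report == 1 else 1 - q) + m * 0.5
    like_if_0 = (1 - m) * (1 - q if report == 1 else q) + m * 0.5
    return prior * like_if_1 / (prior * like_if_1 + (1 - prior) * like_if_0)

def naive_update(prior: float, report: int, w: float = 0.5) -> float:
    """Rule of thumb: move a fixed fraction w toward the reported state,
    ignoring how likely the item is to be misinformation."""
    return (1 - w) * prior + w * report

if __name__ == "__main__":
    prior, q = 0.5, 0.8
    for m in (0.0, 0.3, 0.9):  # increasing share of misinformation
        b = bayesian_update(prior, report=1, q=q, m=m)
        n = naive_update(prior, report=1)
        print(f"m={m:.1f}: Bayesian posterior={b:.3f}, naive posterior={n:.3f}")
```

Running this, the Bayesian agent discounts the report as `m` grows (the posterior drifts back toward the prior), while the naive agent's belief jumps by the same amount regardless of how much misinformation circulates. This gap in responsiveness is the kind of mechanism through which a single policy can have opposite effects on the two populations.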