“Learning in a Post-Truth World”

Teaser: In a world without misinformation, it is perhaps unsurprising that more sophisticated agents learn better than naive agents. In this paper, we reveal the rather shocking conclusion that once misinformation is introduced, the learning mechanism of sophisticated agents unravels more quickly than that of naive agents. In many settings (e.g., polarized populations), sophisticated agents (rationally) uphold their (incorrect) beliefs and fail to learn more often than their naive counterparts.

Abstract: Misinformation has emerged as a major societal challenge in the wake of the 2016 U.S. elections, Brexit, and the COVID-19 pandemic. One of the most active areas of inquiry into misinformation examines how people's cognitive sophistication affects their susceptibility to misleading content. In this paper, we capture sophistication by studying how misinformation affects the two canonical models of the social learning literature: sophisticated (Bayesian) and naive (DeGroot) learning. We show that sophisticated agents can be more likely to fall for misinformation. Our model helps explain several experimental and empirical findings from cognitive science, psychology, and the social sciences. It also shows that the intuitions developed in a vast social learning literature should be approached with caution when making policy decisions in the presence of misinformation. We conclude by discussing the relationship between misinformation and increased partisanship, and provide an example of how our model can inform the actions of policymakers trying to contain the spread of misinformation.
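
To make the distinction concrete, the sketch below contrasts the two canonical updating rules named in the abstract: naive (DeGroot) agents repeatedly average their neighbors' beliefs through a row-stochastic trust matrix, while sophisticated (Bayesian) agents update a prior with Bayes' rule as signals arrive. This is a minimal illustration, not the paper's model; the three-agent network, trust weights, signal accuracy, and signal sequence are arbitrary assumptions chosen for exposition.

```python
import numpy as np

# Illustrative sketch only (assumed parameters, not the paper's model).

# --- Naive (DeGroot) learning: repeated weighted averaging of neighbors' beliefs ---
W = np.array([              # row-stochastic trust matrix (assumed)
    [0.6, 0.2, 0.2],
    [0.3, 0.4, 0.3],
    [0.1, 0.3, 0.6],
])
beliefs = np.array([0.9, 0.5, 0.1])   # initial beliefs that the state is "true"
for _ in range(50):
    beliefs = W @ beliefs              # x(t+1) = W x(t)
print("DeGroot beliefs after repeated averaging:", beliefs.round(3))

# --- Sophisticated (Bayesian) learning: updating a prior with Bayes' rule ---
prior = 0.5          # prior probability that the state is "true" (assumed)
accuracy = 0.7       # assumed probability that a signal matches the true state
signals = [1, 1, 0, 1]   # observed binary signals; 1 supports "true"
posterior = prior
for s in signals:
    like_true = accuracy if s == 1 else 1 - accuracy
    like_false = 1 - accuracy if s == 1 else accuracy
    posterior = (like_true * posterior) / (
        like_true * posterior + like_false * (1 - posterior)
    )
print("Bayesian posterior after signals:", round(posterior, 3))
```

In this toy setting the DeGroot agents converge to a common weighted average of their initial opinions, while the Bayesian agent's posterior tracks the evidence in the signals; the paper's contribution concerns how these dynamics change once some signals are misinformation.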