Teaser: Strong echo chambers on social media (whereby users tend to follow or friend other users with similar ideological beliefs) lead to more viral misinformation but rarely increase the virality of highly reliable content. As a result, we find that platform recommendation algorithms curate “endogenous” echo chambers that limit exposure to counter-attitudinal content, especially when this content is likely to contain misinformation. Some regulatory measures can mitigate these platform incentives and reduce the spread of misinformation, but, if not designed carefully, they can actually exacerbate it.
We present a model of online content sharing where agents sequentially observe an article and must decide whether to share it with others. This content may or may not contain misinformation. Agents gain utility from positive social media interactions but do not want to be called out for propagating misinformation. We characterize the (Bayesian-Nash) equilibria of this social media game and show that sharing exhibits strategic complementarity. Our first main result establishes that the impact of homophily on content virality is non-monotone: homophily reduces the broader circulation of an article, but it creates echo chambers that impose less discipline on the sharing of low-reliability content. This insight underpins our second main result, which demonstrates that social media platforms interested in maximizing engagement tend to design their algorithms to create more homophilic communication patterns (“filter bubbles”). We show that platform incentives to amplify misinformation are particularly pronounced when there is greater polarization and more divisive content. Finally, we discuss various regulatory solutions to such platform-manufactured misinformation.
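The core mechanism can be illustrated with a toy simulation. The sketch below is not the paper's model: the utility function, parameter values, and the `share_utility`/`simulate_cascade` names are illustrative assumptions. It captures the qualitative logic that a homophilic audience raises the social benefit of sharing while weakening the discipline (risk of being called out) on low-reliability content, so higher homophily flips the sharing decision for unreliable articles but matters little for reliable ones.

```python
import random

def share_utility(reliability, homophily, benefit=1.0, penalty=2.0):
    """Toy expected utility of sharing (illustrative, not the paper's specification).

    - A more homophilic audience is more likely to approve of the share.
    - Cross-cutting (non-homophilic) ties are the ones likely to call out
      misinformation, so the expected penalty falls with homophily.
    """
    p_approve = 0.5 + 0.5 * homophily                       # like-minded audience approves more
    p_called_out = (1.0 - reliability) * (1.0 - homophily)  # only unreliable content risks call-outs
    return benefit * p_approve - penalty * p_called_out

def simulate_cascade(reliability, homophily, n_agents=1000, seed=0):
    """Count how many of n_agents share, given small idiosyncratic taste shocks."""
    rng = random.Random(seed)
    return sum(
        1 for _ in range(n_agents)
        if share_utility(reliability, homophily) + rng.gauss(0, 0.1) > 0
    )
```

Under these assumed parameters, a low-reliability article (reliability 0.2) has negative sharing utility in a diverse audience but positive utility in a strongly homophilic one, while a high-reliability article (reliability 0.9) is shared in both cases — mirroring the non-monotone effect of homophily on the virality of low- versus high-reliability content.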