When Should Platforms Break Echo Chambers?

Abstract: Recent calls for regulation of social media platforms argue that they serve as conduits of misinformation and extremism. In response, platforms such as Reddit have taken actions to combat the spread of misinformation, for example by banning entire communities or by quarantining them to make their content less accessible. From a platform operations perspective, when are these interventions optimal for reducing the spread of misleading ideas? Does banning communities help reduce the potential for incorrect or even harmful actions originating on the platform? In this paper, we build a model of how users join communities and how their beliefs evolve as a function of the beliefs of the users in the communities they participate in. At the center of our model is the observation that many of these problematic communities are echo chambers: they consist mostly of users who share similar ideologies and repeat the same information to each other. As a result, (mis)information that agrees with the general sentiment of the community thrives while opposing views are shut down. We show that this view is reductive: echo chambers can sometimes be useful in containing misinformation on the platform. When a community is broken up, its members end up spending more time interacting in other communities and, as a result, “infect” a broader segment of the platform’s user base. Using Reddit’s interventions on r/The_Donald as a case study, we find that sentiment spillovers to adjacent communities (e.g., r/Conservative) indeed play a pivotal role. Our model’s predictions provide a framework for redesigning platform operations to most effectively curb the negative consequences of echo chambers on social media.
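To make the containment intuition concrete, the following is a minimal toy sketch (not the paper’s actual model) using DeGroot-style belief averaging. It assumes a hypothetical two-community setup: a small echo chamber whose members start fully misinformed (belief 1.0) and a larger mainstream community that starts accurate (belief 0.0). We compare leaving the echo chamber intact against dissolving it and dispersing its members into the mainstream.

```python
import numpy as np

# Hypothetical parameters for illustration only.
n_echo, n_main = 20, 80
echo = np.ones(n_echo)    # echo-chamber members start fully misinformed
main = np.zeros(n_main)   # mainstream users start accurate

def step(beliefs, weight=0.5):
    # DeGroot-style update: each user moves partway toward
    # the mean belief of the community they interact in.
    return (1 - weight) * beliefs + weight * beliefs.mean()

# Scenario A: echo chamber left intact; the two communities evolve separately.
a_echo, a_main = echo.copy(), main.copy()
for _ in range(10):
    a_echo, a_main = step(a_echo), step(a_main)

# Scenario B: echo chamber dissolved; its members now average with everyone.
b_all = np.concatenate([echo, main])
for _ in range(10):
    b_all = step(b_all)

print(a_main.mean())          # 0.0: misinformation stays contained
print(b_all[n_echo:].mean())  # former mainstream users drift toward 0.2
```

Under these toy dynamics, leaving the echo chamber intact keeps the mainstream community's beliefs untouched, while dissolving it pulls the entire population toward the population-wide mean, illustrating the spillover effect the abstract describes.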