Abstract: Recent calls for regulation of social media platforms argue that they serve as conduits of extremism. Several platforms have responded by banning communities that peddle extreme or misleading ideas. These communities are usually echo chambers: groups of users with similar ideologies repeating the same information to one another. This amplifies harmful beliefs and makes them more likely to escalate into dangerous offline actions. We develop a novel community formation model to show that this traditional view of echo chambers is incomplete, and that echo chambers can sometimes even lead to an overall reduction in harmful sentiment on the platform. Our model offers a nuanced understanding of these community dynamics and of how they shape the structure of optimal interventions in non-trivial ways. For example, policies that successfully contain extremism in the short run can be the very policies that sow the seeds of extremism in the long run, and vice versa. We provide several such insights that platforms and policymakers can use as a starting point for developing effective interventions to reduce extremism and misinformation.