Abstract: Social media platforms develop recommendation algorithms to optimize objectives that are often distinct from the objectives of their users or of society. For example, a platform may be incentivized to keep users engaged for as long as possible in order to maximize advertising revenue. As a result, such algorithms can have socially undesirable consequences, such as promoting viral misinformation, catchy titles, or one-sided perspectives. We propose a novel experiment in which users interact with a news feed that resembles a social media platform like Facebook under various intuitive ranking algorithms. Participants are randomly assigned to one of four groups, each with a unique algorithm for ranking content. Those in the control group receive a randomized feed, whereas those in the three treatment groups are assigned to a preference-based feed, a friend-based feed, or a combination of both. We compare user engagement across all four groups to understand how ranking algorithms affect engagement and, in turn, platform incentives to recommend more exploitative content. Finally, we suggest interventions that better align platform behavior with societal objectives.