Social Media Bubbles

Bursting the (News) Bubble

By Casey Moffitt and Linsey Maughan

Ever wonder why Facebook recommends news stories that seem tailored to you? Researchers at Illinois Institute of Technology are studying how the algorithms that offer those recommendations can create filter bubbles, exposing readers mainly to stories that skew toward their existing political views.

Learning how these algorithms work and how they affect readers is a crucial first step in understanding the consequences that can occur as people are drawn into these bubbles. Mustafa Bilgic, associate professor of computer science at Illinois Tech; Matthew Shapiro, professor of political science; and their collaborators recently published a paper on their research, “The Interaction Between Political Typology and Filter Bubbles in News Filter Algorithms,” in Proceedings of the Web Conference 2021. The National Science Foundation provided funding support.

To conduct their study, the researchers gathered and curated a collection of more than 900,000 news articles and opinion essays from 41 sources, each annotated by topic and partisan lean. They then ran a simulation to investigate how different algorithmic strategies affect the formation of filter bubbles. Drawing on Pew Research Center studies of political typologies, they found that these effects vary with a user’s pre-existing preferences.
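In rough terms, a simulation of this kind pairs a modeled reader with an engagement-maximizing recommender and tracks what the reader ends up seeing. The following sketch is a toy illustration of the idea; the names, the click model, and the numbers are all assumptions made for the example, not the paper’s actual code.

```python
# A minimal sketch of a recommender-feedback simulation of the kind the
# study describes. The click model and all values here are hypothetical
# illustrations, not the paper's implementation.
import random

random.seed(0)

TOPICS = ["economy", "immigration", "health"]
LEANS = [-1.0, 0.0, 1.0]  # left, center, right partisan lean

# Each article carries a topic and a partisan lean, mirroring the
# annotated corpus described above (at toy scale).
articles = [{"topic": random.choice(TOPICS), "lean": random.choice(LEANS)}
            for _ in range(500)]

def click_probability(user_lean, article):
    """Hypothetical user model: a click is likelier when the article's
    lean is close to the reader's pre-existing preference."""
    return max(0.05, 1.0 - abs(user_lean - article["lean"]) / 2.0)

def simulate(user_lean, rounds=1000):
    """Greedy, engagement-maximizing policy: from a candidate pool,
    always recommend the article the reader is most likely to click."""
    seen_leans = []
    for _ in range(rounds):
        rec = max(random.sample(articles, 20),
                  key=lambda a: click_probability(user_lean, a))
        if random.random() < click_probability(user_lean, rec):
            seen_leans.append(rec["lean"])
    return seen_leans

clicks = simulate(user_lean=1.0)
print("mean lean of clicked articles:", sum(clicks) / len(clicks))
```

Run with a strongly partisan simulated reader, the greedy policy steadily narrows what that reader sees, which is the feedback loop behind a filter bubble.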

“If the algorithm shows you only the news that it thinks you are going to like, to maximize its chances that you will click on it, you may not know that these other perspectives and these other news even exist,” says Bilgic, who served as the principal investigator on the project. “And because all of it is done behind the scenes, you probably wouldn’t notice that automated algorithms are filtering and selecting news for you.”

The study examined two common recommendation approaches: collaborative filtering, which makes recommendations based on the preferences of people with similar reading histories, and content filtering, which makes recommendations based on the content of the articles themselves. The filter bubbles created by these two approaches were very different. Content-based filters rely on the partisan language used in the articles to make recommendations, while collaborative filters recommend the articles that are popular among the group of users whose preferences best match a specific reader’s.
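Both families can be sketched in a few lines of code. The feature vectors, similarity measures, and neighborhood size below are illustrative assumptions, not the implementations evaluated in the study:

```python
# Toy versions of the two recommender families described above.
import numpy as np

rng = np.random.default_rng(0)

n_users, n_articles, n_features = 50, 200, 10
article_feats = rng.normal(size=(n_articles, n_features))  # e.g., text features
ratings = rng.integers(0, 2, size=(n_users, n_articles))   # 1 = user clicked

def content_based(user, k=5):
    """Recommend articles most similar (cosine) to what the user clicked.
    If partisan wording dominates the features, the recommendations
    inherit that partisan signal."""
    profile = article_feats[ratings[user] == 1].mean(axis=0)
    sims = article_feats @ profile / (
        np.linalg.norm(article_feats, axis=1) * np.linalg.norm(profile) + 1e-9)
    sims[ratings[user] == 1] = -np.inf  # skip already-read articles
    return np.argsort(sims)[-k:][::-1]

def collaborative(user, k=5):
    """Recommend what the most similar users clicked: popularity among
    the user's nearest neighbors drives the ranking."""
    sims = ratings @ ratings[user]          # co-click counts with each user
    sims[user] = 0                          # exclude the user themself
    neighbors = np.argsort(sims)[-10:]      # 10 most similar users
    scores = ratings[neighbors].sum(axis=0).astype(float)
    scores[ratings[user] == 1] = -np.inf    # skip already-read articles
    return np.argsort(scores)[-k:][::-1]

print("content-based picks:", content_based(0))
print("collaborative picks:", collaborative(0))
```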

The study shows that content-based recommendations are susceptible to biases based on distinctive partisan language used on a given topic, leading to over-recommendation of the most polarizing topics. Collaborative filtering recommenders, on the other hand, are susceptible to the majority opinion of users, leading to the most popular topics being recommended regardless of user preferences.
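The popularity effect in collaborative filtering is easy to demonstrate in miniature. In the following sketch, where all data are invented, clicks follow a skewed popularity distribution, and nearly every simulated user ends up with the same head-of-the-distribution articles at the top of their recommendations:

```python
# A self-contained illustration of the popularity bias attributed to
# collaborative filtering: when a few articles dominate everyone's
# history, neighbor-based scores surface them for every user, whatever
# that user's own preferences. Setup and numbers are invented.
import numpy as np

rng = np.random.default_rng(1)
n_users, n_articles = 100, 50

# Skewed popularity: article 0 is read by almost everyone,
# the tail by almost no one.
popularity = 0.9 * 0.8 ** np.arange(n_articles)
ratings = (rng.random((n_users, n_articles)) < popularity).astype(int)

def collaborative_top(user):
    sims = ratings @ ratings[user]          # co-click counts
    sims[user] = 0
    neighbors = np.argsort(sims)[-10:]      # 10 most similar users
    scores = ratings[neighbors].sum(axis=0).astype(float)
    scores[ratings[user] == 1] = -np.inf    # skip already-read articles
    return int(np.argmax(scores))

tops = [collaborative_top(u) for u in range(n_users)]
# Most users receive the same few popular articles as their top pick.
print("distinct top recommendations across 100 users:", len(set(tops)))
```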