Algorithms Really Do Drive Political Polarization, and This AI Tool Lets Users Avoid It
Researchers used a browser extension to rearrange people's X feeds, reducing their polarizing effect
By Simon Makin edited by Sarah Lewin Frasier

Illustration by Thomas Fuchs
People often blame social media algorithms that prioritize extreme content for increasing political polarization, but this effect has been difficult to prove. Only the owners of the platforms have access to their algorithms, so researchers cannot test possible changes in platform behavior without the (increasingly rare) cooperation of the platforms.
A study in Science not only provides compelling evidence that these algorithms cause polarization but also shows that the effect can be mitigated without platform approval and without removing posts.
Researchers created a browser extension that can rank posts in users' X feeds lower or higher when they express attitudes linked to polarization, such as partisan animosity and support for undemocratic practices. The tool uses a large language model (LLM) to analyze and reorder posts in real time.
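The study's actual extension code is not reproduced here, but the reranking idea can be sketched in a few lines. In this hypothetical Python sketch, a keyword-based stub stands in for the real LLM classifier, and posts scoring above a threshold are demoted to the bottom of the feed while everything else keeps its original order (all names and the scoring rule are illustrative assumptions, not the authors' implementation):

```python
# Hypothetical sketch of feed reranking. A classifier assigns each post a
# "polarization" score; posts above a threshold are demoted so they appear
# later in the feed. The scoring function is a keyword stub standing in for
# a real LLM call.

POLARIZING_TERMS = {"traitor", "traitors", "enemy", "destroy", "rigged"}

def polarization_score(text: str) -> float:
    """Stub for an LLM classifier: fraction of words that are polarizing terms."""
    words = text.lower().split()
    if not words:
        return 0.0
    return sum(w.strip(".,!?") in POLARIZING_TERMS for w in words) / len(words)

def rerank_feed(posts: list[str], threshold: float = 0.1) -> list[str]:
    """Demote posts whose score exceeds the threshold.

    sorted() is stable, so the original order is preserved within the
    "neutral" group and within the "polarizing" group.
    """
    return sorted(posts, key=lambda p: polarization_score(p) > threshold)

feed = [
    "The other side are traitors who want to destroy the country!",
    "Local library extends weekend hours.",
    "New transit line opens downtown next month.",
]
reranked = rerank_feed(feed)
# The polarizing post moves to the end; the neutral posts keep their order.
```

A real deployment would replace `polarization_score` with a prompt to an LLM asking it to rate each post for, say, partisan animosity, which is the flexibility the researchers highlight.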
“Only platforms have had the power to shape and understand these algorithms,” says Martin Saveski, study co-author and information scientist at the University of Washington. “This tool gives that power to independent researchers.”
The team ran a 10-day experiment in the run-up to the 2024 U.S. election. More than 1,200 volunteer participants saw feeds in which polarizing content was either ranked significantly lower, reducing the chance that users would see it before they stopped scrolling, or ranked slightly higher.
Regardless of political orientation, participants whose feeds demoted polarizing posts reported feeling warmer toward the political group opposing their views (based on short surveys) than those whose feeds were unchanged, while those whose feeds promoted such posts felt colder.
The difference was two to three degrees on a 100-degree “feeling thermometer.” That may seem small, but “it’s comparable to three years of historical change on average in the United States,” says Chenyan Jia, co-author and communications researcher at Northeastern University. The manipulations also affected the amount of sadness and anger participants reported feeling while scrolling.
According to psychologist Victoria Oldemburgo de Mello of the University of Toronto, who studies how technology shapes behavior and society, the study's authors impressively combined tight experimental control with a real-world environment. “And they do it in a clever way that circumvents [platform] approval. Nobody has done this before.” How long the effects persist is unclear: they could fade or grow over time, she adds. The researchers say this is an important direction for future work and have made their code freely available so that other scientists can build on it.
The current version of the tool only works for browser-based social media sites. Creating something that can be used with apps is “technically more difficult, given how [they] work, but it’s something we’re exploring,” Saveski says.
The researchers also plan to study other interventions for social media feeds, taking advantage of the flexibility offered by LLM analysis, Saveski adds. “Our framework is very general, and we can think about well-being, mental health, et cetera.”



























