Steve Rathje’s findings reveal Twitter’s algorithm is more inclined to support right-wing content that has gone viral

by Ainsley Ingram

JAKARTA – A post on the Twitter blog revealed that Twitter’s algorithms favor right-wing content more often than left-wing content, though the reason remains unclear. The findings come from an internal study of the algorithmic amplification of political content on Twitter.

During the study, Twitter examined millions of tweets posted between April 1 and August 15, 2020. These tweets came from media outlets and elected officials in Canada, France, Germany, Japan, Spain, the United Kingdom, and the United States.

In all of the countries surveyed except Germany, Twitter found that right-wing accounts “receive more algorithmic amplification” than left-wing accounts. The company also found that right-wing media content benefited from the same bias.

“Negative posts about outside political groups tend to elicit more engagement on Facebook and Twitter”

Twitter says it doesn’t know why the data shows its algorithm favors right-wing content, noting that this is a “much harder question to answer” because it is a product of “interactions between people and the platform.”

However, the issue may not be specific to Twitter’s algorithm, according to Steve Rathje, a doctoral candidate who studies social media. He has published research examining which kinds of political content are most likely to go viral.

“In our research, we also looked at the types of content amplified on social media and found a consistent trend: negative posts about political out-groups tend to elicit more engagement on Facebook and Twitter,” Rathje told The Verge.

“In other words, if a Democrat comments negatively about a Republican (or vice versa), that type of content will generally receive more engagement,” Rathje added.

Given Rathje’s research, that could mean right-wing posts on Twitter spark more outrage and therefore receive more amplification. Perhaps Twitter’s algorithm problem has more to do with promoting “toxic” tweets than with any political bias.

As mentioned earlier, Twitter’s research found that Germany was the only country where right-wing content did not receive more algorithmic amplification. This could be linked to Germany’s agreement with Facebook, Twitter, and Google to remove hate speech within 24 hours. Some users have even changed their country setting to Germany on Twitter to prevent Nazi imagery from appearing on the platform.

Twitter has been trying to change the way we tweet for some time now. In 2020, Twitter began testing a feature that alerts users when they are about to post a rude reply, and this year it started testing prompts that appear when users are about to wade into a heated Twitter discussion.

These are signs of how aware Twitter is of the bullying and hateful messages on its platform.

Frances Haugen, the whistleblower who leaked a number of internal Facebook documents, says Facebook’s algorithm favors hate speech and divisive content. Twitter could easily be in the same position, but it chose to share some of its internal audits publicly before any potential leaks.

Rathje points to another study which found that moral outrage boosted viral posts from both liberal and conservative sources, but that it was more effective for conservative content.

Regarding features such as algorithmic promotion that drive virality on social media, he said, “further research should be conducted to examine whether these features help explain the amplification of right-wing content on Twitter.”

If the platform digs deeper into the problem and opens up access to other researchers, it may be better positioned to tackle the divisive content at the heart of the issue.
