In many cases, social media algorithms have amplified misleading content, contributing to a dangerous cycle of outrage, engagement and redistribution. Picture: 123RF

US senator Michael Bennet this week sought information on how tech giants Meta, X, TikTok and Google are trying to stop the spread of false and misleading content about the Israel-Hamas conflict on their platforms.

“Deceptive content has ricocheted across social media sites since the conflict began, sometimes receiving millions of views,” Bennet, a Democrat, said in the letter addressed to the company CEOs.

Visuals from older conflicts, video game footage and altered documents are among the misleading content that has flooded social media platforms since Hamas militants attacked Israeli civilians on October 7.

“In many cases, your platforms’ algorithms have amplified this content, contributing to a dangerous cycle of outrage, engagement and redistribution,” Bennet said.

The senator’s comments come after EU industry chief Thierry Breton blasted the companies, demanding they take stricter steps to battle disinformation amid the escalating conflict.

In his letter, Bennet posed a series of questions to the companies seeking details on their content moderation practices and sought answers by October 31.

The social media firms have outlined some steps they have taken in recent days in response to the conflict. Short video app TikTok said it has hired more Arabic and Hebrew-speaking content moderators.

Meta, which owns Facebook and Instagram, said it removed or marked as disturbing more than 795,000 pieces of content in Hebrew or Arabic in the first three days since the Hamas attack. X and Google-owned YouTube both said they have taken down harmful content.

But Bennet said those actions are not enough. “The mountain of false content clearly demonstrates that your current policies and protocols are inadequate,” he wrote in the letter.

Bennet also slammed the four companies for having laid off in the past year staff from their trust and safety teams who were in charge of monitoring for false and misleading content.

Twitter cut 15% of its trust and safety staff and dissolved a related advisory council in November 2022 after Elon Musk acquired the company, then cut more staff in September, Bennet noted. Meta eliminated 100 similar positions in January, while Google reduced by a third a team that builds tools to counter online hate speech and disinformation, Bennet said.

“These decisions contribute to a cascade of violence, paranoia and distrust around the world,” he said. “Your platforms are helping produce an information ecosystem in which basic facts are increasingly in dispute, while untrustworthy sources are repeatedly designated as authoritative.”

Reuters
