‘The content is very graphic, the messaging is extreme.’ Picture: 123RF/PROMESAARTSTUDIO

Bangkok/London/Beirut — Hours after Hamas militants attacked Israel on October 7, Bharat Nayak, a fact-checker in the east Indian state of Jharkhand, noticed a surge of disinformation and hate speech directed at Muslims on his dashboard of WhatsApp messages.

The viral messages from hundreds of public WhatsApp groups in India contained graphic images and videos, including many from Syria and Afghanistan falsely labelled as being from Israel, with captions in Hindi that called Muslims evil.

“They are using the crisis to spread misinformation against Muslims, saying they will attack Hindus in a similar way, and to falsely accuse opposition parties and others of supporting Hamas, and calling for their elimination,” Nayak said.

“The content is very graphic, the messaging is extreme, and it gets forwarded many times as there is no content moderation on WhatsApp,” he said.


The conflict, which has killed more than 1,400 people in Israel and over 8,000 in the Gaza Strip, has triggered a surge in disinformation and hate speech against Muslims and Jews across social media platforms from India to China to the US.

Meta and X said they have removed tens of thousands of posts, but the volume of disinformation and hate speech underlines the failure of social media platforms to boost content moderation, particularly in languages other than English, say digital rights experts.

“We’ve tirelessly drawn their attention to these issues over the years, but social media platforms continue to fall short when it comes to combating hate speech, incitement and disinformation,” said Mona Shtaya, a non-resident fellow at The Tahrir Institute for Middle East Policy, a nonprofit.

“The recent layoffs in trust and safety teams across platforms underscore this deficiency,” she said. “Additionally, their resource allocation — based on market size, rather than assessed risks — exacerbates the challenges faced by marginalised communities, including Palestinians and others.”

In a blog post, Meta — which owns Facebook, Instagram and WhatsApp — said it “quickly established a special operations centre staffed with experts, including fluent Hebrew and Arabic speakers”, and it is working with third-party fact-checkers in the region “to debunk false claims”.

X did not respond to a request for comment.

Failures of content moderation are not limited to the decades-long Israel-Palestine conflict.

UN human rights investigators said in 2018 that the use of Facebook played a key role in spreading hate speech that fuelled violence against the ethnic Rohingya community in Myanmar in 2017.

Rohingya refugees in 2021 sued Meta for $150bn over allegations that the company’s failure to police content and its platform’s design contributed to real-world violence. Meta has acknowledged being “too slow” to act in Myanmar.

In 2022, a lawsuit against Meta filed in Kenya accused the platform of allowing violent and hateful posts from Ethiopia on Facebook, and its recommendation systems of amplifying violent posts that inflamed the Ethiopian civil war.

The company has faced similar accusations related to violence in Sri Lanka, India, Indonesia and Cambodia.

The surge in disinformation during the current Israel-Hamas conflict underscores that “platforms do not have the right systems in place”, said Sabhanaz Rashid Diya, former head of policy at Meta for Bangladesh.

Underinvestment

“The historical underinvestment in specific parts of the world and specific languages is now being tested in this crisis,” said Diya, founding board director of Tech Global Institute, a think-tank.

“Some of the challenges we’re seeing around the information ecosystem are consequences of not building capacity; these are consequences of automated systems, staffing issues; not having sufficient fact-checkers in these markets; not having policies that are contextualised for local regions,” she said.

The Arab Centre for Social Media Advancement, or 7amleh, has documented more than 500,000 instances in Hebrew of hate speech and incitement to violence against Palestinians and their supporters.

There has also been a more than 50-fold increase in the absolute volume of anti-Semitic comments on YouTube videos, the Institute for Strategic Dialogue in London said in a report this week.

State-affiliated accounts from Iran, Russia and China are also spreading disinformation and hate speech on Facebook and X, it said, adding that this could contribute to “polarisation and deepening mistrust towards democratic institutions and the media”.

Reports of anti-Semitic and Islamophobic incidents have surged worldwide, including assaults, vandalism and the fatal stabbing of a six-year-old Palestinian boy in the US.

Such incidents are a result of hate speech online, said Marc Owen Jones, who researches disinformation in the Middle East.

“Much of the disinformation is violent, graphic and highly emotive — designed to provoke polarisation and turn people against each other,” said Jones, an associate professor at Hamad bin Khalifa University in Qatar.

It is “driving a sense of righteousness and tribalism that contributes to violence, as we’ve seen as far away as Dagestan and Illinois. The upshot is dire,” said Jones.

Despite heated conversations around the need for better content moderation, trust and safety is “resource intensive, meaning that tackling the issue is a challenge for any platform”, said Yu-Lan Scholliers, head of product at Checkstep, a UK-based content moderation services firm.

With easy access to artificial intelligence (AI), “it’s now much easier to generate real-looking but fake content, requiring more advanced detection mechanisms”, said Scholliers, who previously worked in Meta’s product data science team.

But even if platforms invest heavily in their trust and safety teams, the main challenge “is and will be adversarial behaviour — users always find more and more creative ways to avoid detection”, she said. “It is a whack-a-mole that can never be fully solved.”

Thomson Reuters Foundation

