Facebook to double teams monitoring offensive content
Facebook, the social media giant grappling with reputational damage after a major data breach, is stepping up its fight against hate speech and other offensive content.
The company plans to double the size of its safety and content review team in 2018 and also wants to train its artificial intelligence (AI) systems to pick up more bad content before users do, Alex Schultz, Facebook’s vice-president of data analytics, told Business Day.
The Nasdaq-listed firm said in May that it would grow the teams focused on safety, security and content reviews from 10,000 to 20,000 people in 2018.
The company still relies heavily on these teams to review users’ reports of hate speech, though its machine-learning systems now pick up most posts containing graphic violence, nudity, terrorist propaganda, fake accounts and spam before users see them. "In the past five years the importance of artificial intelligence has gone up a lot ... none of our systems are able to find 100% of [inappropriate] content today before it’s seen by users, but that’s the aspiration," Schultz said.
Facebook said in a report that of every 10,000 pieces of content viewed in the first quarter of 2018, about 22 to 27 posts contained graphic violence, up from 16 to 19 in the fourth quarter of 2017.
The network had to remove or place a warning in front of 3.4-million pieces of content containing graphic violence in the first quarter, nearly triple the 1.2-million items in the previous three months.
Schultz said that though Facebook could not say for sure why the number of graphic violence posts rose, it believed the war in Syria was a likely explanation.
He said that in the first quarter, "our artificial intelligence systems got better, so we found more of it" before users did.
AI picked up 86% of all graphic violence posts before people flagged them in the first quarter, up from 72% in the prior quarter. Meanwhile, AI identified 96% of posts containing nudity and sexual activity, 99.5% of terrorist propaganda, about 99% of fake accounts and 99.7% of spam.
However, AI was able to pick up only 38% of hate speech, up from 24% in the previous quarter, as these systems find it more difficult to identify offensive language than inappropriate images, Schultz said.
Images containing symbols of terrorist groups Islamic State, al-Qaeda or Boko Haram, for instance, can easily be identified with technology almost as soon as they are posted.
"But with text, there’s a lot more nuance in the language. Context is incredibly important and you can see how certain words in certain countries have different meanings than in other countries. And certain words used when someone is reclaiming them as a slur against their ethnic group are okay, but if they’re used by someone not from that group they’re not okay."
For the time being, Facebook will rely largely on human reviewers to identify hate speech. The company hires people and then trains them in its community standards.
It will probably house its growing review team in "a small number of [large] centres", said Richard Allan, Facebook’s vice-president of public policy for Europe, the Middle East and Africa.
"We don’t think we’d get the same quality if we had hundreds of much smaller groups of people.
"They will be a mixture of Facebook directly employed staff in places like Dublin, Austin, Texas, and so on, and a number of outsource centres using very reputable outsourcing partners."
Allan said Facebook would talk to outsourcing partners about the best locations for these centres.
"It’s a question of when we talk to the partners and want to hire 500 people with these kinds of language skills or capabilities, where can we best do that?" he said.
Schultz said Facebook was making progress in its war against malicious and inappropriate content.
"In 2016, we focused on this area and increased our investment, and I think we’ve made a lot of progress, but there are a lot of areas where I’d like to see us do better.
"Our top priority areas are things like child-exploitation imagery, global terrorism and nonconsensual intimate imagery like revenge porn," he said.
Facebook’s report suggests that the prevalence of hate speech — a common issue in racially divided SA — is on the rise.
The content it took action on rose 56% to 2.5-million posts in the first quarter. That was partly explained by improvements in Facebook’s detection methods and "real-world events" that gave rise to more hate speech, the report says.
However, Allan said that while many reports were warranted, some Facebook users abused the hate speech reporting tool.
"Sometimes they will also report things that they just don’t like. People are people and a lot of the stuff may be supporters of one football team who will report content involving another football team as hate speech. Clearly it’s not."
SA’s portfolio committee on justice and correctional services this week received a briefing on the Hate Crimes and Hate Speech Bill, which aims to criminalise both hate crimes and hate speech.
This comes as SA sees "increasing intolerance", committee chairman Mathole Motshekga said in a statement.