Nick Hedley

Facebook, the social media giant grappling with reputational damage after a major data breach, is stepping up its fight against hate speech and other offensive content. The company plans to double the size of its safety and content review team in 2018 and also wants to train its artificial intelligence (AI) systems to pick up more bad content before users do, Alex Schultz, Facebook’s vice-president of data analytics, told Business Day.

The Nasdaq-listed firm said in May that it would grow the teams focused on safety, security and content reviews from 10,000 to 20,000 people in 2018. The company still relies heavily on these teams to review users’ reports of hate speech, though its machine-learning systems now pick up most posts containing graphic violence, nudity, terrorist propaganda, fake accounts and spam before users see them.

"In the past five years the importance of artificial intelligence has gone up a lot ... none of our systems are able to find 100% of [inapprop...
