Nick Hedley

Facebook, the social media giant grappling with reputational damage after a major data breach, is stepping up its fight against hate speech and other offensive content.

The company plans to double the size of its safety and content review team in 2018, and also wants to train its artificial intelligence (AI) systems to pick up more bad content before users do, Alex Schultz, Facebook’s vice-president of data analytics, told Business Day.

The Nasdaq-listed firm said in May that it would grow the teams focused on safety, security and content reviews from 10,000 to 20,000 people in 2018. The company still relies heavily on these teams to review users’ reports of hate speech, though its machine-learning systems now pick up most posts containing graphic violence, nudity, terrorist propaganda, fake accounts and spam before users see them.

"In the past five years the importance of artificial intelligence has gone up a lot ... none of our systems are able to find 100% of [inapprop...