San Francisco — Facebook said on Wednesday that company moderators removed 8.7-million user images of child nudity during the last quarter, with the help of previously undisclosed software that automatically flags such photos.

The machine learning tool, rolled out over the past year, identifies images that contain both nudity and a child, allowing increased enforcement of Facebook’s ban on photos that show minors in a sexualised context. A similar system, also disclosed on Wednesday, catches users engaged in “grooming”, or befriending minors for sexual exploitation.

Facebook’s global head of safety, Antigone Davis, said in an interview that the “machine helps us prioritise” and “more efficiently queue” problematic content for the company’s trained team of reviewers. The company is exploring applying the same technology to its Instagram app.

Under pressure from regulators and legislators, Facebook has vowed to speed up removal of extremist and illicit material. Machine learning programs tha...