San Francisco — Facebook said on Wednesday that company moderators removed 8.7-million user images of child nudity during the last quarter with the help of previously undisclosed software that automatically flags such photos.

The machine learning tool, rolled out over the past year, identifies images that contain both nudity and a child, allowing increased enforcement of Facebook’s ban on photos that show minors in a sexualised context. A similar system, also disclosed on Wednesday, catches users engaged in “grooming”, or befriending minors for sexual exploitation.

Facebook’s global head of safety, Antigone Davis, said in an interview that the “machine helps us prioritise” and “more efficiently queue” problematic content for the company’s trained team of reviewers. The company is exploring applying the same technology to its Instagram app.

Under pressure from regulators and legislators, Facebook has vowed to speed up the removal of extremist and illicit material. Machine learning programs tha...
