Facebook’s artificial intelligence still too dim to weed out abuse
Huge challenges remain years after the company started trying to police its platform with computers
Facebook faces a huge challenge: how can its 35,000 moderators review the billions of posts and comments published every day to sift out abusive and dangerous content?
Just 18 months ago, Mark Zuckerberg, Facebook’s founder, was confident that rapid advances in artificial intelligence (AI) would solve the problem. Computers would spot and stop bullying, hate speech and other violations of Facebook’s policies before they could spread.