Bengaluru/San Francisco — On Monday, Alphabet’s YouTube said it planned to add more people to identify inappropriate content in 2018, as the company responded to criticism over extremist, violent and disturbing videos and comments.

YouTube has developed automated software to identify videos linked to extremism and is now aiming to do the same with clips that portray hate speech or are unsuitable for children. Uploaders whose videos are flagged by the software may be ineligible to generate advertising revenue. But amid stepped-up enforcement, the company has received complaints from video uploaders that the software is error-prone.

Adding to the thousands of existing content reviewers will give YouTube more data to supply to, and possibly improve, its machine-learning software. The goal is to bring the total number of people across Google working to address content that might violate its policies to more than 10,000 in 2018, YouTube CEO Susan Wojcicki said in one of a pair of blog posts...
