
While social media is a common method of communication, it is also largely unregulated. The negative consequence is that sensitive, illegal or objectionable content is also posted on such platforms, which have become unwitting vehicles for the dissemination of abuse and propaganda.

The UK House of Commons home affairs committee published a report in May 2017, entitled Hate Crime: Abuse, Hate and Extremism Online, which criticised social media giants YouTube, Facebook and Twitter for their failure to appropriately address hate speech. With the growing monetisation of social media through advertising revenue, there is also potential for both the platform and extremists to profit from the publication of hate speech online.

Each social media platform has acceptable-use policies or user community guidelines that prohibit objectionable content, including hate speech. These rules attempt to regulate such content by authorising its removal. Yet social media platforms rely on a "peer-review system" in which objectionable content is reported or flagged by other users: when content is reported, it is reviewed by the platform and, if necessary, removed. Removal therefore halts the perpetuation of such content rather than preventing its publication.

This approach is proving inadequate, as the platform does not (nor arguably should it) actively analyse the postings of all its users. The delay between the publication of inappropriate content, its reporting by another user, the platform's review and its ultimate deletion means that the harm has often been done before the content is removed. Consider, for example, Donald Trump's infamous Facebook video in which he proposed barring Muslims from entering the US, which remained on Facebook despite violating Facebook's user policies.

The issue here, however, is not one of censorship. Revenue on social media is derived from advertising, not from account registration. Site traffic, and driving it, matters both to advertisers and to the social media companies' revenue. Technology enables companies to target specific demographics with adverts that "follow" users based on their information, preferences and search strings, with the result that brands inadvertently appear alongside questionable content.

Facebook and YouTube have recently been criticised for failing to prevent advertising campaigns by, for example, Nissan, L’Oréal and Sainsbury’s from appearing alongside videos amounting to hate speech. Apart from the reputational risk to the brands, this has the unintended effect that the platform derives revenue from, and assists, extremists in the publication of hate speech.

While traditional broadcasting is universally subject to strict regulation, social media platforms are not, nor arguably can or should they be. User-generated content changes the rules of the game, and the early debates on regulation have moved on. Yet a balance is required. The UK proposals suggest that social media platforms should be held to a high public-interest and safety standard, and should attract liability for failing to expeditiously remove content propagating hate speech.

The EU’s code of conduct requires social media companies to review complaints within 24 hours and to remove content where necessary, although there is no penalty for failing to do so. The German justice ministry has proposed that social media companies publish quarterly reports on complaints, with fines of up to €50m for failure to comply with the code and fines of up to €5m for the employees personally tasked with handling complaints who fail to do so.

Although far from passing constitutional muster, SA’s draft Prevention and Combating of Hate Crimes and Hate Speech Bill aims to prevent hate speech and criminalises the intentional communication (including electronic communication) of hatred, threats, abuse or incitement to harm or violence, based on 17 protected grounds.

The bill’s excessively broad ambit, which includes the "making available" of such communication, is sufficient to attract liability for social media platforms (and every other conceivable communication network and provider), but the bill will require considerable refinement to become useful legislation.

Notwithstanding the challenges of defining hate speech and balancing constitutional rights and freedoms, growing calls for social media platforms to be held liable articulate their responsibility to protect users from such content. At the very least, these reforms suggest that the platforms certainly should not be profiting from a failure to do so.

Cohen and Van Breda are from Cliffe Dekker Hofmeyr
