Picture: 123RF

Generative AI has exploded into the public consciousness, and into widespread use, with the emergence of large language models (LLMs) such as ChatGPT. The objective is to mimic human-generated content so closely that the artificially generated content is indistinguishable from text written by a person.

This is achieved by assimilating and analysing the original content on which the tool has been trained, supplemented by further learning from user prompts and its own generated output. In essence, the model learns patterns and relationships between words and phrases in natural language, repeatedly predicting the likeliest next word in a string based on what it has already seen, and continuing these predictions until its answer is complete.

A curious feature of LLMs is that they sometimes produce false and even damaging output. Instances of lawyers including fictitious AI-generated case law in their submissions to court are already well known, but LLMs can and do go further. They can generate false and defamatory content with the potential to cause a person actual reputational damage, even fabricating nonexistent “quotes” purportedly drawn from newspaper articles.

This tendency to make things up is referred to as hallucination, and some experts regard it as a problem inherent in the mismatch between the way generative AI functions and the uses to which it is put. For the time being, at least, it is a persistent feature of generative AI. This inevitably raises the question of where legal liability rests when LLMs generate false and harmful content.

In the US much of the debate has centred on whether the creator of the LLM — such as OpenAI in the case of ChatGPT — can be held liable, given the statutory protection that section 230 of the US Code affords to the hosting of online content created by other content providers. The generally held view, however, appears to be that generative AI tools fall outside this protection, because an LLM generates new content rather than merely hosting third-party content.

In the EU the European Commission’s proposed AI liability directive, still in draft form, will work in conjunction with the EU AI Act and make it easier for anyone injured by AI-related products or services to bring civil liability claims against AI developers and users. The EU AI Act, also still in draft form, proposes to regulate the use and development of AI through a “risk-based” approach that imposes significant restrictions on the development and use of “high-risk” AI.

Though the current draft of the Act does not criminalise contraventions of its provisions, it empowers authorised bodies to impose administrative fines of up to €20m or 4% of an offending company’s total worldwide annual turnover where a particular AI system fails to comply with any requirement or obligation under the Act.

In the UK a government white paper on AI regulation recognises the need to consider which actors should be liable, but goes on to say that it is “too soon to make decisions about liability as it is a complex, rapidly evolving issue”.

The position in SA is governed by the common law pertaining to personality injury. The creator of the LLM would presumably be viewed as a media defendant, meaning that negligence, rather than the usual animus iniuriandi (intention to injure), would suffice to establish a defamation claim, a lower threshold than applies to a private individual. What would constitute negligence on the part of the creator of an LLM known to hallucinate is an open question, which may depend on whether the creator could have put reasonable measures in place to eliminate or mitigate the known risks.

What is clear is that disclaimers stressing the risk that the output of the LLM will contain errors — which AI products typically carry — would not immunise AI owners from liability: at most they could operate as between the AI company and the user, but they would not bind the defamed person.

However, on a practical level the potential liability of the AI creator would be of less importance to an SA plaintiff, because the creator would have to be sued in the jurisdiction where it is located (except in the unlikely event that it had assets in SA capable of attachment to found jurisdiction), rendering such claims prohibitively expensive.

The potential liability of the user of the LLM who then republishes the defamatory AI-generated output is another matter. First, it is no defence to a defamation action to say you were merely repeating someone else’s statement. Second, the level of fault required would depend on the identity of the defendant.

If the defendant were a media company — for example, an entity that uses AI to aggregate and summarise news content — only negligence would be required, and that might consist of relying on an LLM known to hallucinate without putting the necessary steps in place to catch false and harmful output.

If on the other hand the defendant were a private individual using the AI to generate text, the usual standard of intent would apply, which would obviously make a claim far harder to establish. Intent, however, includes recklessness. It remains to be seen whether our courts would consider it reckless to repeat a defamatory AI-generated statement in light of the caveats AI creators have published against the use of their AI tools.

For example, OpenAI has provided users with a number of warnings that ChatGPT “can occasionally produce incorrect answers” and “may also occasionally produce harmful instructions or biased content”. It remains to be seen what approach the SA courts will adopt regarding false and defamatory AI-generated content.

We anticipate that in dealing with these questions they will have to engage with issues of public policy, such as balancing reputational rights against the need to avoid imposing undue burdens on innovation and the use of new technologies.

As LLMs are increasingly integrated into larger platforms such as search engines, their content will be published more widely and the risk of reputational harm to the individuals referred to will increase. This area of delictual and product-related liability can be expected to develop rapidly in the years ahead.

• Bhagattjee is head of technology & innovation, and Burger director, at Werksmans Attorneys.
