Picture: DADO RUVIC/REUTERS

A Pietermaritzburg law firm has found itself in trouble with the law, and potentially facing a Legal Practice Council investigation, after it emerged that it had used AI to compile papers lodged with a court.

Surendra Singh & Associates had been representing Philani Mavundla, the suspended mayor of the Umvoti local municipality, in an application for leave to appeal an earlier judgment that set aside an interdict he had previously been granted against the municipality over his suspension during a council meeting.

In the judgment, handed down in January this year, the presiding judge made startling findings about the law firm’s submissions. She found that the written submissions and arguments made by Mavundla’s counsel cited nine cases, only two of which could be found in the official law reports, and only one of those was cited correctly.

When the judge asked counsel for copies of the cited cases, counsel said the papers had been drawn up by the firm’s clerk. Called before the court, the clerk explained that she had obtained the cases from law journals through her Unisa portal, but when pressed on which journals they came from, she was unable to answer.

Following this, the judge ordered the law firm to search the court library and sources such as the SA Law Reports and SAFLII, but the firm was unable to produce the original cases, leaving the court to conclude that the clerk had used an AI application such as ChatGPT to generate the citations.

The leave to appeal application was subsequently dismissed, with the law firm ordered to carry the costs, while the judge directed that her judgment be sent to the Legal Practice Council for further action.

This incident is just one of many in which AI has been grossly and unethically misused. While applications such as ChatGPT, Grok and Microsoft Copilot have given us an added layer of convenience, the world is at a crossroads in the technology’s further development and use.

How much more can we leverage it before we reach a point where it becomes unethical, and perhaps even criminal? The rise of deepfakes is just one of the dangers this technology presents if left unchecked.

Technology remains an empowering tool, not just for personal use but for the betterment of humankind. It was always intended for good, but has been contaminated by the human desire for instant gratification.

An international agreement signed at the global AI summit in Paris in early February was a good first step, with the statement pledging an “open”, “inclusive” and “ethical” approach to the technology’s development. While there has been much debate and caution about overregulating AI, there is a need for boundaries, especially where it is misused in court papers. This kind of unethical use only erodes the technology’s credibility.

The question of ethics has been at the centre of the debate about the mainstream, public use of AI. While some people have been wary, others have embraced it to the extreme, often to their own detriment. Neither attitude is practical. Many public-access platforms, such as ChatGPT, remain experimental and cannot be relied on as a sole source of information. All good things must be consumed in moderation. AI exists to assist us, not to replace us.

This underscores the need for more focus on AI being used for social good, especially in the upliftment and development of communities in need. A case in point is the proliferation of AI-driven chatbots NGOs use to collect, store and analyse data from the work they do in the field.

Think about how data on child nutrition, education and wellbeing in underserved communities has been limited by logistical hurdles, resource constraints and poor accessibility.

Today, NGO field workers can use AI chatbots on WhatsApp to capture the data they collect, which is then stored in the cloud. AI analytics and predictive modelling can crunch this raw data and surface notable trends and other key information.
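By way of illustration, below is a minimal Python sketch of what such a pipeline might look like. The message format, field names and sample figures are hypothetical, not any NGO’s actual system; a real deployment would receive messages through the WhatsApp Business API and write records to a cloud database rather than holding them in memory.

# Hypothetical sketch: turning structured chatbot messages from field
# workers into records and a simple trend summary. The message format,
# field names and sample figures are illustrative assumptions only.
from dataclasses import dataclass
from datetime import date
from statistics import mean

@dataclass
class NutritionRecord:
    reported: date
    village: str
    children_screened: int
    underweight: int

def parse_report(text: str) -> NutritionRecord:
    # Expects a delimited message such as:
    # "2025-03-01; Ematsheni; screened=40; underweight=6"
    day, village, screened, under = [p.strip() for p in text.split(";")]
    return NutritionRecord(
        reported=date.fromisoformat(day),
        village=village,
        children_screened=int(screened.split("=")[1]),
        underweight=int(under.split("=")[1]),
    )

def underweight_trend(records):
    # Underweight rate per report, in date order.
    ordered = sorted(records, key=lambda r: r.reported)
    return [(r.reported, r.underweight / r.children_screened) for r in ordered]

if __name__ == "__main__":
    messages = [
        "2025-02-01; Ematsheni; screened=38; underweight=9",
        "2025-03-01; Ematsheni; screened=40; underweight=6",
        "2025-04-01; Ematsheni; screened=41; underweight=4",
    ]
    records = [parse_report(m) for m in messages]
    for day, rate in underweight_trend(records):
        print(f"{day}: {rate:.0%} of screened children underweight")
    print(f"Average: {mean(r.underweight / r.children_screened for r in records):.0%}")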

From there, these organised data sets give NGOs valuable insight into conditions on the ground, while policymakers can make evidence-based decisions on the reallocation of resources.

AI technology changes the game, and for the good of humankind. A WhatsApp chatbot can help cash- and resource-strapped NGOs close society’s data gaps.

Without a sustainable data governance framework, the world will continue to lose out on key opportunities to identify trends and make predictions. Because our world is developing at an ever-faster pace, there is a need for more predictability. While there are many positives to the data age, without a structured framework for data collection and analysis, humankind will get lost in technology that becomes unreliable, just as Surendra Singh & Associates ended up relying on faulty information.

The future development of AI requires human guidance, supported by the right data. We are at the crossroads. Let us not waste the opportunities that lie before us.

• Steenkamp is cofounder at Tregter.
