ChatGPT may well become the “calculator for writing”. PICTURE: 123rf

There has been much discussion and hype about the promise of artificial intelligence (AI), but I think we can agree that AI is a disruptive technology with the potential to drastically improve our lives and transform the African continent.

Take agriculture: farmers in Cameroon have been testing an AI-based app that can identify diseases in crops and suggest a course of treatment. From a financial perspective, banks in Nigeria have been using AI to enable the identification of previously undetected financial patterns and anomalies that can result in fraudulent transactions. AI also has great potential to help the cybersecurity industry accelerate the development of protection tools and validate some aspects of secure coding.

However, the introduction of this technology also has the potential for abuse and global harm. In November, OpenAI released a new AI model called ChatGPT (Generative Pre-trained Transformer). It interacts in a conversational way, enabling people to ask questions and receive answers. ChatGPT is extremely popular, garnering more than 1-million users in the five days after its launch. Many people used it to write poetry or create new recipes. News website Business Insider put the AI model through its paces by asking it to come up with a strategic plan that could help solve SA’s problems. However, by its own admission, ChatGPT doesn’t follow current events. This means it is unable to offer opinions or provide insights on contemporary socio-economic issues.

Other users had more nefarious ideas. Researchers quickly discovered that it’s easy to use ChatGPT to create malicious emails and code that can be used to hack organisations. Weeks after its release it was being used for that exact purpose. Essentially, it is democratising hacking, enabling even novices to create malicious files and putting us all at risk. 

Why does this matter? The world experienced a 38% increase in cyberattacks in 2022 compared with 2021. The average organisation was attacked 1,168 times a week, while Africa experienced the highest volume of attacks anywhere, with 1,875 weekly attacks per organisation. Education and health care were two of the most targeted industries, resulting in many hospitals and schools coming to a standstill. Doctors were unable to treat patients and schools sent children home. We may now see an exponential rise in cyberattacks due to ChatGPT and other generative AI models.

Cracking the code

To its credit, OpenAI has invested much effort in stopping abuse of its AI technology, writing: “while we’ve made efforts to make the model refuse inappropriate requests, it will sometimes respond to harmful instructions or exhibit biased behaviour.” Unfortunately, ChatGPT struggles to stop the production of dangerous code.

To illustrate that point, researchers compared ChatGPT and Codex, another AI-based system that translates natural language into code. A full infection flow was created under one restriction: the researchers wrote no code themselves, and the AI did all the work. When the researchers assembled and executed the pieces, they had a complete phishing attack: an email weaponised with a malicious Excel file whose macros download a reverse shell. With a reverse shell, the roles are inverted — the attacker operates as the listener while the victim’s machine initiates the connection, which helps the attack slip past defences that only block incoming traffic.

Taking it a step further, the team at Check Point Research recently uncovered instances of cybercriminals using ChatGPT to develop malicious tools. In some cases these hackers relied entirely on AI for the development, while others simply used the AI to greatly reduce the time required to create malicious code. From malware and phishing scripts to a tool that encrypts a victim’s machine automatically without any user interaction, and even the creation of an illicit marketplace, it is concerning to see how quickly cybercriminals are adopting ChatGPT and using it in the wild for their disruptive purposes.

So, can ChatGPT be used for harm? Yes. Can ChatGPT be used to arm cybercriminals to shortcut phishing attacks? Yes. When should the community come together to discuss, debate, and determine a plan for thoughtful regulation? Now. 

In 2015 the SA government established the Cybersecurity Hub, which has subsequently matured into one of several National Computer Security Incident Response Teams. This forms part of efforts to strengthen the country’s cybersecurity stance and provides a solid platform to reinforce efforts that leverage advanced technologies like AI to combat the evolving threat landscape.

Just as many have advocated for the importance of diverse data and engineers in the AI industry, so must we bring in expertise from psychology, government, cybersecurity, and business to the conversation. Together, we can surely tackle this threat to public safety, critical infrastructure and our world. 

Bhula is regional director for Africa at Check Point Software. 
