
The recent release of the beta version of ChatGPT by OpenAI was met with such enthusiasm that the system is regularly unable to cope with user demand for its service. A “pro” version has since been introduced — at a fee.

The system has not only introduced a useful tool that people enjoy, but also opened or reintroduced a whole array of ethical issues and questions related to the use of artificial intelligence (AI).

By way of background, ChatGPT is a language processing system, also called a large language model, that can generate logically coherent and (in most cases) meaningful, well-written responses to questions posed online.

The “GPT” in the name stands for “generative pretrained transformer”, which indicates that the system was trained by being fed enormous amounts of human-generated text, which it draws on when composing written responses to the questions posed to it. It is thus machine learning that enables the system to respond intelligently, in full sentences, to users’ questions.

Over the last few weeks several testimonials to the usefulness of ChatGPT have been published in mainstream and social media. Probably the clearest indicator of the system’s usefulness is that it pushed Google to issue a red alert about the business continuity risk ChatGPT poses to the company. The fact that ChatGPT responds to questions with a well-tailored, unique answer, rather than merely with links to websites where one must search for answers, makes the business risk to Google obvious.

Ethics and AI

The relationship between ethics, ChatGPT and AI in general is an interesting and intriguing one. ChatGPT can be a most useful companion for anyone interested in ethics. It can provide one with informative answers about, for example, what ethics is, the nature of moral dilemmas, the different sources and traditions of ethical principles, ethical decision-making models, the ethical pitfalls of AI, and a lot more.

Although the system claims it cannot make ethical decisions or resolve moral dilemmas, it nevertheless responded impressively when I asked it to advise me on what the best course of action would be in specific ethical dilemma scenarios. It also provided guidance on particular issues to consider when resolving a specific ethical dilemma. ChatGPT can therefore be a handy tool for gaining more insight into ethics.

Ethics of AI

Technology is without exception ethically ambiguous. It can be put to good use, but it can also be used to harm innocent victims. Online banking, for example, can make one’s life much easier, which is a good thing. It can, unfortunately, also be used to defraud online banking users, which is unethical and illegal. AI and specifically ChatGPT are no exceptions in this regard. They are also ethically ambiguous. When I asked ChatGPT about its ethical ambiguity, it was quick to admit that it was guilty as charged.

All AI, including ChatGPT, suffers from the potential danger of data bias. The quality and relevance of an AI-generated response depend crucially on the data the system has access to. Often the data AI systems use to compute answers is biased in one way or another: it can be biased along gender, racial or geographic lines. Bias can also enter through the selection of the data on which the system is trained, the algorithms that are used, or the decision-making rules programmed into the system.

The data bias that is often built into AI systems can lead to unfair discrimination against people of a specific race, gender, sexual orientation, economic class or geographic region. When this happens, the offending parties are usually quick to blame the system, but the system can only use the data it is exposed to, and the developers of AI systems surely have a say in which data their systems can access and how that data is processed.

A particular ethical problem that ChatGPT brings to the fore is plagiarism. Students, academics, journalists and people in a host of other professions whose work requires producing written text can easily be tempted to ask this new chatbot to generate the required text and then present it as their own. Plagiarism is a form of intellectual fraud: it consists of passing off as your own work something that in fact originated from another person, source or AI system.

The leading academic journal Science recently banned ChatGPT from being listed as a co-author of academic articles. Some schools in the US have also blocked access to ChatGPT on their networks. Universities have struggled with the scourge of plagiarism for decades and have found useful software with which the work of students and academics can be tested for originality. But ChatGPT has made plagiarism harder to catch: it does not merely copy and paste from existing knowledge and information resources but generates unique written responses that are difficult to detect with existing anti-plagiarism software.

There are a host of other ethical concerns about the use of AI in general and ChatGPT in particular. If you don’t believe me, just ask ChatGPT! Given that ChatGPT is ethically ambiguous, as argued above, what can be done about the dark side of this new chatbot? I will reflect only on the two problems discussed above: bias and plagiarism.

What can be done?

The bias built into AI systems through the selective use of the data they draw on to generate solutions can be addressed through human intervention. It is imperative that AI systems be continuously monitored and audited for built-in bias. Users of such systems should also be given a feedback mechanism through which to report any bias or unfair discrimination they detect. Such corrective measures should be applied not only retrospectively but also prospectively.

Ethics experts should be part of the design of AI systems to ensure a focused approach to preventing bias. Autonomous, self-learning AI systems might change this dynamic, which raises a serious question about the ethical desirability of such systems.

Tackling the potential proliferation of plagiarism driven by AI solutions such as ChatGPT is another ethical imperative. Detecting plagiarism in the age of AI can probably only be achieved with the aid of AI, and several software solutions for detecting AI-generated text are already available. However, until such programs have proved themselves sufficiently reliable and accurate, we have little choice but to rely on the personal and professional integrity of those who must generate original text as part of their role responsibility, such as students, academics and journalists.

It is thus hugely important that academic and professional integrity be cultivated in the age of AI. This can be done through extrinsic motivation, making people aware of the dire consequences of committing plagiarism with AI. Such an approach is, however, tiresome and difficult, as perpetrators first have to be caught before those consequences can be meted out.

Consequently, old-fashioned personal and professional integrity, where people identify with the ethical standards of their institutions or professions, remains a crucial defence against the new AI-inspired scourge of plagiarism.

• Prof Rossouw is CEO of The Ethics Institute and an extraordinary professor in philosophy at Stellenbosch University.
