Picture: 123RF/LIGHTWISE

“I’m sorry, Dave. I’m afraid I can’t do that.” Diehard science fiction fans will get the reference from the cult classic 2001: A Space Odyssey. The line, delivered by the spaceship’s intelligent on-board computer HAL 9000, marks the moment the downside of sharing a spacecraft with a machine significantly smarter than you is revealed, with dire consequences for the ship’s crew. 

But Stanley Kubrick’s sci-fi epic is certainly not the only pop culture example that graphically depicts just how pear-shaped things can go when mankind cedes control to machines: search for movies about artificial intelligence (AI) and a long list of dystopian tales springs up.

Yet it seems our real-life fears relate more to economic than to physical survival. Whether AI will threaten jobs is a concern that began to rise around the time of the Third Industrial Revolution, when mechanisation evolved into automation and digitalisation. 

Can AI think like a human? 

To establish whether this fear is well founded, we need to ask whether AI can truly emulate the thinking processes of humans, by taking a look at the landscape. 

AI is not entirely autonomous and most of the commercial economic value it now generates is enabled through supervised learning models — meaning it relies on algorithms to operate but needs human intervention to validate learning. The sentient version, like the eerie HAL, remains very much in the realm of science fiction. 

Even deep learning models, still in their infancy, are built on neural networks. So while their decisions might look like magic, they are only the product of a complex algorithmic process. Unsupervised learning models can find patterns on their own, yet they’re also far from autonomous; they simply assimilate data in ways that are difficult to untangle. 

Within the realm of SA financial services we often deploy transfer learning models in the form of bots. We don’t build these models from scratch, because that takes enormous capital investment; instead, we replicate and transfer learning from overseas models, which we then make relevant to our market by programming in local nuances.
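To make the transfer learning idea concrete, here is a minimal sketch in Python. It is purely illustrative, not any real bot’s implementation: the “pretrained” feature extractor stands in for a model trained elsewhere and is kept frozen, while only a small task-specific head is trained on (synthetic) local data.

```python
import math
import random

random.seed(0)

# "Pretrained" feature extractor: in a real system these weights would be
# loaded from a model trained elsewhere; fixed values stand in here.
def extract_features(x1, x2):
    # Frozen layer: transferred as-is, never retrained locally.
    return [math.tanh(0.9 * x1 - 0.4 * x2), math.tanh(0.3 * x1 + 1.1 * x2)]

# Synthetic local dataset standing in for "local nuances":
# class 0 centred at (0, 0), class 1 centred at (2, 2).
data = [((random.gauss(0, 1), random.gauss(0, 1)), 0) for _ in range(100)] + \
       [((random.gauss(2, 1), random.gauss(2, 1)), 1) for _ in range(100)]

# Only this small task-specific "head" is trained on the local data.
w = [0.0, 0.0]
b = 0.0
lr = 0.5
for _ in range(500):
    for (x1, x2), y in data:
        f = extract_features(x1, x2)
        p = 1.0 / (1.0 + math.exp(-(w[0] * f[0] + w[1] * f[1] + b)))
        g = p - y  # gradient of the log loss w.r.t. the pre-sigmoid output
        w[0] -= lr * g * f[0]
        w[1] -= lr * g * f[1]
        b -= lr * g

# Evaluate the fine-tuned head on the local data.
correct = 0
for (x1, x2), y in data:
    f = extract_features(x1, x2)
    p = 1.0 / (1.0 + math.exp(-(w[0] * f[0] + w[1] * f[1] + b)))
    correct += int((p > 0.5) == bool(y))
accuracy = correct / len(data)
```

The point of the split is economic as much as technical: the expensive part (the frozen extractor) is reused, and only the cheap head is adapted to the local market.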

So while AI cannot now make judgments without human intervention, will it be able to in future? Consider that computers make use of sensors while human beings use senses, which are far more complex and intuitive input systems that enable us to feel as well as to think. Tech can convert things to data, but not feelings.

In testing this, a research excerpt from Marcus, Rossi & Veloso (2016) references the Turing Test, which gauges whether a machine can fool people into believing it is human. While advancing AI can approximate passion and intelligence, empathy and judgment are derived from sensory input that machines are incapable of experiencing.

Human attributes such as passion, motivation and intelligence can be emulated through AI. However, in humans, if passion is not balanced with empathy, and if intelligence is not balanced with judgment, antisocial behaviour is probable. It is empathy and judgment that make us human, and they are hard to replicate through algorithms.

So, is society approaching its “Sorry, Dave” moment? Nope. Computers cannot think like humans, and are unlikely to be able to any time soon. 

From man to machine

Here’s a curveball: there’s a far greater risk to humanity that people will become like machines. 

Consider the technological evolution of human physicality — mind-controlled prosthetic limbs, 3D organ replacements and DNA splicing juxtaposed with the influence of tech on human cognitive development, social media governance of relationships and work-from-home dulling sensory interactions — and a very real threat exists. 

The prevalence of cyberbullying shows that many people are losing their conscience behind their device screens every day. When our body and emotions are no longer central to our identity, we start to cross the threshold from man to machine.

We should be excited about the potential AI offers for business and society. However, clear ethics-driven frameworks must be put in place as innovation will always precede regulation. 

In business we need to keep pace with AI or risk becoming redundant. Company leaders generally realise this but tend to relegate it to the domain of data scientists. Yet when we realise the threat tech can have on humanity, how can we not start having these discussions in the boardroom? 

It’s the responsibility of business leaders to drive the creation of AI governance frameworks that keep ethical boundaries clear and protect human rights. These frameworks should cover all aspects, from fairness and the minimisation of algorithmic bias to ethical standards that prevent harm. We need a holistic and strategic understanding of AI models and their evolution so we can anticipate possible threats to our stakeholders. 

As we inch towards the Fifth Industrial Revolution, there’s a growing consciousness about impact. We understand that AI can make us better, but if we don’t exercise human judgment, we run the risk of relinquishing our humanity.

Human society is contingent on trust. For technology to add value to us, we need to be able to trust it, while realising it’s our responsibility to make it trustworthy. 

• Hieckmann is head of Metropolitan GetUp.
