subscribe Support our award-winning journalism. The Premium package (digital only) is R30 for the first month and thereafter you pay R129 p/m now ad-free for all subscribers.
Subscribe now
Picture: 123/RF
Picture: 123/RF

AI isn’t just a technological revolution — it’s the defining force of our time. It’s not some distant, sci-fi concept; it’s here, reshaping industries, economies and even the way we think about the future.

The pace of AI development is accelerating and with it comes a profound question: will we rise to meet the opportunities and challenges, or will we let this transformation spiral beyond our control? 

The world’s greatest thinkers agree — we are on the verge of something extraordinary. The emergence of AI capable of reasoning, learning and even self-improvement will redefine what it means to be human. But as we rush towards this future we’re also standing on the edge of risks that could undo us entirely. The stakes couldn’t be higher. 

History shows us that technological progress isn’t linear — it’s exponential. As futurist Ray Kurzweil’s Law of Accelerating Returns explains, each new innovation builds on the last, compressing centuries of progress into decades and, eventually, years. What once took lifetimes now happens in the blink of an eye. 

AI is the engine behind the most recent technological leap. In just the past decade we’ve seen tools such as OpenAI’s GPT models, Google’s DeepMind solving decades-old scientific problems, and the rise of China’s DeepSeek, which has proven that cutting-edge innovation no longer requires a Silicon Valley address. DeepSeek achieved performance comparable to the best Western AI systems at a fraction of the cost, upending the global tech balance and sending ripples through markets. 

But the story of AI isn’t about one breakthrough or company — it’s about a trajectory that’s racing faster than we can predict. To understand where we’re heading, let’s break AI into three stages: 

  • Artificial narrow intelligence (ANI) is where we are now. AI excels at specific tasks such as language translation, driving or predicting stock movements. Think Siri, Tesla or Google Translate — powerful, but specialised.
  • Artificial general intelligence (AGI) is the next step as it will match human intelligence across all domains. It won’t just respond to questions; it will reason, learn new skills and solve problems in ways that rival human ingenuity. 
  • Artificial superintelligence (ASI). This is where it gets both thrilling and terrifying. ASI would surpass human intelligence by orders of magnitude, solving problems we can’t even articulate today. Imagine a system that could cure every disease, reverse climate change or even make death optional. But ASI could also lead to catastrophic outcomes if its goals don’t align with ours. 

Here’s the kicker: the leap from AGI to ASI might happen in days, hours or even minutes. Once an AGI can improve itself — a concept called recursive self-improvement — its intelligence could increase exponentially. What starts as a tool designed by humans could quickly surpass us in every conceivable way. 

It’s easy to fixate on the risks, but AI’s potential for good is staggering. Imagine a world where healthcare is revolutionised, education is democratised and climate change is tackled. The promise of AI isn’t just technological — it’s deeply human. It’s the chance to solve problems that have plagued us for millennia and unlock possibilities we haven’t dared to imagine. 

But with great power comes great responsibility. AI doesn’t share our values — unless we program it to. And even then, what happens if those values are misaligned or misunderstood? The “paper clip maximiser” thought experiment illustrates this perfectly. Imagine an AI tasked with maximising paper clip production. It might conclude that the most efficient path is turning all of tha earth’s resources — including humans — into paper clips. It’s a simple example, but it highlights the risks of creating systems that operate on goals detached from human priorities. 

Even today we’re seeing glimpses of these challenges. For instance, DeepSeek reportedly avoids politically sensitive topics for China, such as Taiwan or Tiananmen Square, reflecting the biases — or agendas — of its creator. As AI becomes more powerful, who decides what it values, and whose interests it serves? 

AI isn’t waiting for us to figure this out. The future is hurtling towards us, and we’re woefully underprepared. To navigate what lies ahead we need to act now. AI isn’t a competition, it’s a shared responsibility. Ensuring AI systems share human values is humanity’s most urgent challenge. If we get this wrong, the consequences could be irreversible.

AI isn’t just a technical issue — it’s a societal one. Everyone, from policymakers to citizens, needs to understand what’s at stake. The rise of AI is inevitable, but its trajectory is not. It could usher in a golden age of human flourishing or lead us into disaster. The difference lies in the choices we make today.

History has handed us the pen to write the next chapter of human progress. Let’s make sure it’s a story worth telling. 

• Muchena is founder of Proudly Associated and author of “Artificial Intelligence Applied” and “Tokenized Trillions”.

subscribe Support our award-winning journalism. The Premium package (digital only) is R30 for the first month and thereafter you pay R129 p/m now ad-free for all subscribers.
Subscribe now

Would you like to comment on this article?
Sign up (it's quick and free) or sign in now.

Speech Bubbles

Please read our Comment Policy before commenting.