AI-powered disinformation has the power to influence 2024’s election outcomes, the writer says. Picture: 123RF

Picture this: it’s the day before we head to the polls. You watch a video that shows a prominent politician paying a bribe to an Eskom official to manipulate load-shedding, recklessly increasing the risk of national grid failure.

You have had it with a deteriorating state, and the video convinces you to vote differently from how you otherwise would have. Two weeks after the election results are in, the video, despite having been “forwarded many times”, is proven to be a deepfake.

At the risk of being accused of describing a reality that parodies a Black Mirror episode, both the Slovakian and the Bangladeshi elections have shown this is not a far-fetched outcome. The key question will be whether our existing guardrails will suffice in combating this phenomenon.

Societal norms evolve over time. This evolution is largely driven by a constant renegotiation of the boundaries of what society will accept. And, as any toddler’s parent will tell you, establishing boundaries is by no means painless. I believe we are in the throes of one such boundary-defining moment, in what looks to be a face-off between free markets, technopolies and democracy.

The Financial Times recently published a fascinating piece, “The rising threat to democracy of AI-powered disinformation”. It highlighted the power of disinformation to nefariously influence the outcomes of 2024’s elections, a year in which more than half the world’s adult population will vote. While disinformation has always been a reality, the FT argues that technological advances will materially amplify the threat. Specifically, what generative AI’s exponential popularity has taught us is that AI is incredibly good at lying convincingly, and at doing so at scale.

The key to defeating disinformation is detecting the fake news and then correcting it within the window where the correction still matters. A culture of posts “going viral” makes it that much harder to pull something back even once it has been disproved; think how little a retraction does to undo an op-ed from a respected source. The problem becomes even more acute when you believe you have seen or heard something straight from the horse’s mouth.

To understand the extent of free-market boundary renegotiation this threat will require, it is worth reminding ourselves of two news stories affecting public and private sector norms.

In January tech billionaire Elon Musk was reported to be a user of illegal drugs. Putting aside personal views on recreational drug use, such use would be in direct contravention of his SpaceX contract. What has been interesting about this story, together with his purchase of X and the blue-tick verification chaos, is that it has provoked the question of “too big to fail”. If the accusations are true, can the US really afford to withdraw multibillion-dollar contracts over what some may view as bad behaviour?

Sam Altman’s departure from, and swift return to, OpenAI caused a (temporary) ruckus, upending notions of acceptable corporate governance and giving us pause over what employee power really means. The board thought it was the ultimate arbiter of the company’s strategic direction. In fact Microsoft, despite not holding a board seat, could engineer Altman’s reinstatement through a combination of its own job offer and the threat of an employee-backed exodus to follow; a threat that by all accounts was a bluff, but a strong enough one to ensure Altman returned to the helm.

If the rules of engagement are shifting so radically, can we genuinely rely on these selfsame companies to be the hall monitors of our collective democratic destinies? Perhaps, instead, we should accept that democracies are not only about elections, and that elections are often more about personality and charisma than about facts and policy.

And there’s the rub. What we have seen with generative AI is that it relies on adjacency for prediction: how closely words sit together is what enables the model to predict the next one, rather than any understanding of sequence, cause and logic. This lends itself far better to convincing deep fakery in an election climate than to personality. Revisiting the thought experiment above: if deep fakery becomes a globally widespread phenomenon, society will have to wrestle with what that means for our democratic institutions.
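For the technically curious, “prediction by adjacency” can be sketched in a few lines of Python. The toy below merely counts which word follows which in a tiny made-up corpus and then parrots the most frequent follower. Real generative models are vastly more sophisticated neural networks trained over long contexts, but the underlying objective, predicting the next token with no notion of truth, is the same; everything named here is an illustrative invention, not anything from the FT piece.

```python
from collections import Counter, defaultdict

# A deliberately crude sketch of "prediction by adjacency": count which
# word tends to follow which. There is no step that checks sequence,
# cause, logic or truth; fluency falls out of frequency alone.
corpus = "deepfakes spread fast deepfakes spread doubt truth spreads slowly".split()

# Tally adjacent word pairs (a toy bigram model).
follows = defaultdict(Counter)
for word, next_word in zip(corpus, corpus[1:]):
    follows[word][next_word] += 1

def predict(word):
    """Return the most frequent follower of `word` in the corpus, or None."""
    options = follows.get(word)
    return options.most_common(1)[0][0] if options else None

print(predict("deepfakes"))  # -> "spread": statistically likely, sounds fluent
print(predict("truth"))      # -> "spreads": learned purely from adjacency
```

The point of the sketch is what is missing: there is no stage at which the model asks whether its fluent continuation corresponds to anything real.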

The knee-jerk response would be to regulate the hell out of the models, but more nuance is required, because this is a philosophical reckoning rather than a scientific one.

As tempting as it is, we should not delude ourselves into believing societies are simply the sum of our data. Either way, all we can say for certain is that 2024 is going to be a generationally defining year.

• Bassier is COO at Ninety One.
