SCOTT TIMCKE AND MICHAEL HENDRICKSE: New frontiers of information manipulation hit voter choice
AI, persuasive technologies and digital manipulation are reshaping how political messages are spread
13 January 2025 - 05:00
By Scott Timcke and Michael Hendrickse
We live in a volatile, uncertain, complex and ambiguous world, shaped by rapid technological advancement since Covid-19 and by the impact of climate change. Current events point to an apparent regression of democracies on several continents.
All of this while most citizens face the growing socioeconomic challenges of inequality, poverty and unemployment. Nowhere is this more apparent than in electoral democracies.
In affirming “the centrality of elections in the functioning, preservation and effectiveness of our constitutional democracy” in the matter of My Vote Counts v Minister of Justice and Correctional Services & Another, our Constitutional Court has stated that “what is implicitly envisioned by section 19 (of the constitution) is an informed exercise of the right to vote”.
This exercise requires that voters have access to information that allows them to make choices that are ultimately a collective expression of the will of the people. But what if the information the voter is receiving is fake, false or simply directed at causing mayhem, distrust and harm to the constitutional order?
The landscape of democratic elections is undergoing a radical transformation, driven by technologies that threaten the foundation of voter autonomy. What was once a straightforward contest of party programmes has evolved into a terrain in which artificial intelligence (AI), persuasive technologies and digital manipulation are reshaping how political information reaches voters.
In the past few years generative AI has emerged as a potent tool in political communication. These technologies can create targeted, strategic content at lightning speed, tailored to exploit individual fears and vulnerabilities. Unlike traditional campaign messaging, AI-generated content can be automated, distributed by algorithms, and designed to play on the deepest psychological triggers of potential voters.
One insidious aspect of these new technologies is their ability to operate beneath the awareness of users. Political messaging is no longer limited to printed posters on lamp poles, pamphlets distributed at street corners or dropped into post boxes, or radio ads. Persuasive technologies no longer simply recommend content based on previous choices on social media platforms. Instead, they are designed to directly manipulate physiological and cognitive responses, potentially steering electoral choices without individuals’ realisation or consent.
Consider the landscape of digital manipulation. Voice cloning can spread disinformation through WhatsApp, AI-generated content can be indistinguishable from genuine political communication, and targeting technologies can create echo chambers of polarised information. These are not hypothetical threats — they are realities challenging the integrity of democratic processes in different parts of the world.
The Global South faces an even more precarious situation. Because of many overlapping inequalities, a “detection gap” exists, leaving these regions particularly vulnerable to technological election interference. While developed countries begin to wrestle with regulatory frameworks, democracies in the developing world often lack the technological infrastructure to identify and combat these manipulation techniques.
We urgently need a comprehensive approach to protect electoral integrity. Political parties and independents should be required to declare the use of AI in their communications. A binding code of conduct could mandate transparency about AI-generated content, ensuring voters understand the origins of the information they consume. Moreover, major platforms should collaborate directly with election management bodies, embedding staff who can make real-time decisions about potentially harmful content.
Given the seriousness and sometimes complexity of the legal terrain, these provisions must be enforced directly by an appropriate judicial forum, such as a specialist court, with penalties that include imprisonment as well as disqualification and deregistration. In addition to flat-rate fines, the penalty could be commensurate with the funding the party or independent has received, as well as, in appropriate cases, the withholding or reduction of public funds allocated to the offending party or independent.
Another promising approach is the creation of comprehensive disinformation repositories — similar to the Australian Electoral Commission’s “reputation registers” — that not only document instances of disinformation but actively work to educate the public and serve as a quick reference for the media and commentators. Such repositories must be more than passive archives; they should be dynamic tools that proactively challenge false narratives and build public resilience against manipulation.
We may also consider more radical interventions. Banning election betting, which can incentivise the spread of disinformation to manipulate odds and outcomes, could be one such measure. Requiring political communication firms to register with electoral management bodies could provide another layer of accountability.
Platforms must also accept a measure of accountability. Given that the spread of disinformation is not limited to elections and cannot be dealt with piecemeal, a statutory watchdog should be empowered to intervene, engaging social media platforms to ensure active and sustained monitoring for disinformation and hate speech and requiring an immediate response.
These technologies are not just changing how we communicate during the election process — they are fundamentally altering the nature of democratic choice. By exploiting physiological and emotional reactions, persuasive technologies may undermine autonomous decision-making and self-determination.
It is the responsibility of the Electoral Commission of SA (IEC) to deliver free and fair elections, and rightly so, as this is its constitutional mandate and duty as the country’s elections management body. But elections also require a conducive environment, and for that the IEC cannot bear sole responsibility. That responsibility rests on all participants, state institutions, stakeholders and members of society in general.
As we look to future SA elections, legislators must recognise that protecting electoral democracy is no longer just about safeguarding ballot boxes. It is also about preserving the fundamental right of individuals to make conscious, informed choices free from technological manipulation. This will require unprecedented collaboration between technology companies, election management bodies, policymakers and civil society.
Such collaboration must take place within the values of our constitution, balancing rights and ensuring any executive or administrative action is justified within an open and democratic society. The Principles & Guidelines for the Use of Digital & Social Media in Elections in Africa, developed under the auspices of the Association of African Electoral Authorities, is a welcome initiative aimed at enhancing the capacities of election management bodies and other relevant electoral stakeholders to harness the advantages of social media and tackle the adverse effects of new and emerging digital technologies.
The stakes could not be higher. How we prepare for the appropriate uses of digital technology today may well determine the prospects for democratic electoral integrity tomorrow.
• Timcke is a senior research associate at Research ICT Africa, a research associate at the University of Johannesburg's Centre for Social Change, and an affiliate of the Centre for Information, Technology & Public Life at the University of North Carolina at Chapel Hill. Hendrickse, a nonpractising attorney, is Western Cape provincial electoral officer with the Electoral Commission of SA.
Published by Arena Holdings and distributed with the Financial Mail on the last Thursday of every month except December and January.