How deepfakes can manipulate voters
Researchers are concerned about how artificial intelligence will affect political campaigning for elections in 2024
Buenos Aires — In the final weeks of campaigning, Argentine president-elect Javier Milei published a fabricated image depicting his Peronist rival, Sergio Massa, as an old-fashioned communist in military garb, his hand raised aloft in salute.
The apparently AI-generated image drew about 3-million views when Milei posted it on a social media account, highlighting how the rival campaign teams used artificial intelligence (AI) technology to catch voters’ attention in a bid to sway the race.
“There were troubling signs of AI use” in the election, said Darrell West, a senior fellow at the Center for Technology Innovation at the Washington DC-based Brookings Institution.
“Campaigners used AI to deliver deceptive messages to voters, and this is a risk for any election process,” he said.
Right-wing libertarian Milei won Sunday’s run-off with 56% of the vote, tapping into voter anger with the political mainstream, including Massa’s dominant Peronist party. Both sides, however, turned to AI during the fractious election campaign.
Massa’s team distributed a series of stylised AI-generated images and videos through an unofficial Instagram account named “AI for the Homeland”.
In one, the centre-left economy minister was depicted as a Roman emperor. In others, he was shown as a boxer knocking out a rival, starring on a fake cover of New Yorker magazine and as a soldier in footage from the war film “1917”.
Other AI-generated images set out to undermine and vilify Milei, portraying the wild-haired economist and his team as enraged zombies and pirates.
The use of increasingly accessible AI tech in political campaigning is a global trend, tech and rights specialists say, raising concerns about the potential implications for important upcoming elections in countries including the US, Indonesia and India in 2024.
A slew of new “generative AI” tools such as Midjourney are making it cheap and easy to create fabricated pictures and videos.
With few legal safeguards in many countries, there is growing unease about how such material could be used to mislead or confuse voters in the run-up to elections.
“Around the world, these tools to create fake images are being used to try to demonise the opposition,” said West.
“While it is illegal in hardly any country to use AI-generated content, images portraying people saying things they didn’t, or making things up, clearly cross an ethical line.”
Most of the AI-generated images used in the Argentine election campaign were satirical in flavour, seeking to elicit an emotional reaction from voters and spread rapidly on social media.
But AI algorithms can also be trained on copious online footage to create realistic but fabricated images, voice recordings and videos — so-called deepfakes.
During the recent campaign, a doctored video that appeared to show Massa using drugs circulated on social media, with existing footage manipulated to add Massa’s image and voice.
It is a dangerous new frontier in fake news and disinformation, researchers say, with some calling for material containing deepfake images to carry a disclosure label saying it was generated using AI.
“Now they have a tool that allows them to create things from scratch, even though it’s evident that it may be artificially generated,” West said, adding that “disclosure alone does not protect people from harm”.
“It is going to be a huge problem in global elections in the future as it will get increasingly harder for voters to distinguish the fake from the real,” he said.
As AI-generated content becomes more accessible and more convincing, social media platforms and regulators are struggling to stay ahead, said disinformation researcher Richard Kuchta, who works at Reset, a group that focuses on the technology’s effect on democracy.
“It is clearly a cat-and-mouse game,” Kuchta said. “If you look at how misinformation works during an election, it is still pretty much the same. But ... it got massively upscaled in terms of how deceiving it can get.”
He cited a case in Slovakia earlier in 2023, in which fact-checkers scrambled to verify faked audio recordings posted on Facebook just days before the country’s September 30 election.
In the tape, a voice resembling one of the candidates appeared to be discussing how to rig the election.
“Eventually, the piece was dismissed as fake, but it did a lot of harm,” Kuchta said.
Meta Platforms, which owns Facebook and Instagram, said in November that from 2024 advertisers will have to disclose when AI or other digital methods are used to alter or create political, social or election-related advertisements on its sites.
In the US, a bipartisan group of senators has proposed legislation to prohibit “distribution of materially deceptive AI-generated audio, images, or video relating to federal candidates in political ads or certain issue ads”.
Additionally, the US Federal Election Commission wants to regulate AI-generated deepfakes in political adverts to safeguard voters against disinformation ahead of the 2024 presidential election.
Other countries are pursuing similar efforts, though no such regulatory proposals have yet been presented in Argentina.
“We are still in the early stages of AI,” said Olivia Sohr, a journalist at the Argentine fact-checker NGO Chequeado, noting that most of the fake information circulated during the campaign involved fabricated newspaper headlines and false quotes attributed to a specific candidate.
“AI has the potential to elevate disinformation to a new level. But for now, there are other equally effective ways that fulfil their goals without necessarily being as expensive or sophisticated.”
Thomson Reuters Foundation