Actress Gal Gadot with her face digitally superimposed over another body. The software used to achieve this has evolved to create human portrait videos in real time, all thanks to the integration of AI. Image: SUPPLIED

Artificial intelligence is now so powerful it can trick people into believing an image of Pope Francis wearing a white, puffy Balenciaga coat is real, but the digital tools to reliably identify faked images are struggling to keep up with the pace of content generation. 

Just ask the researchers at Deakin University’s School of Information Technology, outside Melbourne. Their algorithm performed best at identifying altered images of celebrities in a set of deepfakes last year, according to Stanford University’s Artificial Intelligence Index 2023. 

“It’s a fairly good performance,” said Chang-Tsun Li, a professor at Deakin’s Centre for Cyber Resilience and Trust who developed the algorithm, which proved correct 78% of the time. “But the technology is really still under development.” Li said the method needs to be further enhanced before it is ready for commercial use.

Deepfakes have been around, and prompting concern, for years. Former House Speaker Nancy Pelosi appeared to be slurring her words in a doctored video in 2019 that circulated widely on social media. About a month later, Facebook CEO Mark Zuckerberg was seen in a video altered to make it seem he had said something he didn’t, after Facebook earlier refused to take down the Pelosi video.

While the image of the pope in the puffer was a relatively harmless manipulation, the potential for deepfakes to inflict serious damage, from election manipulation to fabricated sexual imagery, has grown as the technology advances. Last year, a fake video of Ukrainian President Volodymyr Zelensky asking his soldiers to surrender to Russia could have had serious repercussions. 

Big tech companies as well as a wave of start-ups have poured tens of billions of dollars into generative AI to claim a leading role in a technology that could change the face of everything from search engines to video games. However, the global market for technology to root out manipulated content is relatively small. According to research firm HSRC, the global market for deepfake detection was valued at $3.86bn in 2020 and is expected to expand at a compound annual growth rate of 42% through 2026.

Too fast

There is too much attention on AI generation and not enough on detection, said Claire Leibowicz, head of the AI and Media Integrity Programme at the nonprofit Partnership on AI. 

While the buzz around the technology, dominated by applications such as OpenAI’s ChatGPT, has reached fever pitch, executives from Tesla CEO Elon Musk to Alphabet CEO Sundar Pichai have warned of the risks of going too fast. 

It will be a while before detection tools are ready to be used to fight back against the wave of realistic-looking altered images from generative AI programs such as Midjourney, which produced the pope image, and OpenAI’s DALL-E. Part of the problem is the prohibitive cost of developing accurate detection, and there is little legal or financial incentive to do so.

“I talk to security leaders every day,” said Jeff Pollard, an analyst at Forrester Research. “They are concerned about generative AI. But when it comes to something like deepfake detection, that is not something they spend budget on. They have so many other problems.” 

Still, a handful of start-ups such as Netherlands-based Sensity AI and Estonia-based Sentinel are developing deepfake detection technology, as are many of the big tech companies. Intel launched its FakeCatcher product last November as part of its work in responsible AI. The technology looks for authentic clues in real videos by assessing human traits such as blood flow in the pixels of a video, and can detect fakes with 96% accuracy, according to the company.

“The motivation of doing deepfake detection now is not money; it is helping to decrease online disinformation,” said Ilke Demir, senior staff research scientist at Intel. 

Financial fraud

So far, deepfake detection start-ups mainly serve governments and businesses that want to reduce fraud and aren’t aimed at consumers. Reality Defender, a Y-Combinator-backed start-up, charges fees based on the number of scans it performs. Those costs range from tens of thousands of dollars to millions, to cover expensive graphics processing chips and cloud computing power. 

Platforms such as Facebook and Twitter are not required by law to detect and flag deepfake content on their platforms, leaving consumers in the dark, said Ben Colman, CEO of Reality Defender. “The only organisations that do anything are the ones such as banks that have a direct connection to financial fraud.”

Current methods of detecting fake images and videos involve training computers on examples to compare visual characteristics in the content, and embedding watermarks and camera fingerprints in original works. But the rapid proliferation of deepfakes requires more powerful algorithms and computing resources, said Xuequan Lu, another Deakin University professor who worked on the algorithm.

And without a commercially available and hugely adopted tool to distinguish fake online content from real, there is much opportunity for bad actors.

“What I see is similar to what I saw in the early days of the antivirus industry,” said Ted Schlein, chair and general partner at Ballistic Ventures, who invests in deepfake detection and was an early investor in antivirus software. As hacks became more sophisticated and damaging, antivirus software developed and eventually became cheap enough for consumers to download on their PCs.

“We’re at the very beginning stages of deepfakes,” Schlein said, adding that so far the technology is mostly being used for entertainment. “Now you’re just starting to see a few of the malicious cases.”

But even if it is cheap enough, consumers might be unwilling to pay for such technology, said Shuman Ghosemajumder, head of artificial intelligence at F5 Inc, a security and fraud-prevention company. 

“Consumers don’t want to do any additional work themselves,” he said. “They want to automatically be protected as much as possible.”  

Bloomberg News. More stories like this are available on bloomberg.com
