A wave of deepfakes in the election is not expected, but the risk remains

European Parliament buildings (left) and US Congress (right)

NOS News

  • Nando Castelline

    Technical Editor


At the beginning of this year, residents of the US state of New Hampshire received a call that appeared to come from President Biden: the speaker on the line advised them not to vote in the primaries, because only the real election in November counted.

Only it wasn’t the president making the call: the message was delivered by an AI-generated voice, in other words a deepfake. The calls were traced to a company in Texas, which is now under investigation for illegal activity.

Misinformation about elections has been spreading for years, with Russia frequently mentioned as a source in the past. 2024 is a special election year in which more than half of the world’s population goes to the polls: elections are already underway in India, Europe follows early next month, and the United States votes this fall.

So the stakes are high, while deepfakes have become increasingly easy to create thanks to the rise of generative AI.

One deepfake is enough

“One audio file generated by artificial intelligence, a day or two before the election, is enough to have an impact on the vote,” said Tommaso Canetta, who coordinates fact-checking at the European Digital Media Observatory. That is exactly what happened in Slovakia last year, when a fake audio clip put the leader of the Liberal Party in a bad light shortly before the vote.

According to Canetta, audio is currently the most problematic format. In images generated with AI, you can often still spot (small) aberrations. That was clearly visible in an image of Frans Timmermans that circulated on X last fall: the photo was clearly fake.

AI-generated videos are not yet so good that they are indistinguishable from the real thing, although Sora, OpenAI’s text-to-video model, could change that. For now, it is mostly videos in which the audio is faked and the lips are synced to match.

“Audio deepfakes are the most harmful, because the average user can’t easily recognize them, especially if you don’t pay close attention to the manner of speaking and the grammar,” Canetta says. He stresses that there are good ways to identify these types of deepfakes, but they do not offer a 100 percent guarantee.

Audio: the fake is indistinguishable from the real thing. Listen to Joe Biden’s deepfake here: first you hear Biden’s real voice, then the fake one.

Canetta produces monthly reports on the fact checks carried out by European fact-checking organizations, and also tracks how many of the checked items involve AI-generated content. In March, 87 of the 1,729 items verified, about 5 percent, were AI-generated.

However, according to Canetta, you don’t even need large numbers for deepfakes to have an impact on voters. Tom Dobber, a researcher at the University of Amsterdam, came to the same conclusion with colleagues after an experiment: they had a panel watch a deepfake video of the American Democratic politician Nancy Pelosi, in which she justified the storming of the Capitol.

Democrats were more negative toward Pelosi after watching it. At the same time, Dobber says that establishing a direct link between such an incident and election results is very difficult.

Secondary role

Luc van Bakel, fact-checking coordinator at the Flemish broadcaster VRT, expects deepfakes to play a limited role in the European elections in Belgium and the Netherlands. “It’s simply something extra, a new means that has been added.”

Ultimately, misinformation gains traction when it spreads widely, often through social media like TikTok, Facebook, Instagram, and X. “X has a lot of misinformation,” says Canetta. “But I think other platforms still have a lot to do as well.”

In response to questions, TikTok and YouTube say they remove misleading videos. TikTok also emphasizes that it is not always possible to correctly identify material that has been manipulated using artificial intelligence. Meta (Facebook’s parent company) and X did not respond to NOS’s questions.

VRT’s Van Bakel also points to an undercurrent that is not visible to the public: private conversations in apps such as WhatsApp. Video is thought to circulate mostly on public social media, while audio spreads more in private channels, where deepfakes are less likely to be noticed.
