It’s becoming increasingly easy to create fake news using AI: ‘Soon we won’t know anymore what we can trust’


Image generation technology powered by artificial intelligence is becoming more widely available, and is therefore increasingly being used to spread fake news. Experts warn that telling what is real from what is not is becoming ever more difficult, and more dangerous.

Developments in the field of artificial intelligence (AI) are moving at a rapid pace. A few years ago, creating a fake video took a lot of effort; nowadays it can be done on a smartphone. This has serious consequences for our news consumption: how do you know what is real?

AI news readers on social media

For example, the Chinese government is making extensive use of artificial intelligence to spread disguised propaganda in the form of news items on social media, the British newspaper The Guardian reported today. And according to the American newspaper The Washington Post, the terrorist group Islamic State (IS) is using AI-based newsreaders to spread its extremist message online.

Using new technology to influence people is, of course, nothing new. Think of the 2020 presidential election in the United States, or the MH17 disaster in 2014. Even then, security services warned about Russian botnets spreading fake news on a large scale across the internet.

“Very easy”

But thanks to artificial intelligence, we have now entered a new phase, believes Felienne Hermans, professor of computer science at Vrije Universiteit Amsterdam. “I think the technology now makes it much easier to make videos in which it looks like someone is doing something.”

Not all videos are equally convincing, though, she says. “With some of these videos, you can still hear very clearly that it is computer-generated audio.” But sometimes it is almost indistinguishable from the real thing: “How can you, as a user scrolling through your feed, tell the difference between: this is real and this is fake?”

How to recognize a fake video?

The professor explains that there are a number of tricks to spot whether an image was created with artificial intelligence. “Look at the hands, because they are very difficult to generate. So if someone has six fingers, it isn’t real. Sometimes you can also see it a little in the facial expressions. As humans, we are of course very good at recognizing what a real person looks like.”

It is also important, she continues, to check whether other sources back up the image. “Are other people talking about this? Is there an expert who weighs in?” But that of course takes a lot of effort. “If you’re just scrolling, it’s hard to check every video to see whether you can find a source for it.”

Denial, because everything is fake

In addition, AI image manipulation also makes it possible for people to deny that they said or did certain things. Hermans cites the “grab them by the pussy” controversy surrounding Donald Trump as an example. An audio recording in which the then presidential candidate made the notorious remark was leaked in 2016, just before the election.

“Trump was then still able to say: ‘Well, I just said that, and it’s not that bad at all,’” says the professor. “But he couldn’t say: ‘That wasn’t me.’” That has now changed because of all the fake videos circulating online, she says. “It’s very easy now to say: ‘That was fake.’”

“Danger that people tune out”

According to her, this is the biggest danger. “Because of all the fake videos, soon we won’t know what we can trust anymore,” she explains. “Soon you will have all kinds of information that may contain some truth, but is surrounded by a lot of information that is not quite true or is slightly distorted.”

The result may be that people eventually tune out of the news altogether, warns Hermans: “Like: yeah, we don’t care anymore, it’s all fake anyway. That is the real danger.”

The importance of journalism

This can also damage the reputation of the press. She does not think it is a good idea that Omroep Brabant publishes online videos presented by an AI newsreader. “Even though it’s fun and you want to try out all that cool technology, you are really emphasizing how interchangeable what you do is.”

Hermans stresses that the spread of fake videos in fact shows how important the work of journalists is. “They do more than just report something,” she concludes. “They also do research, interview people, and bring together different points of view. That’s why I think being a journalist is so much more, and that’s something you can’t, and don’t want to, use algorithms for.”

Enforce transparency

But can anything be done about it? Governments have often lagged behind digital developments in the past. “You have to be careful not to adopt a fatalistic attitude,” says the professor. She considers it a good sign that the European Union is trying to regulate artificial intelligence through two new laws, the Digital Services Act and the Artificial Intelligence Act.

“It’s important that we do something about it,” Hermans concludes. According to her, more transparency should be enforced. You could demand that every piece of software capable of creating deepfakes adds a watermark: “Attention! This was created using artificial intelligence.” “These companies would never want that themselves, but you could enforce it through legislation, and that would help.”

Audio: AI-driven fake news makes it increasingly difficult to tell what is real

