Society must prepare for incidents involving AI

Dutch society must prepare for incidents involving AI. According to the Dutch Data Protection Authority (Autoriteit Persoonsgegevens, AP), the privacy regulator, everyone from ministers and large companies to individual citizens should be careful when using AI.

The regulator sees AI being used more and more widely in society, and with that the risk of incidents is growing. According to the AP, organisations that doubt whether the risks are under control should be cautious about using AI.

Scam

As an example of such an incident, the regulator cites a recent scam in Asia. A finance department employee there received an email asking him to transfer €23 million. The employee was suspicious at first, but after an online meeting within the company he transferred the money anyway.

In reality, the CFO and all of the colleagues in that meeting had been impersonated with the help of AI. The cybercriminals did this using deepfake technology; the defrauded employee was the only real person in the call.

According to the regulator, people need to become more aware of the risks of AI, and knowledge about the technology must improve.

Technology is evolving rapidly

The Dutch Data Protection Authority believes that governments are dealing with AI "decisively": there is proper oversight and cybercrime is being combated effectively. The regulator stresses, however, that the technology is developing rapidly and that this therefore remains a matter of patience.

Research also shows that a significant number of municipal councillors question whether they have sufficient knowledge of AI and of the systems used within their organisation. The AP is calling for this knowledge to be strengthened.

Criticism of big companies

The AP also criticises large companies that want to bring AI applications to market as quickly as possible. The regulator sees this regularly go wrong, with systems sometimes having to be withdrawn as a result.

Government bodies such as the central government, provinces and municipalities keep a register of the AI systems they use. The regulator wants semi-public organisations to disclose this as well. This concerns, for example, education, healthcare, housing associations and public transport, because AI is used there in situations where people are vulnerable.
