Is artificial intelligence dangerous? Other questions about artificial intelligence have been answered

The arrival of ChatGPT shows that we must prepare for a future in which artificial intelligence (AI) plays an important role. This article answers your frequently asked questions.

It is evident that artificial intelligence will affect the whole of society, according to Mehdi Dastani, Professor of Artificial Intelligence at Utrecht University. “It is a ‘system technology’, comparable to electricity or the steam engine, that influences how our world is organized: the labor market, our behavior and our ways of thinking.”

1. What are the potential risks of AI?

One example of how AI is affecting the way we think and behave, Dastani says, is that companies send more and “better” ads. “By doing so, they influence what we buy and how we feel about things.”

And there are risks here, too: “If AI becomes an essential part of our daily lives, we will become dependent on it. Think of the speed with which we have become attached to technologies such as navigation systems and translation applications. New technologies are now being developed so quickly, with the help of AI, that they are not properly tested and evaluated. If we start using them collectively, that makes us vulnerable.”

This is confirmed by Max van Duijn and Tom Kouwenhoven, researchers at the Leiden Institute of Advanced Computer Science (LIACS). “You can see our vulnerability right now in ChatGPT, an intelligent chatbot. The tool is hugely popular, and people play with it because they find it cool. But that makes them tend to overlook why it is being thrown into their lap for free,” says Van Duijn. The researcher suspects that the company behind ChatGPT mainly wants to make a profit in the long run. “Through our playing, we contribute to the development of the paid subscription.”

Kouwenhoven mentions another danger: “We tend to blindly accept what artificial intelligence, like ChatGPT, gives us. That is risky, if only because the data a tool like ChatGPT is trained on is usually several months old. We therefore really need to develop a critical eye in using these kinds of tools.”

EenVandaag Asks

This article contains answers to questions submitted via EenVandaag Asks. With EenVandaag Asks, we let you influence what we make. Would you like to join? Download the EenVandaag Poll app, go to Settings and turn on your notifications for EenVandaag Asks. You can find the questions and answers under “Join us”. The EenVandaag Poll app can be downloaded for free from the App Store or Play Store.

2. How do we keep control of the development of artificial intelligence, or are we already behind?

“We are not too late,” says Professor Dastani. “But developments are moving very quickly. This is due to three things: first, more and more data is being kept about how we think, how we write, and how we act. Second, computers are getting faster and cheaper. And third, there are now hundreds of thousands of AI experts all over the world. Big companies benefit from all three of those things: they have the data, the computing power and the top experts.”

There is nothing wrong with that in and of itself, says Dastani. “It is part of a free economy.” But tech figures like Elon Musk warned last week about the dangers of artificial intelligence. They are calling for committees, international legislation, quality marks and certification. Dastani: “The problem is that companies are driving massive technological developments, but we can’t keep up with them legally. Laws are being made, but that process is much slower than the technological developments themselves.”

Governments and legal bodies thus have little grip on developments, which is why some tech experts want tech companies to pause themselves. “That, of course, is not in the nature of the business community, which seeks to get the maximum out of things,” says the professor.

3. Is it realistic to care about the possibility of technology controlling people?

It’s something we often see in movies or books: robots break free from the cage of their programmers and go their own way, Van Duijn and Kouwenhoven laugh. Van Duijn: “The question is what exactly we are afraid of. I think it is a justified fear that people or parties with questionable interests will control us more effectively through AI, and will misuse it. But we will not be completely dominated by autonomous technologies any time soon.”


Van Duijn believes that technology will get smarter in the next decade. “Technologies are sometimes very smart, but essentially within a single domain. I think the number of domains will increase and overlap. In the next decade we will definitely be amazed at what technology can do, but until now people have always been smart and innovative enough to deal with it, and even to increase their own intelligence with it.”

A few years ago, a Google computer defeated the world champion at the board game Go, a game from Asia comparable to chess. Scientific papers published last month show that people have since developed tactics to beat the computer again. So even when people are defeated, they can push themselves further and come up with creative new solutions.

4. Can we be sure that AI is “telling the truth”?

“This is really a question that a new tool like ChatGPT raises,” Dastani says. “People often assume any ChatGPT answer is correct. But that need not be the case.”

“Truth is a by-product for ChatGPT,” says Van Duijn. “The tool is primarily designed to produce new language in the most natural way possible, and to act as a useful assistant. That’s not to say the tool makes everything up, but ‘truth’ isn’t its first priority.”

The company behind ChatGPT has committed to developing systems that can estimate whether or not a text was written by ChatGPT. But these systems do not exist yet. Kouwenhoven: “You can ask the tool how sure it is of an answer, on a scale of 1 to 10, for example. But whether the answer is ‘correct’ depends on the data it has been trained on. The model itself has no knowledge of right or wrong, and it remains human work to judge that.”

5. How do you know from which sources the AI gets its information and how reliable the results are?

Kouwenhoven and Van Duijn emphasize that very little information has been released about ChatGPT. They explain that the tool takes information from various sources and combines that information into an answer. “The tool itself doesn’t know exactly where it got something from. You can ask where the information came from, but then the tool will probably just look for a source that matches the information it gave you earlier, rather than naming the places where specific pieces of information actually come from,” says Van Duijn.


Kouwenhoven: “Artificial intelligence is of course much broader than ChatGPT alone. But the following applies to all systems: if the data fed into a tool is incorrect, it will also produce incorrect answers. No tool can really develop beyond the data it has been given.”

Dastani: “There are many tools that make predictive links between different data and then arrive at a prediction. For example, if a tool receives data about income and gender, it can come to the conclusion that women’s income is lower than men’s. That may be true on average, but it may not be desirable or applicable in an individual case.”
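Dastani’s point can be made concrete with a minimal sketch. The data below is entirely hypothetical, and the “model” is deliberately the simplest one imaginable (a per-group average); it is not how any real system is built, but it shows the mechanism he describes: a predictive tool simply reproduces the patterns, including the biases, present in its training data.

```python
# Hypothetical training records: (gender, annual income in euros).
# Any income gap encoded here will be inherited by the model.
training_data = [
    ("woman", 34000), ("woman", 36000), ("woman", 33000),
    ("man",   40000), ("man",   42000), ("man",   41000),
]

def train_group_means(records):
    """The simplest possible 'model': average income per group."""
    totals, counts = {}, {}
    for gender, income in records:
        totals[gender] = totals.get(gender, 0) + income
        counts[gender] = counts.get(gender, 0) + 1
    return {g: totals[g] / counts[g] for g in totals}

model = train_group_means(training_data)

# The model now 'predicts' a lower income for every individual woman
# than for every man -- true on average in this data, but possibly
# wrong and undesirable for the individual case in front of you.
print(model["woman"])  # 34333.33...
print(model["man"])    # 41000.0
```

The bias here does not come from the averaging code, which is neutral; it comes entirely from the data, which is exactly the distinction Kouwenhoven and Dastani draw.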

6. What is an example of a useful or helpful outcome for an AI?

There are many advantages, according to Professor Dastani. “Artificial intelligence can take over work that few people want to do. And in science it can lead to new discoveries. AI can help in the medical sciences, for example, when it is used to link DNA characteristics to specific diseases.”

Dastani emphasizes that the development of artificial intelligence is not a bad thing in itself. “The problem is that we have to learn how to use it. Just like with the calculator: that turned out fine, but we had to work out in which exams we allow it and in which we don’t.”
