It’s as if tech companies hit the gas pedal, then hit the brakes again. Over the past week and a half, a series of major companies in the artificial intelligence (AI) space have announced all kinds of new features.
They all want to retain users. At the same time, they are afraid of public relations missteps, especially now that governments around the world are closely watching what they are developing. A mistake could lead to (even) stricter rules. So they are introducing new products cautiously.
“We’re rolling out these new features more slowly than usual,” Meta CEO Mark Zuckerberg said this week. The message came at the end of a presentation packed with new AI features.
“Alexa, I’m cold.”
The same hesitation could be heard at Amazon, which announced a new version of Alexa last week. The company’s smart assistant of many years is set to be significantly upgraded with the arrival of generative AI. It is on its way, but will first launch as a preview, and only in the United States.
The promise is that Amazon’s voice assistant will soon turn on the heating in response to, say, the remark “Alexa, I’m cold.” Or set the lamps in the room to the colors of your football club: the idea is that Alexa knows which room you are in and which colors go with that club.
At a demonstration, Amazon showed that Alexa, “just like a human,” can pick a conversation back up without the smart assistant having to be summoned again. The tech giant is positioning the update as a “super assistant,” writes technology blog The Verge, which notes that at some point users will have to pay for it.
It’s a delicate balance. No one wants to lag behind the latest developments. But an accident is often just around the corner.
Artificial intelligence voice
OpenAI dares to go the furthest and thereby challenges the rest, although the company is also holding back. This week it launched new functionality for ChatGPT: the app can now speak and listen, in English and currently for paying users only. It is based on technology that can create an “AI voice” from just a few seconds of real audio.
The company says it recognizes that this can easily be abused, for example to make the American president appear to declare war on a country. That is why, OpenAI writes, the voice can for now only be used for conversations within the app. Spotify also uses the technology, to translate podcast episodes into languages other than the creator’s, but again on a very limited scale.
OpenAI has also made a previously announced update available, allowing ChatGPT to recognize the content of images. This was announced in March, but was not immediately made available due to concerns about misuse.
The company now dares to do so, but says it has taken technical measures that “significantly limit ChatGPT’s ability” to identify people, “because ChatGPT is not always accurate and these systems must respect people’s privacy.”
“Big questions cannot be solved simply.”
Emil Kramer, professor and artificial intelligence researcher at Tilburg University, finds the developments “not exciting from a technical perspective.” But, he says, the technology is moving closer to the user.
“The big, fundamental problems surrounding privacy, honesty and hateful content cannot simply be solved in a few months,” emphasizes Jelle Zuidema, assistant professor of explainable artificial intelligence at the University of Amsterdam. In his view, companies seem to be getting a step in before all kinds of rules are imposed.