Just as people sometimes search for themselves online, I couldn’t resist “GPT-ing” myself and my colleagues. ChatGPT, Bing, Bard, and other language models are not fact machines, but some answers still stand out. A chatbot built on GPT-3 claimed that my colleague Jurian, known for game reviews and podcasting, and I owned a movie studio and had won film awards for our work.
GPT-4 should be better, and while ChatGPT no longer handed me any awards with this release, it told me Jurian had actually been working at another company for five years as a “content strategist”. Either he never told anyone about that job, or it simply wasn’t true.
Anyone who works with chatbots regularly encounters this. GPT-based generators make up facts, events, or people all the time. A chatbot is simply not an encyclopedia: it is a language model that formulates answers by predicting the next most likely word based on context. But why does it make things up? And can anything be done about it? It’s time to look at one of the big downsides of AI text generators: you can’t trust what they say.