The articles regularly contained errors, some passages appeared to be plagiarized, and it was generally not clear to readers which articles had not been written by humans.
Three things are mentioned here, but I think only two of them are actually problematic.
Errors in the articles: "Although fact-checkers and editors looked at it afterwards…" It seems to me that not enough fact-checkers were deployed, or that they did not look at the text critically enough. Admittedly, this is very difficult. I recently did a little test with ChatGPT, something like "Write a piece about how smart pigs are" – which produced a nice piece about how pigs use mirrors to find food. However, the studies that have been done show that pigs generally do NOT use mirrors. So as a fact-checker you would have to check every one of these statements and see whether there is a study that can support it. And that while the ChatGPT article, read quickly, came across as quite convincing. Anyway: more fact-checkers, who also have to be very critical of every claim that is made.
Plagiarism: This may simply be inherent in the way the AI learns. As I understand it, a lot of articles are fed in, and the output is a mixture of that input. It seems hard to prevent – at most you could use software that checks whether the AI has 'committed' plagiarism.
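To illustrate what such a plagiarism check could look like in its simplest form: a minimal sketch that flags generated text when too many of its word 6-grams appear verbatim in a source corpus. The function names, the n-gram length, and the 10% threshold are all illustrative assumptions, not how any real plagiarism tool works.

```python
# Minimal sketch of n-gram overlap plagiarism checking (illustrative only).
# Idea: if many 6-word sequences from the generated text occur verbatim
# in the source material, the text is likely copied rather than original.

def ngrams(text: str, n: int = 6) -> set:
    """Return the set of word n-grams (as tuples) in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_ratio(generated: str, sources: list, n: int = 6) -> float:
    """Fraction of the generated text's n-grams found verbatim in sources."""
    gen = ngrams(generated, n)
    if not gen:
        return 0.0
    src = set()
    for s in sources:
        src |= ngrams(s, n)
    return len(gen & src) / len(gen)

# Hypothetical usage: flag a piece if more than 10% of its 6-grams
# appear verbatim in the source corpus.
sources = ["pigs are highly intelligent animals that can use mirrors to find hidden food"]
suspect = "studies claim pigs are highly intelligent animals that can use mirrors"
print(overlap_ratio(suspect, sources) > 0.1)  # → True
```

Real plagiarism detectors are far more sophisticated (paraphrase detection, stemming, huge indexed corpora), but the basic principle of comparing text fragments against known sources is the same.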
And if sufficient fact-checking is carried out, and measures are taken against plagiarism, then the third point – that it is unclear which articles were not written by humans – should not really be a problem: in the end it comes down to the content, not who the author is. Right now you want to know whether it was written by a human, because that seems more reliable precisely due to those first two points.
Ultimately, such an AI could be a perfectly useful writing tool — but you shouldn’t (for now) give it a free hand.