I think that’s inherent to what’s going on here. Whether you create an image by drawing it, with CGI, or with AI, it is an attempt to capture reality.
We humans are very well trained to recognize certain things, and all of those things effectively become parameters in that process. Some are obvious; others are hard to quantify even for someone who notices when they are off.
CGI (as in rendering a 3D scene) has an advantage here, because many of those parameters are explicitly modeled and interdependent. Think of ray tracing and material properties.
AI approaches it from a completely different angle: it tries to guess pixels, so there is no underlying model connecting things the way they are connected in reality. The likelihood that something will be off increases dramatically as the picture becomes more complex. Lighting, for example, affects everything in a scene, and a lighting error anywhere is immediately noticeable to most people.
And then, of course, there’s a big pitfall in AI that we’re slowly heading toward: training data. As I said, AI art has a recognizable style (especially for non-photorealistic images), and there is a good chance that the next iteration of the training data will contain the very images that were generated before. After all, a number of sites that now host those generated images have signed agreements to serve as training sources.
The result: a feedback loop that keeps reinforcing itself.
AI image -> training data -> new AI image -> back into the training data.
The more people use it, the more everyone gets the same template-like images, and those images feed back into the data, so the bias toward them only increases.
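The loop above can be sketched with a toy simulation. This is only an illustration under made-up assumptions: "images" are single numbers, the "model" just fits a mean and standard deviation, and a `shrink` factor stands in for the model's bias toward its most typical outputs. Each round, the new training set is mostly model output plus a little fresh human-made work, and the diversity of the data shrinks toward a template:

```python
import random
import statistics

random.seed(0)

def train_and_generate(data, n_samples, shrink=0.8):
    """Fit a crude 1-D 'model' (mean/std) and sample from it.
    The shrink factor is an assumption standing in for the
    model's bias toward its most typical ('template') outputs."""
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data) * shrink
    return [random.gauss(mu, sigma) for _ in range(n_samples)]

# Generation 0: diverse human-made "images" (wide spread of styles).
data = [random.gauss(0.0, 1.0) for _ in range(1000)]

spreads = []
for gen in range(6):
    spreads.append(statistics.stdev(data))
    generated = train_and_generate(data, 800)
    # Next round of training data: mostly AI output, some human work.
    data = generated + [random.gauss(0.0, 1.0) for _ in range(200)]

# Diversity (std dev) shrinks generation after generation.
print([round(s, 2) for s in spreads])
```

With these numbers the spread drops each generation but levels off above zero, because 20% fresh human-made data keeps flowing in; cut that inflow and the collapse toward the template gets correspondingly worse.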
If we all stop making things ourselves and use AI for everything, we will stand still.