Each GPU-and-driver combination has its own path to the final output, with many layers of optimization and translation in between, and that affects how things end up looking.
CPU rendering isn't like that: what you're doing is pure math, determined entirely by the source file/scene, and it will give the same output on every CPU or render farm.
There is no need for developer intervention to make your content display consistently.
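To illustrate the determinism point, here is a minimal sketch (a toy Lambertian shading function with made-up scene values, not any real renderer's API): because the result is pure arithmetic on the scene description, any conforming CPU produces the same answer.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = math.sqrt(dot(v, v))
    return tuple(x / length for x in v)

def shade_pixel(normal, light_dir, albedo):
    # Lambertian (diffuse) shading: the output depends only on the
    # inputs, not on the GPU, driver, or display pipeline.
    n = normalize(normal)
    l = normalize(light_dir)
    intensity = max(0.0, dot(n, l))
    return tuple(channel * intensity for channel in albedo)

# Light hitting the surface head-on: full albedo comes back.
color = shade_pixel((0.0, 0.0, 1.0), (0.0, 0.0, 1.0), (0.8, 0.2, 0.1))
```

A real production renderer is vastly more complex, but the principle is the same: the scene file fully determines the image.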
We are now starting to get fully GPU-based tools.
These are programs for, e.g., quick simple previews or fx simulations.
In a large 3D VFX project, you often export these assets or fx to a format that can be rendered on the CPU,
so that everything works together and appears consistent in the final frames.
There are exceptions to all of these things; none of it is entirely black and white.
But we’re a lot further away from real-time rendering of high-quality feature films than some people think.
At the moment, we can run parts of the production process in real time if we want.
The drawback is that setting up and managing the pipeline and the teams, under real production constraints, creates more problems than it solves.
That’s why you see many productions getting paid by real-time engine vendors like Epic (Unreal) to use their engine in production.
Then they use it for a shot or a sequence because they were paid to,
not because it was the best or most practical option.
The dojo scene in the latest Matrix film, for example.