Yes! This is precisely what I think is happening. The power needed to _train_ a model is being conflated with the power needed to _use_ it. Those are two completely different things.
Once a model has been trained, it can be used infinitely many times to generate images, and generating one image on my regular NVidia A2000 takes just a few seconds of GPU usage.
Playing a game, by contrast, keeps the GPU busy the whole time.
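A rough back-of-envelope sketch to put that in perspective. The numbers are my own assumptions, not measurements: roughly 70 W of board power for the A2000 under load, about 5 seconds of GPU time per image, and an hour of gameplay keeping the GPU busy throughout.

```python
# Back-of-envelope: energy for one generated image vs. one hour of gaming.
# All figures are assumptions for illustration, not measurements.

GPU_POWER_W = 70          # assumed A2000 board power under load, in watts
SECONDS_PER_IMAGE = 5     # assumed GPU time to generate one image
GAMING_SECONDS = 3600     # one hour of gameplay at full GPU load

image_energy_wh = GPU_POWER_W * SECONDS_PER_IMAGE / 3600
gaming_energy_wh = GPU_POWER_W * GAMING_SECONDS / 3600

print(f"One image:           ~{image_energy_wh:.2f} Wh")
print(f"One hour of gaming:  ~{gaming_energy_wh:.0f} Wh")
print(f"Images per gaming hour's worth of energy: ~{gaming_energy_wh / image_energy_wh:.0f}")
```

Under those assumptions, one image costs on the order of 0.1 Wh, while an hour of gaming costs around 70 Wh, i.e. the energy of several hundred generated images.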