@lispi314 most gpu chips are built around just doing a lot of equations and not making decisions, though some of them run at hundreds of megahertz these days. around 2016 some demoscener was writing about how a then-current nvidia card was roughly equivalent to 300 desktop computers from the 90s.
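(to illustrate what "a lot of equations, not decisions" means, here's a minimal C sketch of the straight-line, branchless arithmetic a gpu applies across millions of elements at once; the function name and inputs are made up for the example:)

```c
/* the gpu's bread and butter: the same arithmetic applied to every element,
   with no data-dependent branching. a real gpu runs thousands of these
   "lanes" in parallel; this is just the scalar shape of the work. */
void muladd_all(float *out, const float *a, const float *b,
                const float *c, int n) {
    for (int i = 0; i < n; i++)
        out[i] = a[i] * b[i] + c[i];   /* one multiply-add per element */
}
```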
although for raytracing you do have to make decisions, quite often. there's a lot of "ray vs bounding box" tests followed by "ray vs triangle" tests (sketched below), so i don't know how well an analogue system would do. analogue raytracing is just called a video camera :blobcatcamera:
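(for the curious, here's roughly what those two tests look like: the standard slab method for ray vs box and Möller-Trumbore for ray vs triangle. a plain scalar C sketch, not anyone's production code; the vec3 type and helpers are made up for the example:)

```c
#include <math.h>
#include <stdbool.h>

typedef struct { float x, y, z; } vec3;

static vec3  sub(vec3 a, vec3 b) { return (vec3){a.x-b.x, a.y-b.y, a.z-b.z}; }
static float dot(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static vec3  cross(vec3 a, vec3 b) {
    return (vec3){a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}

/* slab test: ray with origin o and precomputed 1/direction inv_d,
   against the box [bmin, bmax]. one "decision" per bvh node visited. */
static bool ray_aabb(vec3 o, vec3 inv_d, vec3 bmin, vec3 bmax) {
    float t1 = (bmin.x - o.x) * inv_d.x, t2 = (bmax.x - o.x) * inv_d.x;
    float tmin = fminf(t1, t2), tmax = fmaxf(t1, t2);
    t1 = (bmin.y - o.y) * inv_d.y; t2 = (bmax.y - o.y) * inv_d.y;
    tmin = fmaxf(tmin, fminf(t1, t2)); tmax = fminf(tmax, fmaxf(t1, t2));
    t1 = (bmin.z - o.z) * inv_d.z; t2 = (bmax.z - o.z) * inv_d.z;
    tmin = fmaxf(tmin, fminf(t1, t2)); tmax = fminf(tmax, fmaxf(t1, t2));
    return tmax >= fmaxf(tmin, 0.0f);  /* hit if the intervals overlap in front */
}

/* Möller-Trumbore: ray o + t*d against triangle (v0, v1, v2).
   several early-out decisions per triangle tested. */
static bool ray_triangle(vec3 o, vec3 d, vec3 v0, vec3 v1, vec3 v2, float *t) {
    vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    vec3 p = cross(d, e2);
    float det = dot(e1, p);
    if (fabsf(det) < 1e-8f) return false;        /* ray parallel to triangle */
    float inv = 1.0f / det;
    vec3 s = sub(o, v0);
    float u = dot(s, p) * inv;
    if (u < 0.0f || u > 1.0f) return false;      /* outside first barycentric bound */
    vec3 q = cross(s, e1);
    float v = dot(d, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return false;  /* outside the triangle */
    *t = dot(e2, q) * inv;
    return *t > 0.0f;                            /* hit in front of the origin */
}
```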
to some extent games are just badly coded because nobody really cares (they only care about console performance; PC users are pretty much a throwaway market). middling cards today, even intel ones, are good enough for most JRPG graphics when the developers put in even a modicum of optimization for the platform.
it might help if we democratized the silicon, but nobody is really working on that either. there are only scant projects like that (mostly j-core comes to mind, where they re-created the Super-H chips from expired patents and built them in the GNU verilog tools, so they have a whole processor built using FOSS with formally verified correctness), but manufacturing is still boned.
you could possibly try stringing together j-cores to replicate the programmable pipeline; that's basically all gpu processors are: cpus stuffed into arrays with shared input/output pipes and crippled branch processors (a toy model of that is sketched below). idk how you'd mod it to do raytracing, but ultimately Nvidia is paid to shit out pretty graphics and nothing else; it's AMD that makes the more efficient stuff, and we aren't paying a third party to work it out.
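(a toy model of the "crippled branch processors" point: gpu lanes share one instruction stream, so a branch typically becomes an execution mask and both sides of the if get run. this C sketch fakes that with plain loops over 8 lanes; LANES and the numbers are arbitrary, and no real chip is wired like this:)

```c
#include <stdio.h>

#define LANES 8  /* stand-in for one warp/wavefront of lanes */

int main(void) {
    float x[LANES] = {1, -2, 3, -4, 5, -6, 7, -8};
    float y[LANES];
    int mask[LANES];

    /* evaluate the branch condition on every lane first */
    for (int i = 0; i < LANES; i++) mask[i] = (x[i] > 0);

    /* "then" side: only lanes with the mask set actually write */
    for (int i = 0; i < LANES; i++)
        if (mask[i]) y[i] = x[i] * 2.0f;

    /* "else" side: the remaining lanes run afterwards, not instead */
    for (int i = 0; i < LANES; i++)
        if (!mask[i]) y[i] = 0.0f;

    for (int i = 0; i < LANES; i++) printf("%g ", y[i]);
    printf("\n");
    return 0;
}
```

(the cost shows up whenever lanes disagree: both paths burn cycles, which is why shaders avoid divergent branches and why a bag of full cpus with real branch units, like strung-together j-cores, would behave quite differently from a gpu.)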