Conversation
-
How realistic would it be to convert average graphics-related GPU computational tasks to analog computing accelerator cards?
Certainly the entirety of ray tracing & lighting could go that way, but is that most of the load? Is the rest of the load convertible without completely ruining the results?
I have no doubt that one would still end up requiring both a digital GPU and the analog accelerator, but how much would it alleviate things?
What brings this to mind is the notion of things like the 4090 using 450W to work. That's a ridiculous amount of power. That's literally more than my lab normally uses (even during scheduled disk scrubs).
-
@lispi314 well, that's an upper bound; they don't use the full TDP all the time.
raytracing is kind of cursed though. it requires doing a lot of math and there really aren't clever ways around it.
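(for a rough sense of the scale involved: a 1080p frame is about 2.07 million pixels, so even a single primary ray per pixel at 60 fps is already roughly 124 million rays per second, before any bounces, shadow rays, or extra samples per pixel.)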
-
@lispi314 most of the gpu chips are built around just doing a lot of equations and not making decisions, though some of those cores these days run at hundreds of megahertz. around 2016 some demoscener was writing about how a then-current nvidia card was roughly equivalent to 300 desktop computers from the 90s.
although for raytracing you do have to make decisions, quite often. there's a lot of 'line vs bounding box' tests followed by 'line vs triangle' tests, so i don't know how much good an analogue system would do. analogue raytracing is just called a video camera :blobcatcamera:
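to make those tests concrete, here's a minimal sketch (plain C, names made up) of the 'line vs bounding box' check as the usual slab test; a BVH traversal runs this kind of compare-and-branch work millions of times per frame, which is exactly the part that doesn't map cleanly onto an analogue datapath:

    #include <stdbool.h>

    typedef struct { float v[3]; } vec3;

    /* Slab test: does a ray (origin o, precomputed 1/direction inv_d)
     * hit the axis-aligned box [bmin, bmax] within the interval [tmin, tmax]? */
    bool ray_vs_aabb(vec3 o, vec3 inv_d, vec3 bmin, vec3 bmax,
                     float tmin, float tmax) {
        for (int axis = 0; axis < 3; axis++) {
            float t0 = (bmin.v[axis] - o.v[axis]) * inv_d.v[axis];
            float t1 = (bmax.v[axis] - o.v[axis]) * inv_d.v[axis];
            if (t0 > t1) { float tmp = t0; t0 = t1; t1 = tmp; } /* sort entry/exit */
            if (t0 > tmin) tmin = t0;
            if (t1 < tmax) tmax = t1;
            if (tmin > tmax) return false;  /* ray misses this slab */
        }
        return true;
    }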
to some extent games are just badly coded because nobody really cares (they only care about console performance; PC users are pretty much a throwaway market). middling cards today, even intel ones, are good enough for most JRPG graphics when the developers put in even a modicum of optimization for the platform.
it might help if we democratized the silicon, but nobody is really working on that either. there are only scant projects like that (mostly j-core comes to mind, where they re-created the Super-H chips from expired patents and built them with the GNU Verilog tools, so they have a whole processor built using FOSS and formally verified for correctness), but manufacturing is still boned.
you could possibly try stringing together j-cores (that's all gpu processors are: cpus stuffed in arrays with shared input/output pipes and their branch processors crippled) to replicate the programmable pipeline. idk how you'd mod it to do raytracing, but ultimately Nvidia is paid to shit out pretty graphics and nothing else; it's AMD that makes the more efficient stuff. and we aren't paying a third party to work it out.
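as a rough illustration of what 'crippled branch processors' means in practice (a toy model in C, all names made up; real hardware does this per warp/wavefront): when lanes disagree on a branch, the GPU effectively evaluates both sides and masks out the unwanted results, so heavily divergent code like ray traversal wastes a lot of lanes:

    #include <stdio.h>

    #define WARP 8  /* lanes that execute in lockstep */

    int main(void) {
        float x[WARP] = {1, -2, 3, -4, 5, -6, 7, -8};
        float out[WARP];

        /* "if (x < 0) out = -x; else out = 2*x;" in SIMT style:
         * every lane computes BOTH branches, and a per-lane predicate
         * picks which result to keep. */
        for (int lane = 0; lane < WARP; lane++) {
            int   take_then = (x[lane] < 0.0f); /* per-lane mask bit       */
            float then_val  = -x[lane];         /* computed even if unused */
            float else_val  = x[lane] * 2.0f;   /* computed even if unused */
            out[lane] = take_then ? then_val : else_val;
        }

        for (int lane = 0; lane < WARP; lane++)
            printf("%g ", out[lane]);
        printf("\n");
        return 0;
    }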
-
@lispi314 make them use $600 computers and they will.
-
@icedquinn > to some extent games are just badly coded because nobody really cares (they only care about console performance; PC users are pretty much a throwaway market). middling cards today, even intel ones, are good enough for most JRPG graphics when the developers put in even a modicum of optimization for the platform.
It certainly would be nice if most devs cared even half as much as the Doom Eternal ones.
-
@Reiddragon @lispi314 the funny part is that the Cycles team is probably making the best use of RTX chips.
they adapted it to use them for faster rendering.
-
@lispi314 @icedquinn reminds me of when Crytek did real-time raytracing on a Vega 56 for the lulz, but then most commercial games that used DirectX raytracing ran like dogshit on a 1080 Ti once nvidia finally allowed it (tho again, it could also be that nvidia made it run like shit so they could sell more 2080s)