How realistic would it be to convert average graphics-related GPU computational tasks to analog computing accelerator cards?
Certainly the entirety of ray tracing & lighting could go that way, but is that most of the load? Is the rest of the load convertible without completely ruining the results?
I have no doubt that one would still end up requiring both a digital GPU and the analog accelerator, but how much would it alleviate things?
What brings this to mind is cards like the 4090 drawing 450W under load. That's a ridiculous amount of power; it's literally more than my lab normally uses (even during scheduled disk scrubs).