Conversation
SunMcNukes (sunmcnukes@norwoodzero.net)'s status on Saturday, 24-Feb-2024 21:24:48 JST SunMcNukes
@BowsacNoodle cc @lonelyowl13
RE: https://poa.st/objects/82011272-0073-492e-bb97-ada323b15539-
Owl! 🦉 (lonelyowl13@annihilation.social)'s status on Saturday, 24-Feb-2024 21:24:48 JST Owl! 🦉
@sunmcnukes @BowsacNoodle
I could be mistaken, of course, but...
Probably because most AI engineers work with high-level SDKs like TensorFlow or PyTorch, which already support multiple GPU backends; PyTorch can even run on Vulkan. So it might be more efficient for Intel and AMD to contribute proper ROCm/oneAPI support to PyTorch instead of creating a CUDA compatibility layer.
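A rough sketch of what that backend-agnostic approach looks like in practice (the helper function and its structure are illustrative, not something from the thread): the same PyTorch code runs unchanged whether the build targets CUDA, ROCm, or CPU, because ROCm builds expose AMD GPUs through the same "cuda" device name.

```python
import torch

def pick_device():
    # On both CUDA and ROCm builds of PyTorch, accelerators show up
    # under the "cuda" device name, so one check covers both vendors.
    if torch.cuda.is_available():
        return torch.device("cuda")
    # Apple-silicon Metal backend, if this build has it.
    if torch.backends.mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")

device = pick_device()
x = torch.ones(2, 2, device=device)
print(x.sum().item())  # 4.0 on any backend
```

This is why backend support tends to live inside the framework rather than in a compatibility layer: user code never names the vendor API directly.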
I believe the original purpose of creating this thing was to support NVIDIA's proprietary AI tools like DLSS/DLAA, and that's a pretty small market.