@kaia with AMD you will have the issue that ROCm is hot garbage (nobody knows why it even exists; Intel is already working on SYCL with the Khronos Group) and nobody supports Vulkan (though there have been some half-serious attempts to get PyTorch et al. to do so, and kompute.cc does run on Vulkan)
AMD cards are quite capable; there just isn't support for the way to actually use the hardware
@Moon@icedquinn they have ROCm and PyTorch support and whatnot, so I don't quite understand. It can run Stable Diffusion, but it's slow. I would hope Nvidia would be way faster, but I won't fork over $2k to find out :sadcat:
@kaia PyTorch is actually a specifically optimized use case for the Tesla cards. They take a dual-slot x16? PCIe slot and 2x 8-pin power, with obviously no display out, as far as I recall. If you can afford the power draw, it's an option. It might be useless outside of those jobs though, so be aware of that.
@kaia@icedquinn@Moon how fast do you want it? I saw in the other thread that you own a previous-gen AMD card; that should give you around 3-4 iterations per second for Stable Diffusion, that's image generation in a few seconds
@Jain@icedquinn@Moon@guizzy yeah, I have an RX 6800 XT and it would be a hassle to sell it and get Nvidia instead. I would probably need a friend, whom I don't wanna disturb, to switch the cards.
I'd only do it for a _significant_ benefit in SD, and that's something I found no real statistics on.
@kaia@icedquinn@Moon@guizzy well then, as i said, your card should get 3-4 iterations, usually 20-30 iterations are enough to do something with, that would mean you would have to wait around 5-9 sec. Of course Nvidia would be faster but is it really worth to sell your card and get a new one to get a performance of idk 2-3 sec? https://cdn.mos.cms.futurecdn.net/iURJZGwQMZnVBqnocbkqPa-1200-80.png
@kaia@Jain@icedquinn@Moon One minute for a 1344x832 picture on an SDXL-based checkpoint, 70 steps, no extra processing (no refiner, hires fix, ADetailer, or ControlNet). That's on an RTX 3070, maybe slowed down by also having part of an LLM loaded on it.