@kaia It used to be a bit harder, but I think now you basically just have to follow https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs#running-natively
You have to install a bunch of rocm packages; these are the ones I have installed:

rocm-clang-ocl 5.6.1-1
rocm-cmake 5.6.1-1
rocm-core 5.6.1-1
rocm-device-libs 5.6.1-1
rocm-hip-libraries 5.6.1-1
rocm-hip-runtime 5.6.1-1
rocm-hip-sdk 5.6.1-1
rocm-language-runtime 5.6.1-1
rocm-llvm 5.6.1-1
rocm-opencl-runtime 5.6.1-1
rocm-smi-lib 5.6.1-1
rocminfo 5.6.1-1

I also have Python 3.11.5 installed from the AUR, since Arch has already moved to 3.12 but pytorch still requires 3.11. So I created the virtual environment with python3.11 -m venv to make sure it uses 3.11. I also changed the rocm URL from the guide's https://download.pytorch.org/whl/rocm5.1.1 to https://download.pytorch.org/whl/rocm5.4.2 (apparently there's already rocm5.6 now, but I haven't tried that).
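Roughly, the setup steps look like this (just a sketch: I'm assuming yay as your AUR helper, that the Python 3.11 package in the AUR is called python311, and that you've already cloned the webui repo):

# rocm-hip-sdk should pull in most of the rocm packages listed above
sudo pacman -S rocm-hip-sdk rocm-opencl-runtime rocminfo
# Python 3.11 from the AUR (the package name may differ)
yay -S python311
cd stable-diffusion-webui
# create the venv explicitly with 3.11 so it doesn't pick up the system 3.12
python3.11 -m venv venv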
For launching I just use this shell script:
#!/bin/bash
source venv/bin/activate
git pull
TORCH_COMMAND='pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/rocm5.4.2' python launch.py --opt-sub-quad-attention --no-half-vae --api
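To use it, drop it in the stable-diffusion-webui directory, make it executable and run it (webui-rocm.sh is just a placeholder name, call it whatever you like):

chmod +x webui-rocm.sh
./webui-rocm.sh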
Lastly I have these variables in /etc/environment:

HSA_OVERRIDE_GFX_VERSION=10.3.0
MIOPEN_DEBUG_COMGR_HIP_PCH_ENFORCE=0
PATH=/opt/rocm/bin:/opt/rocm/llvm/bin
LLVM_PATH=/opt/rocm/llvm
ROCM_PATH=/opt/rocm

I'm not sure if they're still required though.
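Keep in mind /etc/environment is only read at login, so log out and back in (or reboot) before testing. A quick sanity check that the variables are actually set (not from the guide, just something I'd do):

env | grep -E 'HSA_OVERRIDE|ROCM_PATH|LLVM_PATH|MIOPEN'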
You can check whether the virtual environment is set up correctly by opening a python shell and running:

import torch
torch.cuda.is_available()
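If you don't want an interactive shell, the same check works as a one-liner from inside the venv:

python -c 'import torch; print(torch.cuda.is_available())'

It should print True; with the ROCm build of pytorch, the AMD card shows up through the torch.cuda API.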