this post was submitted on 30 Sep 2023

Stable Diffusion


Discuss matters related to our favourite AI Art generation technology


As per the title, does anyone know a good guide for installing and running SDXL on Fedora with an AMD CPU and GPU?

top 9 comments
[–] [email protected] 1 points 1 year ago (1 children)

Automatic1111 is amazing, isn't it?

[–] [email protected] 2 points 1 year ago (1 children)

What GPU do you have? ROCm only runs on certain AMD GPUs, if I remember correctly.

[–] [email protected] 2 points 1 year ago (3 children)

I have a 7900 XTX. I'm such a noob at this. What actually is ROCm?

[–] he29 3 points 1 year ago

ROCm is basically AMD's answer to CUDA. Just (as usual) more open, less polished, and harder to use. Using something called HIP, CUDA applications can be translated to work with ROCm instead (and therefore run on AMD cards without a complete rewrite of the app).
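To make the HIP idea concrete, here's a minimal sketch of the kind of mechanical renaming AMD's hipify tools do to CUDA source. The mapping below is a small illustrative subset I'm assuming for the example, not the tools' full table:

```python
# A few of the textual CUDA -> HIP renames that hipify-style tools apply.
# Illustrative subset only; the real tools cover the whole runtime API.
CUDA_TO_HIP = {
    "cudaMalloc": "hipMalloc",
    "cudaMemcpy": "hipMemcpy",
    "cudaFree": "hipFree",
    "cudaDeviceSynchronize": "hipDeviceSynchronize",
}

def hipify(source: str) -> str:
    """Naively rewrite CUDA API calls to their HIP equivalents."""
    for cuda_name, hip_name in CUDA_TO_HIP.items():
        source = source.replace(cuda_name, hip_name)
    return source

print(hipify("cudaMalloc(&ptr, n); cudaFree(ptr);"))
# -> hipMalloc(&ptr, n); hipFree(ptr);
```

That's why an app like Stable Diffusion doesn't need an AMD-specific rewrite: the CUDA-shaped calls get redirected to HIP, which runs them on the AMD card.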

AFAIK they started working on it 6 or 7 years ago as a replacement for OpenCL. Not sure why exactly, but OpenCL apparently wasn't getting enough traction (and I think Blender even dropped OpenCL support recently).

After all this time, the hardware support is still spotty (mostly just the Radeon Pro cards, with still no proper support for RDNA3, I think), and the software support focuses mainly on Linux. Only three blessed distros (Ubuntu, RHEL and SUSE) get official packages, so it can be a pain to install anywhere else due to missing or conflicting dependencies.

So ROCm basically does work, and keeps getting better, but Nvidia clearly has a larger software dev team, which makes the CUDA experience much more polished and painless.

[–] [email protected] 1 points 1 year ago

ROCm is the Radeon Open Compute ... something beginning with m. It's the set of libraries that AMD writes to divert workloads to your GPU.

I have a 7600 and I'm finding things a bit bleeding edge (on Linux). I highly recommend making sure you're on the latest 5.6.1 release of ROCm, and that the version of PyTorch you're using is a recent nightly built against 5.6. Earlier versions didn't support the 7xxx series.
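One workaround often reported for the 7xxx cards, sketched below with assumptions: when a ROCm build doesn't officially list your GPU, setting `HSA_OVERRIDE_GFX_VERSION` before launching can make it treat your card as a supported architecture. The helper and the override values here are a hypothetical illustration (commonly reported for RDNA3, e.g. gfx1100 for the 7900 XTX); verify them against your own card and ROCm version:

```python
import os

# Hypothetical helper: map a GPU architecture to the HSA override value
# commonly reported to work for it. These values are assumptions from
# community reports, not an official AMD support matrix.
GFX_OVERRIDES = {
    "gfx1100": "11.0.0",  # RX 7900 XTX / XT
    "gfx1102": "11.0.0",  # RX 7600, often overridden to the gfx1100 target
}

def export_override(arch: str) -> str:
    """Set HSA_OVERRIDE_GFX_VERSION for this process and return the value."""
    value = GFX_OVERRIDES[arch]
    os.environ["HSA_OVERRIDE_GFX_VERSION"] = value
    return value

export_override("gfx1100")
```

In practice you'd export the variable in the shell that launches the web UI, so it's inherited by the Python process doing the GPU work.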

If you're on Windows, I think things can work differently.

[–] [email protected] 1 points 1 year ago

Oh boy you're in for a nice learning time.

[–] [email protected] 0 points 1 year ago

Have you tried Automatic1111? https://github.com/AUTOMATIC1111/stable-diffusion-webui

Once you have the "environment" set up, you can download checkpoints from Hugging Face: https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0