To do general-purpose GPU compute on AMD hardware you need a GPU that is supported by ROCm (AMD's equivalent to CUDA). Most gaming GPUs are not.
There is a list here, but be aware that it is for the latest ROCm version; some tools may still use older ROCm versions with a different set of supported devices.
https://rocm.docs.amd.com/projects/install-on-linux/en/latest/reference/system-requirements.html#supported-gpus
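For reference (not from the linked page): a quick way to check what ROCm actually sees on your machine is a ROCm build of PyTorch, assuming you have one installed. ROCm builds expose the GPU through the usual torch.cuda API, so a sketch like this works on AMD cards too:

```python
# Minimal sketch (my own, not from the linked docs): check whether a ROCm build
# of PyTorch can actually see the card. Assumes the ROCm wheel of PyTorch is
# installed (from the rocm index at download.pytorch.org for your ROCm version).
import torch

# On ROCm builds torch.version.hip is a version string; on CUDA builds it is None.
print("HIP version:", torch.version.hip)

# ROCm builds reuse the torch.cuda API, so these calls also work on AMD cards.
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
    x = torch.randn(1024, 1024, device="cuda")  # tiny matmul as a smoke test
    print("Matmul OK:", float((x @ x).sum()))
else:
    print("No ROCm-capable device visible to PyTorch.")
```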
Has that changed recently? I've run ROCm successfully on an RX 6800. I seem to recall the card was supported; it was the host OS (Arch) that wasn't.
When I tried it maybe a year or so ago there were four supported chipsets in that ROCm version (5.4.2, I think), but I don't remember which card models those were, since they were listed only by their internal chip names (gfx…). Mine (a 5700 XT) wasn't supported at the time.
No, GFX1030 (the chip in the RX 6800) is still supported.
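For anyone whose card isn't on the list (the 5700 XT is gfx1010, for example): you can see your card's internal name with ROCm's rocminfo tool, and the workaround people usually mention is overriding the reported architecture to a supported one like gfx1030 via the HSA_OVERRIDE_GFX_VERSION environment variable. This is unofficial and not guaranteed to work for every card; the sketch below (assuming a ROCm build of PyTorch, as above) just shows the idea.

```python
# Unofficial workaround sketch: tell the ROCm runtime to treat the card as
# gfx1030. Not supported by AMD and may or may not work for a given card.
# The variable must be set before the HIP runtime initializes, so set it
# before importing torch (or export it in the shell instead).
import os
os.environ["HSA_OVERRIDE_GFX_VERSION"] = "10.3.0"  # report as gfx1030 (RX 6800 class)

import torch

print("HIP version:", torch.version.hip)
print("GPU visible to PyTorch:", torch.cuda.is_available())
```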