this post was submitted on 06 Sep 2023
26 points (93.3% liked)
LocalLLaMA
Community to discuss about LLaMA, the large language model created by Meta AI.
This is intended to be a replacement for r/LocalLLaMA on Reddit.
founded 2 years ago
I've only ever used 7B large language models on my RX 6950 XT, but PyTorch had (or may still have) some nasty AMD VRAM bugs that kept it from fully utilizing my VRAM (it used more like only a quarter of it).
It seems the sad truth is that high-performance inference and training of models just aren't in good shape on AMD cards right now.
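For a sense of why only a quarter of the 6950 XT's 16 GiB being usable is so limiting, here's a rough back-of-the-envelope VRAM estimate for a 7B model. The bytes-per-parameter figures and the 1.2x overhead factor are my own ballpark assumptions, not measurements from this thread:

```python
def model_vram_gib(params_b: float, bytes_per_param: float, overhead: float = 1.2) -> float:
    """Rough VRAM estimate: weight size times an assumed overhead
    factor for activations and KV cache (1.2x is a guess)."""
    return params_b * 1e9 * bytes_per_param * overhead / 1024**3

# 7B parameters in fp16 (2 bytes/param) vs. 4-bit quantized (0.5 bytes/param)
print(f"fp16:  {model_vram_gib(7, 2.0):.1f} GiB")   # close to the full 16 GiB
print(f"4-bit: {model_vram_gib(7, 0.5):.1f} GiB")   # fits, but not in ~4 GiB usable
```

So an fp16 7B model barely fits the card to begin with, and if a bug leaves only ~4 GiB usable, even a 4-bit quantization is a tight squeeze.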
Interesting
Do you only use LLMs, or also Stable Diffusion?