Sorry, not trying to come at you; I'm just adding a bit of fact-checking. In that link, they tested on Windows, which means they would have been using DirectML, which is super slow. Did Linus Tech Tips do this? Anyway, the cool kids use ROCm on Linux. Much, much faster.
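For anyone following along, here's a quick sanity check that PyTorch is actually running on the ROCm backend rather than falling back to CPU. This is a minimal sketch, assuming you installed a ROCm build of PyTorch (the `rocm5.6` wheel index in the comment is illustrative; match it to your ROCm install):

```python
# Assumes a ROCm build of PyTorch, installed with something like:
#   pip install torch --index-url https://download.pytorch.org/whl/rocm5.6
import torch

# On ROCm builds, the torch.cuda API is backed by HIP,
# so the same calls report AMD GPUs.
print("GPU available:", torch.cuda.is_available())
print("HIP version:", torch.version.hip)  # set on ROCm builds, None on CUDA builds
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))  # e.g. an AMD Radeon card
```

If `torch.version.hip` prints `None`, you've got a CUDA or CPU-only build installed and you're not actually using ROCm.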
Yeah, that was what I was worried about after reading the article; I've heard about the different backends...
Do you have AMD + Linux + Automatic1111 / Oobabooga? Can you give me some real-life feedback? :D
Haha, you're not; I definitely stumbled into this. These guys mainly build edit systems for post-production companies, so they stick to Windows. Good to know about ROCm, got something to read up on.