this post was submitted on 12 Jun 2023
16 points (100.0% liked)
LocalLLaMA
Community to discuss about LLaMA, the large language model created by Meta AI.
This is intended to be a replacement for r/LocalLLaMA on Reddit.
It also really depends on VRAM, IMO. I have a 4090, and these days I don't tend to touch anything under 30B (Wizard Uncensored is really good here). If I had dual 3090s, I would likely be running a 65B model.
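For a rough sense of why those sizes line up with that hardware, here's a back-of-the-envelope sketch. My assumptions: ~4-bit quantized weights plus roughly 20% overhead for context/KV cache; actual usage varies with the loader, quantization scheme, and context length, so treat the numbers as illustrative only.

```python
# Rough VRAM estimate for quantized LLaMA-style models.
# Assumes ~4-bit weights plus a fudge factor for activations/KV cache;
# real usage depends on context length, quantization scheme, and loader.

def estimate_vram_gb(params_billion: float, bits_per_weight: float = 4.0,
                     overhead_fraction: float = 0.20) -> float:
    """Return an approximate VRAM footprint in GB for a quantized model."""
    weight_gb = params_billion * (bits_per_weight / 8)  # params (B) * bytes/param
    return weight_gb * (1 + overhead_fraction)

for size in (13, 30, 65):
    print(f"{size}B @ 4-bit: ~{estimate_vram_gb(size):.1f} GB VRAM")
# 30B @ 4-bit: ~18 GB  -> fits on a single 24 GB 4090
# 65B @ 4-bit: ~39 GB  -> needs something like dual 3090s (2 x 24 GB)
```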