this post was submitted on 04 Jan 2024
LocalLLaMA
Community to discuss LLaMA, the large language model created by Meta AI.
This is intended to be a replacement for r/LocalLLaMA on Reddit.
I've generally tried to avoid Nvidia cards because binary blob drivers are a pain (especially since, as a FLOSS developer, I occasionally need to build newer kernels). I believe the recent firmware changes mean the nouveau driver can now control clocking, but I've no idea what the status is for CUDA, which I assume you need to run the models. A quick sanity check is sketched below.
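For what it's worth, whether CUDA actually works is easy to sanity-check. Here's a minimal device query (hypothetical file name `check_cuda.cu`; assumes the CUDA toolkit and the proprietary kernel module are installed) that fails cleanly if only nouveau is loaded:

```cuda
// Minimal CUDA sanity check: can the runtime see any GPUs?
// Build with: nvcc check_cuda.cu -o check_cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        // Typically fails when only nouveau is loaded, since the CUDA
        // runtime talks to the proprietary Nvidia kernel module.
        printf("CUDA not available: %s\n", cudaGetErrorString(err));
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("Device %d: %s (%.1f GiB VRAM)\n", i, prop.name,
               prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
    }
    return 0;
}
```

If that prints your card and its VRAM, the CUDA stack is working and the model runtimes should find the GPU too.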
They do look pretty affordable though.
If you go for it and need any help, lemme know; I've had good results with Linux and Nvidia lately :)