this post was submitted on 27 Jun 2024
LocalLLaMA
Community to discuss about LLaMA, the large language model created by Meta AI.
This is intended to be a replacement for r/LocalLLaMA on Reddit.
They can be, I suppose. However, the AI libraries I was tinkering with all seemed to be built around Ubuntu and Nvidia. GPU passthrough to Docker containers works much better under Linux with Nvidia hardware.
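For reference, this is roughly what that passthrough looks like on a Linux host. A minimal sketch using the docker Python SDK, assuming Docker and the NVIDIA Container Toolkit are already installed (the CUDA image tag is just an example):

```python
# Sketch: run nvidia-smi inside a container with all host GPUs passed through.
# Assumes a Linux host with Docker, the NVIDIA Container Toolkit, and the
# `docker` Python SDK (pip install docker). The image tag is just an example.
import docker

client = docker.from_env()
output = client.containers.run(
    "nvidia/cuda:12.4.1-base-ubuntu22.04",  # example CUDA base image
    "nvidia-smi",                           # lists the host GPUs if passthrough works
    device_requests=[
        # count=-1 requests every GPU; equivalent to `docker run --gpus all`
        docker.types.DeviceRequest(count=-1, capabilities=[["gpu"]])
    ],
    remove=True,
)
print(output.decode())
```

If that prints the usual nvidia-smi table, passthrough itself is working and anything left to debug lives inside the framework images.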
WSL improved things a bit after I picked up an older GTX 1650. For my AMD GPU, ROCm support is (was?) garbage under Windows, whether through Docker or WSL. I don't remember having much difficulty with the Nvidia drivers, though there may have been some strange dependency problems that I was able to work through.
AMD GPU passthrough from Windows to Docker containers was a complete no-go, though. That part I remember fairly clearly.
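That lines up with how ROCm passthrough works in general: on a Linux host the container just needs the /dev/kfd and /dev/dri device nodes, and those simply don't exist on a Windows host, which is why it can't work there. A rough Linux-only sketch, again assuming the docker SDK plus ROCm-capable drivers on the host (the image tag is an example):

```python
# Sketch: ROCm passthrough on a Linux host by exposing the kernel device nodes.
# These /dev entries are Linux-only, which is why there is no Windows equivalent.
# Assumes ROCm-capable AMD drivers on the host; the image tag is just an example.
import docker

client = docker.from_env()
output = client.containers.run(
    "rocm/dev-ubuntu-22.04",        # example ROCm image
    "rocminfo",                     # enumerates AMD GPUs if passthrough works
    devices=[
        "/dev/kfd:/dev/kfd:rwm",    # ROCm compute interface
        "/dev/dri:/dev/dri:rwm",    # GPU render nodes
    ],
    group_add=["video"],            # device nodes are typically owned by the video group
    remove=True,
)
print(output.decode())
```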
My apologies. It has been a few months since I messed with this stuff.