fp8 would probably be fine, though the method used to make the quant would greatly influence that.
I don't know exactly how Ollama works, but I would think a more suitable model would be one of these quants:
https://huggingface.co/bartowski/Qwen2.5-Coder-1.5B-Instruct-GGUF
A GGUF model would also allow some overflow into system RAM, if Ollama has that capability like some other inference backends do.
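For reference, a minimal sketch of grabbing one of those quants and loading it directly with llama-cpp-python rather than Ollama; the exact filename is an assumption on my part (check the repo's file list), and Q8_0 is just one of the quants bartowski publishes:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quant from the bartowski repo linked above.
# The filename is a guess, check the repo's "Files" tab for the exact name.
path = hf_hub_download(
    repo_id="bartowski/Qwen2.5-Coder-1.5B-Instruct-GGUF",
    filename="Qwen2.5-Coder-1.5B-Instruct-Q8_0.gguf",
)

# Load it with llama-cpp-python; n_gpu_layers=-1 tries to put every layer on the GPU.
llm = Llama(model_path=path, n_gpu_layers=-1, n_ctx=4096)
out = llm("Write a Python function that reverses a string.", max_tokens=128)
print(out["choices"][0]["text"])
```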
Ollama does indeed have the ability to share memory between VRAM and RAM, but I always assumed it wouldn't make sense, since it would massively slow down generation.
I think Ollama already uses GGUF, since that is how you import a model from HF into Ollama anyway: you have to use the *.gguf file.
As someone with experience in GLSL shader development, I know very well that communication between the GPU and CPU is slow, and sending data from the GPU back to the CPU is a pretty heavy task. So I just assumed it wouldn't make any sense. I will try a full 7B (fp16) model now using my 32 GB of normal RAM to check the speed. I'll edit this comment once I'm done and share the results.
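Some rough napkin math on that experiment (my own estimate, not a measurement), just to see what the weights alone would need:

```python
# Rough back-of-the-envelope numbers, not measured values.
params = 7e9            # ~7B parameters
bytes_per_param = 2     # fp16 stores 2 bytes per weight
weights_gib = params * bytes_per_param / 1024**3
print(f"fp16 weights alone: ~{weights_gib:.1f} GiB")  # roughly 13 GiB

# KV cache and runtime buffers come on top of that, so it should still fit
# in 32 GB of system RAM, but generation will run at CPU speed.
```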
With modern methods, running a larger model split between GPU and CPU can sometimes be fast enough. Here's an example: https://dev.to/maximsaplin/llamacpp-cpu-vs-gpu-shared-vram-and-inference-speed-3jpl
Oooh, a Windows-only feature; now I see why I haven't heard of this yet. Well, too bad I guess. It's time for me to switch to AMD anyway...
The article is written in a bit of a confusing way, but you'll most likely want to turn off Nvidia's automatic VRAM swapping if you're on Windows, so it doesn't happen by accident. Partial offloading with llama.cpp is much faster AFAIK if you want to split the model between GPU and CPU, and it's easier to find out how many layers you can offload, since it simply fails to load when you set the number too high.
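A minimal sketch of what that looks like with llama-cpp-python (the model path and layer count here are placeholders, not values from the article):

```python
from llama_cpp import Llama

# Placeholder path: any local GGUF file works.
MODEL_PATH = "Qwen2.5-Coder-7B-Instruct-Q4_K_M.gguf"

# n_gpu_layers controls how many transformer layers go to VRAM; the rest
# stay on the CPU. If loading fails because VRAM runs out, lower the number
# and try again; that hard failure is what makes the right value easy to find.
llm = Llama(model_path=MODEL_PATH, n_gpu_layers=24, n_ctx=4096)

print(llm("def fibonacci(n):", max_tokens=64)["choices"][0]["text"])
```

Every layer left on the CPU costs speed, so the usual approach is to push n_gpu_layers as high as it will go before the load fails.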
Also, if you want to experiment with partial offload, maybe a 12B around Q4 would be more interesting than the same 7B model at higher precision? I haven't checked whether anything new has come out in the last couple of months, but Mistral Nemo is fairly good IMO, though you might need to limit the context to 4k or something.
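A rough comparison of the two options' weight footprints (the ~4.8 bits per weight figure for a Q4_K_M quant is an approximation on my part):

```python
# Approximate weight sizes only; KV cache and runtime overhead not included.
def weights_gib(params_billion: float, bits_per_weight: float) -> float:
    return params_billion * 1e9 * bits_per_weight / 8 / 1024**3

print(f"7B at fp16 (16 bpw):    ~{weights_gib(7, 16):.1f} GiB")   # roughly 13 GiB
print(f"12B at Q4_K_M (~4.8):   ~{weights_gib(12, 4.8):.1f} GiB") # roughly 6.7 GiB
```

So a 12B at Q4 takes roughly half the memory of a 7B at fp16, which is why it can be the more interesting experiment.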
Oh, that part is. But the splitting tech itself is built into llama.cpp.