this post was submitted on 21 Oct 2024
27 points (100.0% liked)
Free Open-Source Artificial Intelligence
you are viewing a single comment's thread
Mixtral in particular runs great with partial offloading; I used a Q4_K_M quant while only having 12GB of VRAM.
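If anyone wants to try partial offloading themselves, here's a rough sketch with llama-cpp-python (just an illustration, not my exact setup; the model path and layer count are placeholders you'd tune to your own VRAM):

```python
# Rough sketch of partial GPU offloading with llama-cpp-python.
# The model path and n_gpu_layers value are placeholders -- raise the
# layer count until the model barely fits in your VRAM (e.g. ~12 GB).
from llama_cpp import Llama

llm = Llama(
    model_path="mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf",  # hypothetical local file
    n_gpu_layers=18,   # offload only part of the layers to the GPU, rest stays on CPU
    n_ctx=4096,        # context window
)

out = llm("Explain partial offloading in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```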
To answer your original question, I think it depends on the model and use case. Complex logic such as programming seems to suffer the most from quantization, while RP/chat can take much heavier quantization and stay coherent. Most people seem to think quantization around 4-5 bpw gives the best value, and you get diminishing returns above 6 bpw, so I know few who think it's worth using 8 bpw.
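To put rough numbers on the bpw trade-off, here's a back-of-the-envelope estimate of weight size (weights only, ignoring metadata and mixed-precision layers, so real GGUF files run a bit larger):

```python
# Back-of-the-envelope weight size for different quantization levels.
# Treat these as rough lower bounds, not exact GGUF file sizes.
def approx_size_gb(params_billion: float, bpw: float) -> float:
    bytes_total = params_billion * 1e9 * bpw / 8
    return bytes_total / 1e9

for bpw in (2.0, 4.0, 5.0, 6.0, 8.0):
    print(f"70B at {bpw} bpw ≈ {approx_size_gb(70, bpw):.0f} GB")

# 70B at 4.0 bpw ≈ 35 GB, at 8.0 bpw ≈ 70 GB -- which is why 4-5 bpw is
# usually the sweet spot for fitting big models on consumer hardware.
```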
Personally I always use the largest model I can. With Q2 quantization the 70B models I've used occasionally give bad results, but they often feel smarter than a 35B at Q4. It's of course difficult to compare models from completely different families, e.g. command-r vs llama, and there aren't that many options in the 30B range. Still, I'd take a 35B Q4 over a 12B Q8 any day, and a 12B Q4 over a 7B Q8, etc. In the end I think you'll have to test for yourself and see which model and quant combination gives the best results at an inference speed you consider usable.
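If you want to test for yourself, a quick-and-dirty way to compare quants is to time generation on the same prompt (a hypothetical sketch with llama-cpp-python; the file names are placeholders, and a real comparison should also eyeball output quality, not just speed):

```python
# Quick-and-dirty tokens/sec comparison between two model/quant combinations.
# Model file names are placeholders; quality still has to be judged by eye.
import time
from llama_cpp import Llama

def tokens_per_second(model_path: str, prompt: str, n_tokens: int = 128) -> float:
    llm = Llama(model_path=model_path, n_gpu_layers=-1, n_ctx=2048, verbose=False)
    start = time.time()
    out = llm(prompt, max_tokens=n_tokens)
    generated = out["usage"]["completion_tokens"]  # tokens actually produced
    return generated / (time.time() - start)

prompt = "Write a short story about a robot learning to paint."
for path in ("model-35B.Q4_K_M.gguf", "model-12B.Q8_0.gguf"):
    print(path, f"{tokens_per_second(path, prompt):.1f} tok/s")
```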
Pulled a 7B Q4 model just now and woah, yeah, they really are a lot better. I guess the smaller models really are just for devices with less than 1 GB of RAM to spare... like my phone, which runs Llama 3.2 3B just fine...