
LocalLLaMA


Community to discuss LLaMA, the large language model created by Meta AI.

This is intended to be a replacement for r/LocalLLaMA on Reddit.


I’ve been using llama.cpp, gpt-llama, and chatbot-ui for a while now, and I’m very happy with that setup. However, I’m now looking into a more stable setup that runs only on the GPU. Is llama.cpp still a good candidate for that?

[–] gh0stcassette 1 points 1 year ago* (last edited 1 year ago) (2 children)

Llama.cpp recently added CUDA acceleration for generation (previously only prompt ingestion was GPU-accelerated), and in my experience it's faster than GPTQ unless you can fit absolutely 100% of the model in VRAM. If even a single layer has to be offloaded to the CPU, GPTQ's performance immediately drops by something like 30-40% compared to llama.cpp with the equivalent CPU offload.
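
As a minimal sketch of what full GPU offload looks like through the llama-cpp-python bindings (the model path, layer count, and context size below are placeholders, and the package has to be built with cuBLAS support for n_gpu_layers to do anything):

```python
from llama_cpp import Llama

# n_gpu_layers controls how many transformer layers are offloaded to the GPU.
# Setting it at or above the model's layer count keeps the whole model in VRAM
# (a 13B LLaMA model has 40 repeating layers; 43 is a common value to also
# cover the non-repeating layers).
llm = Llama(
    model_path="./models/13B/ggml-model-q4_0.bin",  # placeholder path, point at your own model
    n_gpu_layers=43,
    n_ctx=2048,
)

output = llm("Q: Why offload every layer to the GPU? A:", max_tokens=64)
print(output["choices"][0]["text"])
```

Dropping n_gpu_layers below the layer count gives you the partial-offload case described above, where the remaining layers run on the CPU.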

[–] [email protected] 2 points 1 year ago (1 children)

Haven't been able to test that out, but saw the change. Particularly interesting for my use case.

[–] gh0stcassette 1 points 1 year ago* (last edited 1 year ago)

What use case would that be?

I can get around 8 tokens/s running 13B models in Q3_K_L quantization on my laptop, about 2.2 for 33B, and 1.5 for 65B (I bought 64 GB of RAM to be able to run the larger models lol). 7B was STUPID fast because the entire model fits inside my 8 GB GPU, but 7B models mostly suck (Wizard-Vicuna-Uncensored is decent; every other one I've tried was not).
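
If you want to judge what will fit on your own card, here's a rough back-of-the-envelope sketch. The file size, layer count, and overhead figures are assumptions for illustration, not measured values, and the uniform-layer-size assumption is only approximately true:

```python
# Back-of-the-envelope estimate of how many layers fit in VRAM.
# Assumes layers are roughly equal in size and reserves some headroom
# for the context and scratch buffers.

def layers_that_fit(model_file_gb: float, n_layers: int, vram_gb: float,
                    overhead_gb: float = 1.0) -> int:
    per_layer_gb = model_file_gb / n_layers        # approximate size of one layer
    usable_gb = max(vram_gb - overhead_gb, 0.0)    # budget left after overhead
    return min(n_layers, int(usable_gb / per_layer_gb))

# Hypothetical numbers: a ~6.9 GB 13B Q3_K_L file, 40 layers, an 8 GB card
print(layers_that_fit(6.9, 40, 8.0))  # -> 40, i.e. nearly everything fits
```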