this post was submitted on 14 Feb 2024
Technology

61039 readers
5331 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, to ask if your bot can be added please contact us.
  9. Check for duplicates before posting, duplicates may be removed
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
[–] pennomi 11 points 11 months ago (2 children)

Pretty easy to roll your own with Kobold.cpp and various open model weights found on HuggingFace.

[–] TipRing 8 points 11 months ago

Also, for an interface I'd recommend KoboldLite for writing or assistant use, and SillyTavern for chat/RP.

[–] [email protected] 4 points 11 months ago (1 children)

I tried oobabooga, and it basically always crashes when I try to generate anything, no matter which model I use. But honestly, as far as I can tell, all the good models require absurd amounts of VRAM, far more than consumer cards have, so you'd need at least a small GPU server farm to host them locally with any reliability, unless you settle for practically nonexistent context sizes.
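For a rough sense of the VRAM claim, you can estimate the memory a model's weights need from its parameter count and bit width. This is a back-of-the-envelope sketch; the 1.2× overhead factor is my own assumption to cover the KV cache and activations, and real usage varies with context length:

```python
def vram_estimate_gb(n_params_billions: float,
                     bits_per_weight: float,
                     overhead: float = 1.2) -> float:
    """Rough VRAM (GB) to hold a model's weights.

    overhead=1.2 is an assumed ~20% margin for KV cache and
    activations; actual usage depends heavily on context size.
    """
    weight_bytes = n_params_billions * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

# A 7B model at full 16-bit precision: ~16.8 GB (too big for most consumer cards)
print(round(vram_estimate_gb(7, 16), 1))
# The same model quantised to 4-bit: ~4.2 GB (fits an 8 GB card)
print(round(vram_estimate_gb(7, 4), 1))
```

This is why quantisation matters so much for local hosting: dropping from 16-bit to 4-bit weights cuts the footprint by roughly 4×.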

[–] [email protected] 4 points 11 months ago

You'll want to use a quantised model on your GPU. You can also run on the CPU and offload some of the layers to the GPU with llama.cpp (an option in oobabooga); llama.cpp uses models in the GGUF format.
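As a concrete illustration of the offloading approach, here is roughly what a llama.cpp invocation looks like. The model path is hypothetical, and the exact binary name depends on your build (recent builds ship `llama-cli`; older ones used `main`):

```shell
# Run a quantised GGUF model, offloading 20 transformer layers to the GPU
# and keeping the rest on the CPU. Lower -ngl if you run out of VRAM.
./llama-cli \
  -m ./models/model-q4_k_m.gguf \  # hypothetical path to a 4-bit GGUF file
  -ngl 20 \                        # number of layers to offload to the GPU
  -c 4096 \                        # context size in tokens
  -p "Hello, how are you?"
```

The `-ngl` knob is the key trade-off: more offloaded layers means faster generation but more VRAM used, so you tune it to whatever your card can hold.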