this post was submitted on 25 Apr 2024
Memes
[–] [email protected] 50 points 8 months ago* (last edited 8 months ago) (2 children)

The quantized model you can run locally works decently, and since everything stays on your machine, they can't read any of it, which is nice.

I use this one specifically: https://huggingface.co/lmstudio-community/Meta-Llama-3-8B-Instruct-GGUF/blob/main/Meta-Llama-3-8B-Instruct-Q4_K_M.gguf

If you're looking for relatively user-friendly software to run it, you can look at GPT4All (open source) or LM Studio.
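By the way, the "Q4" in that file name means the weights are stored as 4-bit integers with per-block scales, which is why an 8B model fits in a few GB of RAM. The real Q4_K_M format uses fancier super-blocks with per-block minimums, but the core trick looks roughly like this (a toy sketch, not the actual GGUF code):

```python
# Toy sketch of block-wise 4-bit quantization, the idea behind "Q4" in
# Q4_K_M. Each block of weights is stored as small integers plus one
# float scale; the real GGUF format is more elaborate than this.

def quantize_q4(weights, block_size=32):
    """Map each block of floats to 4-bit ints plus one float scale."""
    blocks = []
    for i in range(0, len(weights), block_size):
        block = weights[i:i + block_size]
        # Scale so the largest weight maps to +/-7 (fits in 4 bits);
        # fall back to 1.0 for an all-zero block to avoid dividing by 0.
        scale = max(abs(w) for w in block) / 7 or 1.0
        q = [max(-8, min(7, round(w / scale))) for w in block]
        blocks.append((scale, q))
    return blocks

def dequantize_q4(blocks):
    """Reconstruct approximate floats from scales and 4-bit ints."""
    return [q * scale for scale, qs in blocks for q in qs]
```

The reconstruction error per weight is at most half a scale step, which is why quality only degrades a little while memory use drops to roughly a quarter of fp16.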

[–] [email protected] 16 points 8 months ago

If you're ready to tinker a bit, I can recommend Ollama for the backend and Open WebUI for the frontend. They can both run on the same machine.

The advantage is that you can use your GPU for inference, which is a lot faster.
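Once Ollama is running, anything can talk to it over its local REST API (it listens on http://localhost:11434 by default), which is exactly what Open WebUI does. A minimal sketch of a one-shot completion request; the model name "llama3:8b" is an assumption, use whatever `ollama list` shows you:

```python
# Hedged sketch: building a request for a local Ollama server's
# /api/generate endpoint. Nothing here is sent until you uncomment
# the urlopen call at the bottom (and have Ollama running).
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model, prompt):
    """Build an HTTP request for a one-shot, non-streaming completion."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # one JSON object back instead of a stream
    }).encode("utf-8")
    return urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )

# With the server running, this would print the generated text:
# with urllib.request.urlopen(build_request("llama3:8b", "Say hi")) as r:
#     print(json.loads(r.read())["response"])
```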

[–] nialv7 12 points 8 months ago (1 children)

Pretty sure LM Studio is not open source.

[–] [email protected] 7 points 8 months ago

You're right. I thought they were, but I checked their GitHub and LM Studio itself isn't open source.