PSA: give open-source LLMs a try, folks. If you're on Linux or macOS, ollama makes it incredibly easy to try most of the popular open-source LLMs like Mistral 7B, Mixtral 8x7B, CodeLlama, etc. Obviously it's faster if you have a CUDA/ROCm-capable GPU, but it works in CPU mode too (albeit slowly if the model is huge), provided you have enough RAM.
You can combine that with a UI like ollama-webui or a text-based UI like oterm.
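If it helps to see how little glue is involved, here's a rough sketch of talking to a locally running ollama server from Python. It assumes the default port (11434) and that you've already pulled a model first (e.g. `ollama pull mistral`); the endpoint and fields below reflect ollama's documented local API, but treat this as a sketch rather than gospel:

```python
# Minimal sketch: query a local ollama server over its HTTP API.
# Assumes ollama is installed and running on its default port 11434,
# and that a model has already been pulled, e.g. `ollama pull mistral`.
import json
import urllib.request

def ask(model: str, prompt: str) -> str:
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # return one JSON object instead of a token stream
    }).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask("mistral", "Explain what a quine is in one sentence."))
```

Nothing beyond the standard library is needed, which is kind of the point: ollama handles the model download, quantization format, and GPU/CPU selection for you.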
I spent the better part of a day trying to set up llama.cpp with "wizard vicuna unrestricted" and was unable to, and I've got quite a tech background. This was at someone's suggestion; I'm hoping yours is easier lol.
ollama should be much easier to set up!
Thanks lol, I'm looking forward to it so I can stop contributing to OpenAI.