this post was submitted on 23 Nov 2024
298 points (99.7% liked)
Technology
During that time, you can easily install Ollama on an old computer.
With a client like Oatmeal, you can save, reload, or delete your sessions as you wish, so your model remembers what you want.
I am running llama3.1:8b; it's good enough for day-to-day use.
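For anyone who wants to try it, the usual flow is the official install script plus one `run` command (sketched below assuming a Linux box; the model download is a few GB):

```shell
# Install Ollama via the official install script (Linux/macOS)
curl -fsSL https://ollama.com/install.sh | sh

# Pull the 8B model (first run downloads it) and start an interactive chat
ollama run llama3.1:8b
```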
My old computer is apparently "not good enough" for Windows 11, but it's certainly good enough to run my personal AI on Linux!
I tried llama3.1:8b and it's absolutely horrible.
You can use larger "open" models through free or dirt-cheap APIs though.
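Most of those services expose an OpenAI-compatible chat endpoint, so switching to a bigger hosted model is just a small JSON payload against a different base URL. A minimal sketch (the base URL, API key, and model name below are placeholders, not recommendations):

```python
import json
import urllib.request


def build_chat_request(model: str, user_message: str) -> dict:
    """Build an OpenAI-style chat-completions payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }


def send(base_url: str, api_key: str, payload: dict) -> bytes:
    """POST the payload to an OpenAI-compatible /chat/completions endpoint."""
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()


# Hypothetical usage: swap in your provider's URL, key, and model name.
payload = build_chat_request("some-large-open-model", "Hello!")
# send("https://api.example.com/v1", "sk-...", payload)
```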
TBH, local LLMs are still kinda "meh" unless you have a high-VRAM GPU. I agree that 8B is kinda underwhelming, but the step up to something like Qwen 14B is enormous.