[–] PushButton 15 points 4 days ago (4 children)

During that time, you can easily install Ollama on an old computer.

With a client like Oatmeal, you can save, reload, or delete your sessions as you wish, so your model remembers what you want.

I am running llama3.1:8b; it's good enough for day-to-day operations.

  • Need for spyware: 0
  • Need to take screenshots of my desktop: 0
  • Need to buy another computer for the hype chipset: 0
  • Need for Microsoft bullshit: 0

My old computer is apparently "not good enough" for Windows 11, but it's surely good enough for my personal AI running on Linux!
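
If anyone wants to see how little is involved, here's a minimal sketch of querying that local llama3.1:8b through Ollama's HTTP API (assuming the default port 11434 and that the model has already been pulled with `ollama pull llama3.1:8b`):

```python
# Minimal sketch: ask a question to a local Ollama instance, no cloud involved.
# Assumes Ollama is running on its default port (11434) and llama3.1:8b is pulled.
import json
import urllib.request

def ask(prompt: str, model: str = "llama3.1:8b") -> str:
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # return one complete response instead of a token stream
    }).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

print(ask("Summarize my to-do list for today in three bullet points."))
```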

[–] x00z 0 points 3 days ago (1 children)

I tried llama3.1:8b and it's absolutely horrible.

[–] brucethemoose 1 points 2 days ago* (last edited 2 days ago)

You can use larger "open" models through free or dirt-cheap APIs though.

TBH local LLMs are still kinda "meh" unless you have a high-VRAM GPU. I agree that 8B is kinda underwhelming, but the step up to something like Qwen 14B is enormous.
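
For example, a rough sketch of hitting one of those hosted open models through an OpenAI-compatible endpoint; the base URL, environment variable, and model name here are placeholders for whatever provider and model you actually pick:

```python
# Minimal sketch: call a hosted "open" model via an OpenAI-compatible API.
# The endpoint, credential variable, and model name are placeholders.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example-provider.com/v1",  # placeholder endpoint
    api_key=os.environ["PROVIDER_API_KEY"],          # placeholder credential
)

reply = client.chat.completions.create(
    model="some-large-open-model",  # placeholder for the provider's model ID
    messages=[{"role": "user", "content": "Roughly how much VRAM does a 14B model need?"}],
)
print(reply.choices[0].message.content)
```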
