this post was submitted on 30 Jan 2024
505 points (93.5% liked)

[–] [email protected] 13 points 10 months ago (2 children)

Use local and open source models if you care about privacy.

[–] [email protected] 8 points 10 months ago

I think people who use local and open source models would probably already know not to feed passwords to ChatGPT.

[–] [email protected] 7 points 10 months ago* (last edited 8 months ago) (1 children)

I absolutely agree. Use something like Ollama. Do keep in mind that it takes a lot of computing resources to run these models: about 5GB of RAM, and about a 3GB download, for the smaller ollama-uncensored models.
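For anyone curious, getting started is basically two commands. This is a minimal sketch assuming Ollama is already installed; `llama2-uncensored` is one example model from the Ollama library, substitute whichever model fits your hardware:

```shell
# Download the model weights (a few GB for the smaller variants).
ollama pull llama2-uncensored

# Run it interactively, or pass a one-off prompt on the command line.
# Everything stays on your machine; no data leaves your box.
ollama run llama2-uncensored "Why do local models help with privacy?"
```

`ollama list` shows what you have installed, and `ollama rm <model>` frees the disk space back up.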

[–] [email protected] 1 points 10 months ago (1 children)

It's not great, but an old GTX GPU can be had cheaply if you look around refurbished listings; as long as there's a warranty, you're golden. Stick it in a 10-year-old Xeon workstation off eBay and you can easily have a machine with 8 cores, 32GB RAM, and a solid GPU for under $200.

[–] [email protected] 1 points 10 months ago* (last edited 10 months ago) (1 children)

It's the RAM requirement that stings rn. I believe I've got the specs, but I was told, or misremember, a 64GB RAM requirement for a model.

[–] [email protected] 0 points 10 months ago

IDK what you've read, but I have 24GB and can use DreamBooth and fine-tune Mistral no problem. RAM is only needed briefly to load the model before it's passed to VRAM, iirc. VRAM is the main constraint: you need 8GB as an absolute minimum, and even my 24GB is often not enough for some high-end stuff.
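The 64GB figure probably comes from running a large model unquantized. A rough back-of-envelope for the weights alone (ignoring activations and KV cache, which add more on top) makes the difference clear; this is a sketch, and the function name and numbers are illustrative, not from any specific tool:

```python
def model_memory_gib(n_params_billions: float, bits_per_param: int) -> float:
    """Memory to hold just the weights: params * bits / 8, in GiB."""
    bytes_total = n_params_billions * 1e9 * bits_per_param / 8
    return bytes_total / 1024**3

# A 7B model at fp16 vs 4-bit quantization:
print(round(model_memory_gib(7, 16), 1))  # ~13.0 GiB
print(round(model_memory_gib(7, 4), 1))   # ~3.3 GiB
```

So a quantized 7B model fits comfortably in 8GB of VRAM, while a 70B model at fp16 really would need well over 100GiB, which is where the scary RAM numbers come from.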

Plus RAM is actually really cheap compared to a GPU. Remember it doesn't have to be super fancy RAM either; DDR3 is fine if you're not gaming on something like a Ryzen or another modern platform.