Or just straight up install https://ollama.com
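Once it's installed it's basically one `ollama pull <model>` away from a working chat, and it also serves a local HTTP API on port 11434. A minimal sketch in Python, assuming you've already pulled a model (the `llama3` tag here is just an example):

```python
# Minimal sketch: query a locally running Ollama server over its HTTP API.
# Assumes Ollama is installed, serving on its default port (11434), and
# that a model has already been pulled, e.g. `ollama pull llama3`.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",       # any model tag you've pulled locally
        "prompt": "Explain VRAM in one sentence.",
        "stream": False,         # return one JSON object instead of a stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```

Everything stays on your machine; nothing leaves localhost.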
I like Ollama and recommend it for tinkering, but I admit this "LLM Explorer" is quite neat thanks to sections like "LLMs Fit 16GB VRAM".
Ollama just works, but it doesn't help you pick which model best fits your needs.
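The "fits in X GB of VRAM" question mostly comes down to parameter count times bits per weight. A rough back-of-envelope sketch — the function and the 2 GB overhead figure are just illustrative assumptions, and real usage also depends on context length and KV cache:

```python
# Back-of-envelope sketch: will a quantized model's weights fit in VRAM?
# Rule of thumb only; runtime overhead and KV cache vary by model and context.
def fits_in_vram(params_billions: float, bits_per_weight: float,
                 vram_gb: float, overhead_gb: float = 2.0) -> bool:
    """True if the weights plus a fixed overhead budget fit in VRAM."""
    weights_gb = params_billions * bits_per_weight / 8  # GB for the weights
    return weights_gb + overhead_gb <= vram_gb

# An 8B model at 4-bit quantization on a 16 GB card: ~4 GB of weights.
print(fits_in_vram(8, 4, 16))   # True
# A 70B model at 4-bit quantization on the same card: ~35 GB of weights.
print(fits_in_vram(70, 4, 16))  # False
```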
Why would I put in the effort to install all this locally? Websites win in terms of convenience.
I don't think I understand your point. Are you saying there's no benefit to running locally, and that websites or APIs are more convenient?
I already have Stable Diffusion on a local machine. I was trying to find motivation to install an LLM locally. You answered my question in a different response.
I want to work on my stuff in peace and in private, without worrying about a company grabbing it, using it for themselves, and giving/selling it to other outfits, including the government. "If you have nothing to hide..." is bullshit and needs to die.
Good point. Everything you feed into ChatGPT is stored for future reference.