Technology · submitted 14 Feb 2024 · 157 points (93.4% liked)
Shame they leave GTX owners out in the cold again.
2xxx too. It's only available for 3xxx and up.
Just use Ollama with Ollama WebUI.
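For anyone who wants to try this route, here's a minimal sketch that talks to a local Ollama server over its HTTP API (assuming Ollama is running on its default port 11434 and you've already pulled a model with `ollama pull llama2`):

```python
# Minimal sketch: one-shot prompt against a local Ollama server.
# Assumes Ollama is running locally and `ollama pull llama2` is done.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama2", "prompt": "Why is the sky blue?", "stream": False},
)
resp.raise_for_status()
print(resp.json()["response"])
```

Ollama WebUI just gives you a chat front end on top of that same server, so this is the bare-bones way to check the backend is working.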
The whole point of the project was to use the Tensor cores. There are a ton of other implementations for regular GPU acceleration.
There were CUDA cores before RTX. I can run LLMs on my CPU just fine.
There are a number of local AI LLMs that run on any modern CPU. No GPU needed at all, let alone RTX.
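To make that concrete, a CPU-only setup can be as small as this sketch using the llama-cpp-python bindings (the GGUF file name is a placeholder for whatever quantized model you've downloaded):

```python
# Minimal CPU-only sketch with llama-cpp-python.
# The model path is a placeholder; any local GGUF file works.
from llama_cpp import Llama

llm = Llama(
    model_path="./llama-2-7b.Q4_K_M.gguf",  # placeholder path
    n_gpu_layers=0,  # 0 = keep everything on the CPU
)
out = llm("Q: Name the planets in the solar system. A:", max_tokens=64)
print(out["choices"][0]["text"])
```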
This statement is so wrong. I have Ollama running the llama2 model decently on a 970 card. Is it super fast? No. Is it usable? Yes, absolutely.
Source?