this post was submitted on 04 Jan 2025
20 points (100.0% liked)
PC Master Race
15153 readers
1 user here now
A community for PC Master Race.
Rules:
- No bigotry: Including racism, sexism, homophobia, transphobia, or xenophobia. Code of Conduct.
- Be respectful. Everyone should feel welcome here.
- No NSFW content.
- No Ads / Spamming.
- Be thoughtful and helpful: even with ‘stupid’ questions. The world won’t be made better or worse by snarky comments schooling naive newcomers on Lemmy.
Notes:
- PCMR Community Name - Our Response and the Survey
founded 2 years ago
I went from a 7th-gen i7 to a 12th-gen a year and a half back, and it was a massive difference overall. I built it as an AI machine and run Linux with no gaming. I'm not sure when network hardware and drivers went multi-threaded, but that change was huge by comparison: I can pull 10 GB in just a few minutes at most, when it took around an hour on the old machine, on the same internet connection and the same rather old OpenWRT router.
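As a rough sanity check on those numbers (the 3-minute and 1-hour figures below are my own approximations of "a few minutes" and "around an hour"):

```python
def throughput_mbps(gigabytes: float, seconds: float) -> float:
    """Average download throughput in megabits per second."""
    return gigabytes * 8000 / seconds

# 10 GB in ~3 minutes on the new machine
new = throughput_mbps(10, 3 * 60)    # ~444 Mbit/s
# 10 GB in ~1 hour on the old machine
old = throughput_mbps(10, 60 * 60)   # ~22 Mbit/s
print(round(new), round(old))
```

So the same line went from ~22 Mbit/s effective to ~444 Mbit/s, roughly a 20x difference, without touching the router.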
With 64 GB of DDR5, 20 logical threads, and a 16 GB GPU, I can run much larger quantized LLMs and diffusion models than most people get to play with. If you care about AI stuff, definitely get a 24 GB+ GPU, as many logical cores as you can reasonably afford, and as much of the fastest memory as you can fit. No joke, and no skin in the shill space. I just wish I could run Flux cooler and faster for diffusion. I'd also love to try Command R and other even larger models, but I'm limited by total compute and memory. Llama.cpp can split LLM loads between the CPU and GPU, so yeah, I'm turning all of this up to 11 regularly. Skip Intel 13th and 14th gen and the latest; they're junk. 12th gen is the last decent, reliable Intel hardware.
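For anyone wondering why the 24 GB+ card matters, here's a back-of-the-envelope VRAM estimate for quantized models. The ~4.8 bits/weight figure is a rough approximation for a Q4_K-style quant, and the 20% overhead for KV cache and activations is my own assumption:

```python
def quantized_vram_gb(params_billion: float, bits_per_weight: float = 4.8,
                      overhead: float = 1.2) -> float:
    """Rough VRAM needed: weight storage at the given quantization,
    plus a fudge factor for KV cache and activations (assumption)."""
    return params_billion * bits_per_weight / 8 * overhead

print(round(quantized_vram_gb(13), 1))  # 13B model: ~9.4 GB, fits a 16 GB card
print(round(quantized_vram_gb(35), 1))  # 35B (Command R class): ~25 GB, too big even for 24 GB
```

That's where llama.cpp's layer offloading (the `-ngl` / `--n-gpu-layers` option) comes in: whatever doesn't fit in VRAM runs from system RAM on the CPU, slower but workable.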