this post was submitted on 31 Jan 2025
Technology
Some fanboys have been buying 192 GB Macs for AI. It's one way to throw money in the bin
192 GB RAM? I can see how it's convenient as really fast cached or temporary storage, if we ignore the price.
Practically I'm not sure why people would need anything above 24GB. Especially with Apple prices.
It's for running AI on the GPU. You need a really expensive PC GPU to get more than like 16 GB of VRAM, so the bottleneck for large AI models is swapping data in and out of system RAM over PCIe.
The Mac has an SoC with unified memory, where the GPU can access all 192 GB at full speed, which is perfect for AI workloads where you need the GPU to access all the RAM. There's always a tradeoff: PC GPUs have faster processors (since they have a way bigger power budget), but the Mac GPU has faster memory access, so it's not always a slam-dunk which is better.
APUs/Integrated GPUs on PCs also have unified memory but they always targeted the low end so aren't as useful.
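Whether the unified-memory tradeoff pays off comes down to whether a model's weights fit in GPU-accessible memory at all. A minimal sketch of that arithmetic, assuming an illustrative 70B-parameter model (the parameter count and quantization levels here are example values, not any specific product's spec):

```python
# Rough memory footprint for loading an LLM's weights at a given
# quantization level. Ignores KV cache and runtime overhead, which
# add more on top.

def model_weight_gb(n_params_billion: float, bits_per_weight: int) -> float:
    """GB needed just to hold the weights."""
    return n_params_billion * 1e9 * bits_per_weight / 8 / 1e9

# A 70B model at 16-bit needs ~140 GB -- far beyond any single consumer
# GPU's VRAM, but it fits in 192 GB of unified memory.
print(model_weight_gb(70, 16))  # 140.0
# At 4-bit the same model shrinks to ~35 GB, which some PC GPUs can hold.
print(model_weight_gb(70, 4))   # 35.0
```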
To run, say, the full-fat version of DeepSeek R1, you need to stick something like eight Mac minis together.
A single device doesn't seem capable of that right now, and it's not clear one ever will be.
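The "eight Mac minis" figure checks out as a back-of-envelope estimate, assuming R1's roughly 671B total parameters and 64 GB of RAM per Mac mini (the top configuration at the time; both figures are assumptions for illustration):

```python
# Back-of-envelope check on the "eight Mac minis" claim.

PARAMS_B = 671      # assumed DeepSeek R1 total parameters, in billions
MINI_RAM_GB = 64    # assumed RAM per Mac mini (top config)
N_MINIS = 8

cluster_ram = MINI_RAM_GB * N_MINIS   # 512 GB pooled across the cluster
weights_4bit = PARAMS_B * 4 / 8       # ~335 GB at 4-bit quantization
weights_8bit = PARAMS_B * 8 / 8       # ~671 GB at 8-bit

# 512 GB fits the 4-bit weights with headroom for KV cache and overhead,
# but not the 8-bit version -- hence needing the whole cluster.
print(cluster_ram, weights_4bit, weights_8bit)
```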
If it's for AI, loading huge models is something you can do with Macs but not easily in any other way.
I'm not saying many people have a use case for them at all, but if you want to run 60 GB models locally, a whole 192 GB Mac Studio is cheaper than the Nvidia GPU alone you'd need to run that.
I've run them on Intel CPUs before. When you put a CPU with more than two memory channels and a several-hundred-watt power budget up against a beefed-up mobile CPU, it's not a fair fight.
Second-hand Xeons are cheaper though
I'm talking about running them on GPU, which favours the GPU even when the comparison is between an AMD Epyc and a mediocre GPU.
If you want to run a large version of deepseek R1 locally, with many quantized models being over 50GB, I think the cheapest Nvidia GPU that fits the bill is an A100 which you might find used for 6K.
For well under that price you can get a whole Mac Studio with those 192 GB the first poster in this thread mentioned.
I'm not saying this is for everyone, it's certainly not for me, but I don't think we can dismiss that there is a real niche where Apple has a genuine value proposition.
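One way to frame that value proposition is cost per GB of GPU-accessible memory. A sketch with ballpark numbers (the ~$6K used A100 figure is from this thread; the Mac Studio price is an assumed example configuration, not a quote):

```python
# Rough cost per GB of GPU-accessible memory. All prices are
# illustrative ballpark figures.

options = {
    "used NVIDIA A100 80GB": (6000, 80),   # (price USD, memory GB)
    "Mac Studio 192GB": (5600, 192),       # assumed example config price
}

cost_per_gb = {name: price / mem for name, (price, mem) in options.items()}
for name, cost in cost_per_gb.items():
    print(f"{name}: ~${cost:.0f} per GB of GPU-accessible memory")
```

By this crude measure the unified-memory machine comes out several times cheaper per GB, though the A100 still wins on raw compute and memory bandwidth.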
My old flatmate has a PhD in NLP and used to work in research, and he'd have gotten soooo much use out of >100 GB of RAM accessible to the GPU.
I had found one for about 400 recently, a bit far away tho. I ended up going with a GPU closer by. I don't need that many GBs
Yep, unless you're working with huge amounts of stolen data, a cloud solution is much cheaper. Or even just swapping to a standard GPU setup