this post was submitted on 01 Dec 2024
99 points (81.5% liked)
you are viewing a single comment's thread
Looks like it has 32B in the name, so you'd need enough RAM to hold 32 billion weights plus activations (the current values for the layer being run right now, which I think should be less than a gigabyte). The weights are probably 16-bit floats to start with, so something like 64 gigabytes, but if you start quantizing to cram each weight into fewer bits, you can go down to about 4 bits per weight, which is more like 16 gigabytes of memory to run (a slightly worse version of) the model.
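For anyone who wants the arithmetic spelled out, here's a rough weights-only sketch (parameter count times bits per weight; it ignores activations, KV cache, and runtime overhead, and the exact figures depend on the quantization format):

```python
# Back-of-envelope memory math for a 32B-parameter model (weights only).
PARAMS = 32e9

def weight_memory_gb(bits_per_weight: float) -> float:
    """Memory needed just to hold the weights, in gigabytes."""
    return PARAMS * bits_per_weight / 8 / 1e9

print(f"fp16 (16-bit): {weight_memory_gb(16):.0f} GB")  # ~64 GB
print(f"8-bit quant:   {weight_memory_gb(8):.0f} GB")   # ~32 GB
print(f"4-bit quant:   {weight_memory_gb(4):.0f} GB")   # ~16 GB
```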
So you're telling me there's a chance.
I think there are consumer-grade GPUs that can run this on a single card with enough quantization. Or if you want to run it on CPU, you can buy and plug in enough DIMMs for only a somewhat large amount of money.
Pulled whatever is available on Ollama by this name and it seems to just fit on a 3090. Takes 23GB VRAM.
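That 23 GB figure roughly lines up with the weights-only math above once you add the extras: common "4-bit" quants actually average closer to 5 bits per weight, and the KV cache plus compute buffers eat a few more gigabytes. A rough sketch (the bits-per-weight and overhead numbers below are ballpark assumptions, not measurements from this particular model):

```python
# Rough estimate of why a 4-bit quant of a 32B model lands around 23 GB on a 24 GB card.
# All constants below are ballpark assumptions, not measured values.
PARAMS = 32e9
BITS_PER_WEIGHT = 4.85   # typical effective size of a "4-bit" K-quant (assumption)
KV_CACHE_GB = 2.0        # depends heavily on context length (assumption)
OVERHEAD_GB = 1.5        # compute buffers, CUDA context, etc. (assumption)

weights_gb = PARAMS * BITS_PER_WEIGHT / 8 / 1e9
total_gb = weights_gb + KV_CACHE_GB + OVERHEAD_GB
print(f"weights ~{weights_gb:.1f} GB, total ~{total_gb:.1f} GB")  # ~19.4 GB weights, ~23 GB total
```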