this post was submitted on 01 Jun 2024
105 points (88.9% liked)
Technology
you are viewing a single comment's thread
The network architecture seems to create a virtualized, higher-dimensional representation on top of the actual weights, so the precision of the individual weights really doesn't matter much as long as the quantization happens during pretraining.
If it's done post-training, it degrades the precision of an already encoded network, which is sometimes acceptable but always lossy. But when it's done during pretraining, it actually seems to be a net improvement over higher-precision weights, even if you throw efficiency concerns out the window.
You can see this in the perplexity graphs in the BitNet b1.58 paper.
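Roughly how I understand the weight quantization there works, as a sketch rather than the official implementation (the class and function names below are mine, not from the paper): weights get squashed to {-1, 0, +1} with an absmean scale on every forward pass, and a straight-through estimator keeps gradients flowing to the full-precision latent weights, so the model learns to live with ternary weights from the start instead of having precision stripped away afterwards.

```python
# Sketch of quantization-aware pretraining with ternary (1.58-bit) weights.
# Not the official BitNet code; names like TernaryLinear are made up here.
import torch
import torch.nn as nn
import torch.nn.functional as F

def absmean_ternary(w: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Quantize weights to {-1, 0, +1} scaled by their mean absolute value."""
    scale = w.abs().mean().clamp(min=eps)
    w_q = (w / scale).round().clamp(-1, 1)  # ternary values
    return w_q * scale                      # rescale so magnitudes roughly match

class TernaryLinear(nn.Linear):
    """Linear layer that trains against its quantized weights."""
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w_q = absmean_ternary(self.weight)
        # Straight-through estimator: forward uses w_q, gradients hit self.weight.
        w = self.weight + (w_q - self.weight).detach()
        return F.linear(x, w, self.bias)

# Usage: drop it in wherever a plain nn.Linear would go.
layer = TernaryLinear(256, 256)
out = layer(torch.randn(4, 256))

# Post-training quantization, by contrast, would just call absmean_ternary
# once on an already-trained weight matrix and hope the damage is small.
```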
None of those words are in the bible
No, but some alarmingly similar ideas are in the heretical stuff, actually.