this post was submitted on 01 Jun 2024
105 points (88.9% liked)

[–] [email protected] 31 points 5 months ago (6 children)

We invented multi-bit models so we could get more accuracy, since neural networks are based on human brains, which are 1-bit models themselves. A 2-bit neuron is 4 times as capable as a 1-bit neuron but only double the size and power requirements. This whole thing sounds like BS to me. But then again, maybe complexity is more efficient than per-unit capability, since that's the tradeoff.
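(The "4 times as capable" arithmetic above just counts representable states: they grow exponentially with bit width while storage grows linearly. A quick illustrative sketch, plain Python, nothing model-specific:)

```python
# States per weight grow as 2**bits, while storage cost grows only
# linearly in the number of bits.
def states(bits: int) -> int:
    return 2 ** bits

for bits in (1, 2, 4, 8):
    print(f"{bits} bits -> {states(bits)} states, storage x{bits}")
```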

[–] [email protected] 39 points 5 months ago

Human brains aren't binary. They send signals at a lot of different strengths, so "on" has a lot of possible values. The part of the brain that controls emotions considers a low but non-zero level of activation to be happy and a high level of activation to be angry.

It's not simple at all.

[–] Wappen 25 points 5 months ago (1 children)

Human brains aren't 1-bit models. Far from it, actually. I'm not an expert, but I know that neurons in the brain encode different signal strengths in their firing frequency.
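(The rate coding mentioned above can be sketched: a graded stimulus maps to a firing rate, and a downstream reader recovers a graded value by counting spikes in a window. A toy simulation with made-up rates and a Bernoulli approximation of Poisson spiking, purely illustrative:)

```python
import random

random.seed(0)  # fixed seed so the toy run is repeatable

def spikes_in_window(rate_hz: float, window_s: float = 1.0, dt: float = 0.001) -> int:
    # At each 1 ms step, fire with probability rate*dt (Bernoulli
    # approximation of a Poisson spike train).
    return sum(random.random() < rate_hz * dt for _ in range(int(window_s / dt)))

# A stronger stimulus -> higher firing rate -> more spikes counted,
# i.e. far more than one bit of information per neuron per window.
weak = spikes_in_window(5)    # ~5 spikes expected
strong = spikes_in_window(50)  # ~50 spikes expected
```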

[–] kromem 10 points 5 months ago* (last edited 5 months ago) (1 children)

The network architecture seems to create a virtualized hyperdimensional network on top of the actual network nodes, so the node precision really doesn't matter much as long as quantization occurs in pretraining.

If it's done post-training, it degrades the precision of the already encoded network, which is sometimes acceptable but always lossy. But done at the pretraining stage, it actually seems to be a net improvement over higher-precision weights, even if you throw efficiency concerns out the window.

You can see this in the perplexity graphs in the BitNet-1.58 paper.
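(For the curious: the ternary quantization in that paper is, as I read it, an "absmean" scheme: scale weights by their mean absolute value, then round and clip to {-1, 0, 1}. A simplified plain-Python sketch of that idea, not the paper's actual training code:)

```python
def absmean_ternary(weights, eps=1e-8):
    # Scale by the mean absolute weight, then round and clip each
    # value into the ternary set {-1, 0, 1}.
    gamma = sum(abs(w) for w in weights) / len(weights)
    return [max(-1, min(1, round(w / (gamma + eps)))) for w in weights]

w = [0.9, -0.05, 0.4, -1.2]
q = absmean_ternary(w)  # every entry ends up in {-1, 0, 1}
```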

[–] lunar17 6 points 5 months ago (1 children)

None of those words are in the bible

[–] kromem 2 points 5 months ago* (last edited 5 months ago)

No, but some alarmingly similar ideas are in the heretical stuff actually.

[–] buzz86us 4 points 5 months ago

We need to scale fusion

[–] [email protected] 2 points 5 months ago

Multi-bit models exist because that's how computers work, but there's been a lot of work on using e.g. fixed-point instead of floating-point for things like FPGAs, or shorter integer types, and the results are often more than good enough.
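(Fixed-point, for anyone unfamiliar: you represent reals as scaled integers so all the arithmetic stays in cheap integer hardware. A minimal Q8.8-style sketch, illustrative only:)

```python
FRAC_BITS = 8
SCALE = 1 << FRAC_BITS  # Q8.8 format: 8 fractional bits

def to_fixed(x: float) -> int:
    return int(round(x * SCALE))

def fixed_mul(a: int, b: int) -> int:
    # The product of two Q8.8 values carries 16 fractional bits,
    # so shift back down by FRAC_BITS to stay in Q8.8.
    return (a * b) >> FRAC_BITS

# 1.5 * 0.25 computed entirely in integer arithmetic:
result = fixed_mul(to_fixed(1.5), to_fixed(0.25)) / SCALE  # -> 0.375
```

The only float operations are at the boundaries; everything in between is integer multiply-and-shift, which is exactly why it suits FPGAs.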