this post was submitted on 21 Jan 2024
Hacker News
I feel like a broken record, but...
Seriously, the current large "language" models - or should I say, large syntax models? - are a technological dead end. They might find plenty of applications, but they certainly won't evolve into the "superhuman capabilities" of the tech bros' wet dreams.
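To illustrate what I mean by "syntax model": even a toy next-token predictor picks up fluent-looking surface patterns with zero grasp of meaning. This is a deliberately minimal bigram sketch (hypothetical corpus, purely illustrative - real LLMs are vastly bigger, but the training objective is still next-token prediction over surface form):

```python
# Toy bigram "syntax model": counts which token follows which.
# There is no representation of meaning anywhere - only surface statistics.
from collections import defaultdict, Counter

# Hypothetical three-sentence corpus, for illustration only.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count follower frequencies for each token.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_token(prev):
    """Greedy prediction: return the most frequent follower."""
    return follows[prev].most_common(1)[0][0]

def generate(start, length):
    """Chain greedy predictions into a 'fluent' but meaningless string."""
    out = [start]
    for _ in range(length - 1):
        out.append(next_token(out[-1]))
    return " ".join(out)

print(generate("the", 6))
```

The output looks locally grammatical, yet the model has no idea what a cat or a mat is - it only knows which token tends to come next. Scale changes how convincing this gets, not what it fundamentally is.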
At best, all that self-instruction will do is play whack-a-mole with hallucinations. At worst, the model will degenerate.
You'll need a different architecture to go meaningfully past that. Probably one that doesn't handle semantics as an afterthought, but as its own layer - a big, central one.