this post was submitted on 21 Jan 2024
7 points (81.8% liked)

Hacker News


This community serves to share top posts on Hacker News with the wider fediverse.

Rules

  0. Keep it legal
  1. Keep it civil and SFW
  2. Keep it safe for members of marginalised groups

founded 1 year ago

There is a discussion on Hacker News, but feel free to comment here as well.

top 2 comments
[–] TropicalDingdong 5 points 10 months ago

It's kind of funny, because right now, GPT-4 doesn't even achieve GPT-4 levels of performance.

It's a real issue that comes from not having access to the underlying models or how they were trained. We know they've repeatedly nerfed or broken this model.

[–] [email protected] 3 points 10 months ago

I feel like a broken record, but...

Seriously, the current large "language" models - or should I say, large syntax models? - are a technological dead end. They might find a lot of applications, but they certainly will not evolve into the "superhuman capabilities" of the tech bros' wet dreams.

At best, all that self-instruction will do is play whack-a-mole with hallucinations. At worst, the model will degenerate.

You'll need a different architecture to go meaningfully past that. Probably one that doesn't treat semantics as an afterthought, but as its own layer - a big, central one.