this post was submitted on 28 Jan 2025
552 points (97.1% liked)

you are viewing a single comment's thread
[–] Alphane_Moon 32 points 2 days ago (4 children)

I get it. I just didn't know that they were already using "beyond AGI" in their grifting copy.

[–] [email protected] 24 points 2 days ago (1 children)

Yeah, that started a week or two ago. Altman dropped the AGI promise too soon, so now he's having to become a sci-fi author to keep the con cooking.

[–] [email protected] 13 points 2 days ago (2 children)

now he's having to become a sci-fi author to keep the con cooking.

Dude thinks he's Asimov, but anyone paying attention can see he's just an L. Ron Hubbard.

[–] [email protected] 5 points 2 days ago

Hell, I'd help pay for the boat if he'd just fuck off to go spend the rest of his life floating around the ocean.

[–] [email protected] -3 points 2 days ago (1 children)

You say that like Hubbard wasn't brilliant, morals notwithstanding.

[–] [email protected] 6 points 2 days ago

He sure as shit wasn't a brilliant writer. He was more endowed with the cunning of a proto-Trump huckster-weasel.

[–] [email protected] 5 points 2 days ago (1 children)

Well, it does make sense in that the window during which we'd have AGI would be pretty short, because AGI would soon go beyond human-level intelligence. That said, LLMs are certainly not going to get there, assuming AGI is even possible at all.

[–] [email protected] 3 points 2 days ago

We're never getting AGI from any current or planned LLM or ML framework.

These LLMs and ML systems are above human intelligence, but only within limited, narrow domains.

[–] MsPenguinette 4 points 2 days ago

https://en.m.wikipedia.org/wiki/Superintelligence#Feasibility_of_artificial_superintelligence

"Artificial superintelligence" is a term that is getting bandied about nowadays.

[–] [email protected] 4 points 2 days ago

Ah, OK. Yeah, the "beyond" thing is likely pulled straight out of the book I mentioned in my edit.