this post was submitted on 16 Jun 2024
-66 points (10.7% liked)

top 15 comments
[–] pdxfed 15 points 6 months ago

And by super intelligence we mean connected to a lot of things and able to wreak significant havoc, but absolutely fucking worthless for complex thought.

#Skynub

[–] technocrit@lemmy.dbzer0.com 12 points 6 months ago

"scientist"

[–] Sanctus 9 points 6 months ago (2 children)

And what are the power consumption rates on these super AIs? Can we afford that with our power grids? Our current situation with existing AI is already getting murky.

[–] Heywaitaminute 6 points 6 months ago

Humanity: "Super AI, what's the best way to slow climate change?" Super AI: "Turn me off."

[–] SturgiesYrFase@lemmy.ml 4 points 6 months ago (1 children)

Don't worry, OpenAI has made a deal with a fusion power company that's also heavily connected to Sam Altman and hasn't actually done anything exceptional yet. So when they finally crack fusion, AI can just be powered by that!

[–] Sanctus 3 points 6 months ago (1 children)

Sounds like me trying to plan my Dyson Sphere at the beginning of a Stellaris game.

[–] SturgiesYrFase@lemmy.ml 1 points 6 months ago

Been trying out the demo for a game called Airborne Empire. You should give it a go, you'll have more moments like this.

[–] brsrklf@jlai.lu 7 points 6 months ago* (last edited 6 months ago) (1 children)

Beneficial AGI Summit

Oh good, they're the ones who want a nice AI overlord.

[–] uriel238@lemmy.blahaj.zone 4 points 6 months ago

To be fair, current human overlords are presenting a strong case that human beings cannot govern themselves at large scale (e.g. more than 500 people in a society), so a nice, public-serving AI overlord is a pretty good pipe dream.

I don't know if it's feasible at all, but man we'd be lucky if we made one.

[–] 0x0@programming.dev 6 points 6 months ago (1 children)

Right around the time of the Linux desktop, then.

[–] KISSmyOSFeddit 2 points 6 months ago (1 children)

The year of the Linux desktop is already here, just not the way the geeks hoped. Most people do their everyday computing on phones now, and most phones run a Linux kernel.
Windows 11 ships with WSL, and the entire OS is mostly a front-end for Microsoft's cloud services now, which run on Linux.

[–] 0x0@programming.dev 3 points 6 months ago

...none of those are desktops?

[–] tal@lemmy.today 4 points 6 months ago* (last edited 6 months ago) (1 children)

I mean, theoretically, yes.

I mean, it could be that some guy in his basement has been working on it in total secrecy and it shows up tomorrow.

But my guess is that the likely timeline is further out than either.

I seriously doubt that what we're going to see is a single "Eureka" moment that gives us both AGI and manages to greatly surpass humans.

I would expect to see a more-incremental process, where publicly-visible systems get closer and closer to approaching that. And what OpenAI and friends are doing isn't close. It's cool, is useful for a lot of things, but isn't a generalized system for solving problems.

Exactly. If you look at timelines of significant human achievement, it looks like innovation comes in waves. But if you zoom in a bit, it's really a bunch of ripples adding up to pretty steady innovation.

For example, EVs exploded with Tesla, but they'd been around for decades; they just didn't catch on. The innovation to get there was steady, but adoption was quick once a viable product was available and marketed well.

The same is true for AI. I learned about generative AI in college over a decade ago, and the source material was already old (IIRC the old Lisp machines were supposed to be used for AI). It exploded because it got just good enough to be viable, and it was marketed well. The actual innovation was quite gradual.

[–] autotldr@lemmings.world 1 points 6 months ago

This is the best summary I could come up with:


We may not have reached artificial general intelligence (AGI) yet, but as one of the leading experts in the theoretical field claims, it may get here sooner rather than later.

During his closing remarks at this year's Beneficial AGI Summit in Panama, computer scientist and haberdashery enthusiast Ben Goertzel said that although people most likely won't build human-level or superhuman AI until 2029 or 2030, there's a chance it could happen as soon as 2027.

After that, the SingularityNET founder said, AGI could then evolve rapidly into artificial superintelligence (ASI), which he defines as an AI with all the combined knowledge of human civilization.

Last fall, for instance, Google DeepMind co-founder Shane Legg reiterated his more than decade-old prediction that there's a 50/50 chance that humans invent AGI by the year 2028.

Best known as the creator of Sophia the humanoid robot, Goertzel has long theorized about the date of the so-called "singularity," or the point at which AI reaches human-level intelligence and subsequently surpasses it.

Then there's the assumption that the evolution of the technology would continue down a linear pathway as if in a vacuum from the rest of human society and the harms we bring to the planet.


The original article contains 524 words, the summary contains 201 words. Saved 62%. I'm a bot and I'm open source!