this post was submitted on 10 Apr 2024

Futurology


top 5 comments
[–] [email protected] 10 points 5 months ago (3 children)

This sounds like marketing hype. Giving AI the ability to reason is a problem researchers have been failing to solve since Marvin Minsky's work in the 1960s, and there is still no fundamental breakthrough on the horizon. Even DeepMind's latest effort is tame; it just suggests getting AI to check itself more carefully against external sources.

[–] [email protected] 11 points 5 months ago* (last edited 5 months ago)

I think the recent LLMs are a breakthrough. Back in the 1970s people used expert systems, and you needed to put in lots and lots of if-then rules by hand, doing the tedious job of creating knowledge bases and semantic connections... and the results were pretty limited. Nowadays you just feed in all the text from the internet, do some magic, and it'll autocomplete "This is how you change the oil in your car in 7 easy steps" and answer the question. They can write emails or roleplay a helpful assistant. Translating text is something that works quite well. All of that was unthinkable 10 years ago and utter sci-fi. The transformer architecture from 2017 has been real progress. And I don't see why they shouldn't improve in capability in the time to come. I think it'll be incremental improvement, though, not a single paper after which I wake up tomorrow and somebody has invented sentient robots. All those marketing claims are just hype in my opinion, too.
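To make the contrast concrete: a 1970s-style expert system boils down to a working memory of facts plus hand-written if-then rules fired until nothing new can be derived. Here is a minimal forward-chaining sketch; the rule names and facts (the oil-change example from the comment) are invented for illustration, not from any real system.

```python
def forward_chain(facts, rules):
    """Repeatedly fire any rule whose conditions all hold,
    adding its conclusion to the fact set, until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Every rule and every connection has to be encoded by hand --
# the tedium the comment describes.
rules = [
    ({"engine_warm", "oil_drained"}, "ready_for_new_oil"),
    ({"ready_for_new_oil", "filter_replaced"}, "can_refill_oil"),
]

result = forward_chain({"engine_warm", "oil_drained", "filter_replaced"}, rules)
print(sorted(result))
```

The point of the contrast: here the knowledge engineering is the bottleneck, whereas an LLM absorbs the same kind of procedural knowledge implicitly from text.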

[–] [email protected] 2 points 5 months ago

there is still no fundamental breakthrough on the horizon.

I mean, we're currently in the midst of one, so that might be obscuring the horizon somewhat. Modern AIs are able to reason in ways that no AIs could previously, don't let the perfect be the enemy of the good.

[–] [email protected] 1 points 5 months ago

This sounds like marketing hype.

Yeah, this is exactly my thought as well. If AI truly had reasoning capabilities, it would be global front-page news.

[–] A_A 4 points 5 months ago

Hi Lugh,
thanks for this nice link (and article).

Researchers in the domain are expressing increasing worry about this topic:

"... Reinforcement learning (RL) agents that plan over a long time horizon far more effectively than humans present particular risks. (...)

https://www.science.org/doi/10.1126/science.adl0625
(I don't have access to the full article)

My opinion is that we should hope and worry (at least a little bit).