this post was submitted on 19 Jul 2024
440 points (98.5% liked)

[–] EliteDragonX 107 points 5 months ago (2 children)

I think OpenAI knows that if GPT-5 doesn’t knock it out of the park, then their shareholders won’t be happy, and people will start abandoning the company. And tbh, I’m not expecting miracles.

[–] bappity 85 points 5 months ago (3 children)

Over the course of ChatGPT's existence I've seen so many people hype it up like it's the future and will change everything, and after all this time it's still just a chatbot.

[–] EliteDragonX 38 points 5 months ago (2 children)

Exactly lol, it’s basically just a better Cleverbot.

[–] [email protected] 19 points 5 months ago (1 children)
[–] EliteDragonX 44 points 5 months ago (3 children)

It’s actually insane that there are huge numbers of people expecting AGI anytime soon because of a CHATBOT. Just goes to show these people have zero understanding of anything. AGI is more like 30+ years away minimum; Andrew Ng thinks 30-50 years, and I would say 35-55.

[–] [email protected] 37 points 5 months ago* (last edited 5 months ago) (2 children)

At this rate, if people keep cheerfully piling into dead ends like LLMs and pretending they're AI, we'll never have AGI. The idea of throwing ever more compute at LLMs to create AGI is "expect nine women to make one baby in a month" levels of stupid.

[–] [email protected] 17 points 5 months ago (3 children)

People who are pushing the boundaries are not making chat apps for GPT-4.

They are privately continuing research, like they always were.

[–] [email protected] 6 points 5 months ago (1 children)

But they’re also having to fight for more limited funding among a crowd of chatbot “researchers”. The funding agencies are enamored with LLMs right now.

[–] [email protected] 2 points 5 months ago

In my experience that's not the case. These teams are not very public but are very well funded.

[–] [email protected] 1 points 5 months ago

Thanks, Buster. It's reassuring to hear that.

[–] bulwark 11 points 5 months ago (1 children)

I wouldn't say LLMs are going away any time soon. Three or four years ago I did the Sentdex YouTube tutorial to build one from scratch to beat a Flappy Bird game. They are really impressive when you look at the underlying math, but the math isn't precise enough to be reliable for anything more than entertainment. Claiming it's AI, much less AGI, is just marketing bullshit, tho.
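For anyone wondering what "not precise enough" means in practice: the core step is just turning scores into a probability distribution and sampling from it, so the same prompt can produce different, confidently wrong answers. A toy sketch with made-up numbers (not from any real model):

```python
import numpy as np

# Toy next-token step (illustrative numbers only, not from any real model).
# An LLM ultimately just scores candidate tokens, turns the scores into a
# probability distribution, and samples from it.
vocab = ["Paris", "London", "Rome", "banana"]
logits = np.array([3.2, 1.1, 0.8, -2.0])

def softmax(x, temperature=1.0):
    z = (x - x.max()) / temperature
    e = np.exp(z)
    return e / e.sum()

probs = softmax(logits, temperature=0.8)
next_token = np.random.choice(vocab, p=probs)  # non-deterministic: rerun and it can change
print(dict(zip(vocab, probs.round(3))), "->", next_token)
```

Nothing in that loop guarantees the sampled token is true, only that it was probable given the training data.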

[–] [email protected] -3 points 5 months ago (1 children)

You're saying you think LLMs are not AI?

[–] bulwark 6 points 5 months ago (2 children)

I'm not sure what it is these days, but according to Merriam-Webster it's "the capability of computer systems or algorithms to imitate intelligent human behavior." So it's debatable.

[–] [email protected] 1 points 5 months ago

I don't think it's just marketing bullshit to think of LLMs as AI... the research community generally does, too. The AI section on arXiv is usually where you find LLM papers, for example.

It's not a crazy hype claim like the "AGI" thing, either... It doesn't suggest sentience or consciousness or any particular semblance of life (and I'd disagree with MW that it needs to be "human" in any way)... It's just a technical term for systems that exhibit behaviors based on training data rather than explicit programming.

[–] [email protected] 1 points 5 months ago

Basically, whenever we find that a human ability can be automated, the goalposts of the "AI" buzzword are silently moved to include it.

[–] [email protected] 11 points 5 months ago (1 children)
[–] bappity 2 points 5 months ago

AGI coming tomorrow! (tomorrow never comes)

[–] halcyoncmdr 7 points 5 months ago

AGI is the new Nuclear Fusion. It will always be 30 years away.

[–] [email protected] 2 points 5 months ago

All they had to do was make BonziBuddy link up with ChatGPT.

[–] EliteDragonX 18 points 5 months ago (1 children)

Tbh I think it’s a real possibility that OpenAI knows they can’t meet people’s expectations with GPT-5, so they’re posting articles like this, basically throwing out anything they can to see what sticks.

I think if GPT-5 doesn’t pan out, it’s time to accept that things have slowed down and that the hype cycle is over. This could very well mean another AI winter.

[–] [email protected] 11 points 5 months ago

We can only hope

[–] tdawg 17 points 5 months ago (1 children)
[–] [email protected] 5 points 5 months ago (3 children)

For what? I have zero use for any AI products

[–] [email protected] 17 points 5 months ago* (last edited 5 months ago)

My two use cases are project brainstorming and boilerplate code, which saves me a lot of time. For example, sometimes I find an interesting paper and want to try it out in Python. If the authors didn't provide code, it would take some time and trial and error to get it running. Instead, I just copy the whole paper into ChatGPT and get an initial script that sometimes even works on its first try. But that's not the point; I can do the last steps myself. It really is a time saver for me when it comes to programming.
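Roughly, that workflow looks something like the sketch below (using the openai Python package as one example; the file name, model name, and prompts are placeholders):

```python
# Minimal sketch of the "paste the paper in, get a starter script" workflow.
# File name, model name, and prompts are placeholders; treat the output as a
# rough first draft, not finished code.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

paper_text = open("interesting_paper.txt").read()  # text copied out of the paper

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": "You write runnable Python prototypes."},
        {"role": "user", "content": (
            "Write a first-draft Python script implementing the method "
            "described in this paper:\n\n" + paper_text
        )},
    ],
)
print(response.choices[0].message.content)  # starting point; the last steps are still manual
```

Pasting into the chat UI does the same thing; the API version just makes it repeatable.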

[–] [email protected] 17 points 5 months ago (1 children)

It's really useful for programming. It's not always right, but it has good approaches, and you can ask it to write tedious parts of your code like long switch statements. Most of my programming problems got solved just because I explained the problem to it, like rubber duck debugging.

[–] [email protected] 2 points 5 months ago (1 children)

I use it for programming questions.

  • immediate replies so I don't have to switch tasks while praying for an answer

  • no suggestions that I just do the whole thing differently

  • infinite patience

[–] Passerby6497 3 points 5 months ago (1 children)

Don't forget the other benefits of using AI for programming:

  • It may make up shit that doesn't exist or just give you wrong syntax

  • It will give you the same wrong answer repeatedly until you get irritated and it hangs up on you

  • It's way too goddamned excited while giving you shit answers until you run out of patience

I like using it for help, but goddamn do I want to throw my laptop out the window some days.

[–] [email protected] 2 points 5 months ago

💯. Although sometimes I feel like berating the AI is more satisfying; it's all its fault I haven't solved this yet!

[–] [email protected] 5 points 5 months ago

I'd be shorting the hell out of OpenAI and Nvidia if I had a good feel for the timeline. Who knows how long it'll take for the bubble to actually pop.