this post was submitted on 19 Oct 2023
540 points (96.6% liked)

Black Mirror creator unafraid of AI because it’s “boring”: Charlie Brooker doesn’t think AI is taking his job any time soon because it only produces trash

[–] Telodzrum 50 points 8 months ago (35 children)

Nah, nah to all of it. An LLM is a parlor trick, and not a very good one. If we're ever able to build a general artificial intelligence, that's an entirely different story. But text prediction on steroids doesn't move the needle.
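(For anyone unfamiliar with the "text prediction" framing: at their core, LLMs are trained to guess the next token given the previous ones. Below is a deliberately tiny toy version of that idea, a bigram word model; real LLMs use transformers over subword tokens with billions of parameters, but the training objective is the same next-token guessing. Everything in the snippet is illustrative.)

```python
# Toy illustration of "text prediction": a bigram model that picks the
# next word based on counts of what followed it in the training text.
from collections import Counter, defaultdict
import random

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Sample the next word in proportion to how often it followed `word`."""
    counts = following[word]
    if not counts:
        return random.choice(corpus)  # never seen a successor: back off
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate a short continuation, one predicted token at a time.
out = ["the"]
for _ in range(6):
    out.append(predict_next(out[-1]))
print(" ".join(out))
```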

[–] [email protected] 16 points 8 months ago (3 children)

Sam Altman (creator of the freakish iris-scanning-based Worldcoin) would seem to agree. The current path for LLMs and GPT appears to be in something of a bind: to seriously improve on what they currently do, they need to do something different, not more of the same. And figuring out something different could be very hard. https://www.wired.com/story/openai-ceo-sam-altman-the-age-of-giant-ai-models-is-already-over/

At least that's what I understand of it.

[–] [email protected] 12 points 8 months ago* (last edited 8 months ago) (2 children)

He's not saying "AI is done, there's nothing else to do, we've hit the limit"; he's saying "bigger models don't necessarily yield better results like we had initially anticipated."

Sam recently went before Congress and advocated for limiting model sizes as a means of regulation because, at the time, he believed bigger would generally mean better outputs. What we're seeing now is that a model that is too large can have trouble producing truthful output, which is super important to us humans.

And honestly, I don't think anyone should be shocked by this. Our own human brains have different sections that control different aspects of our lives. Why would an AI brain be different?
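(Interestingly, one place this "specialized sections" idea already shows up in current models is mixture-of-experts routing, where a small gating network sends each input to one of several specialist sub-networks. A minimal toy sketch follows; all sizes, weights, and names are made up for illustration, and real MoE layers route per token to the top-k experts inside a transformer.)

```python
# Toy mixture-of-experts routing: a gate scores each expert for a given
# input, and only the top-scoring expert runs. Shapes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
d, n_experts = 8, 4

gate_w = rng.normal(size=(d, n_experts))           # gating network weights
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]

def moe_layer(x):
    """Route input x to the single expert the gate scores highest."""
    scores = x @ gate_w                            # one score per expert
    k = int(np.argmax(scores))                     # pick the top expert
    return experts[k] @ x, k

x = rng.normal(size=d)
y, chosen = moe_layer(x)
print(f"routed to expert {chosen}; output norm {np.linalg.norm(y):.2f}")
```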

[–] [email protected] 1 point 8 months ago

I gather that this is partly because dataset sizes haven't been going up with model sizes. That is likely to change soon, as synthetic data starts to overtake organic data in both quantity and quality.
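(For the curious: the "data hasn't kept up with parameters" point was made quantitative by DeepMind's Chinchilla paper (Hoffmann et al., 2022), which fit training loss as a function of both parameter count N and training tokens D. Here's a back-of-envelope sketch using the paper's published constants; the grid search and the compute budgets are purely illustrative, not anyone's actual method.)

```python
# Sketch of why "more data, not just more parameters" matters, using the
# parametric loss fit from the Chinchilla paper (Hoffmann et al., 2022):
#   L(N, D) = E + A / N**alpha + B / D**beta
# with the published fit E=1.69, A=406.4, B=410.7, alpha=0.34, beta=0.28,
# and the common approximation that training compute C ~= 6 * N * D FLOPs.
E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

def loss(N, D):
    return E + A / N**alpha + B / D**beta

def optimal_split(C, steps=2000):
    """Grid-search the model size N that minimizes loss at fixed compute C."""
    best = (float("inf"), 0.0, 0.0)
    for i in range(1, steps + 1):
        N = 10 ** (6 + 6 * i / steps)   # scan N from 1e6 to 1e12 params
        D = C / (6 * N)                 # tokens affordable at this size
        best = min(best, (loss(N, D), N, D))
    return best

for C in (1e21, 1e23, 1e25):            # three compute budgets in FLOPs
    L, N, D = optimal_split(C)
    print(f"C={C:.0e}: best N ~ {N:.1e} params, D ~ {D:.1e} tokens, loss ~ {L:.2f}")
```

Under this fit, compute-optimal training grows parameters and tokens at roughly the same rate, which is why the supply of training data becomes the binding constraint.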
