this post was submitted on 22 Feb 2024
488 points (96.2% liked)

Technology

Google apologizes for ‘missing the mark’ after Gemini generated racially diverse Nazis::Google says it’s aware of historically inaccurate results for its Gemini AI image generator, following criticism that it depicted historically white groups as people of color.

[–] FooBarrington -1 points 10 months ago* (last edited 10 months ago) (44 children)

I'll get the usual downvotes for this, but:

Because the AI doesn't know anything.

is untrue, because current AI fundamentally *is* knowledge. Intelligence is fundamentally compression, and that's what the training process does: it compresses large amounts of data into a much smaller set of weights (and of course loses many details in the process).

But there's no way to argue that AI doesn't know anything when you look at its ability to reproduce a great number of facts from a comparatively small set of activations. Yes, not everything it produces is accurate, and it might never be perfect; I'm not trying to argue that "it will necessarily get better". But any argument that labels current AI technology as "not understanding" ends up resorting to a "special human sauce" claim, because the fundamental compression mechanisms behind it are the same ones behind our own intelligence.
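A toy illustration of the compression framing (this is not how LLM training actually works, just an analogy): a compressor, like a language model, can only shrink data by exploiting predictable structure. Text full of regularities compresses to a small fraction of its size, while random text barely compresses at all:

```python
import random
import string
import zlib

# Structured "corpus": repeated factual templates with predictable patterns.
facts = "".join(f"The capital of country {i % 10} is city {i % 10}. " for i in range(200))

# Random text of the same length: no structure to exploit.
rng = random.Random(0)
noise = "".join(rng.choice(string.ascii_letters + " ") for _ in range(len(facts)))

structured_size = len(zlib.compress(facts.encode()))
random_size = len(zlib.compress(noise.encode()))

# The structured corpus compresses far more than the random one,
# because the compressor "learned" the repeated template.
print(len(facts), structured_size, random_size)
```

The point of the analogy: whatever survives aggressive compression is exactly the regular, generalizable structure in the data, which is one common way of framing what "knowledge" in a trained model amounts to.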

Edit: yeah, this went about as expected. I don't know why the Lemmy community has so many weird opinions on AI topics.

[–] [email protected] 3 points 10 months ago* (last edited 10 months ago) (8 children)

Would it be accurate to say that while current AI does have the knowledge, it lacks the reasoning skills needed to apply that knowledge correctly?

[–] FooBarrington -3 points 10 months ago (6 children)

I don't think that's generally true, because current AI can solve some reasoning tasks very well. But it's definitely an area where they're lacking.

[–] rambaroo 3 points 10 months ago* (last edited 10 months ago) (1 children)

It isn't reasoning about anything. A human did the reasoning at some point, and the LLM's dataset includes that original information. The LLM is simply matching your prompt against that training data; it's not doing anything else. It's not thinking about the question you asked it. It's a glorified keyword search.

It's obvious you have no idea how LLMs work at a fundamental level, yet you keep talking about them like you're an expert.
