this post was submitted on 17 May 2024
503 points (94.8% liked)

[–] [email protected] -5 points 4 months ago (4 children)

I just want to understand why people get so passionate about explaining how things work, especially in a field where even the experts themselves don't understand how it works. It's just an interesting phenomenon to me.

[–] Fungah 5 points 4 months ago (1 children)

The "not understanding how it works" thing isn't universal in AI, from my understanding. And people understand how a lot of it works even then. There may be a few mysteries, but it's not sacrificing chickens to Jupiter either.

[–] [email protected] -1 points 3 months ago

Nope, it's actually not understood. Sorry to hear you don't understand that.

[–] mriormro 4 points 4 months ago (1 children)

What exactly are your bona fides that you get to play the part of the exasperated "expert" here? And, more importantly, why should I give a fuck?

I constantly hear this shit from other self-appointed experts in this field as if no one is allowed to discuss, criticize, or form opinions on the implications of this technology besides those few who 'truly understand'.

[–] [email protected] -1 points 3 months ago

Did you misread something? Nothing you said is relevant.

[–] [email protected] 3 points 4 months ago (2 children)

Seems like there are a lot of half-baked ideas about AI online that come from assumptions based on some sci-fi ideal or something. People are shocked that an artificial intelligence gets things wrong, when they themselves have probably made a handful of incorrect assumptions today. This Tom Scott talk is a great explanation of how truth can never be programmed into anything, and will never really be obtainable to humanity in the foreseeable future.

[–] [email protected] 1 points 4 months ago

Here is an alternative Piped link(s):

Tom Scott talk

Piped is a privacy-respecting open-source alternative frontend to YouTube.

I'm open-source; check me out at GitHub.

[–] [email protected] 1 points 3 months ago

Yeah! That's probably a good portion of it, but exacerbated by the general hate for AI, which is understandable given the conglomerates' abusive training-data practices.

[–] ClamDrinker 1 points 4 months ago* (last edited 4 months ago) (1 children)

Hallucinations in AI are fairly well understood, as far as I'm aware. They're explained at a high level on the Wikipedia page for the topic. And I'm honestly not making any objective assessment of the technology itself. I'm making a deduction based on the laws of nature and biological facts about real-life neural networks. (I do say AI is driven by the data it's given, but that's something even a layman might know.)

How to mitigate hallucinations is definitely something the experts are actively discussing, with limited success so far (and I certainly don't have an answer there either), but a true fix should be impossible.

I can't exactly say why I'm passionate about it. In part I want people to be informed about what AI is and is not, because knowledge about the technology allows us to make more informed decisions about the place AI takes in our society. But I'm also passionate about human psychology and creativity, and what we can learn about ourselves from the quirks we see in these technologies.

[–] [email protected] -1 points 3 months ago

Not really, no, because these aren't biological, and the scientists who work with them are more interested in understanding why they work at all.

It is very interesting how the brain works, and our sensory processing is predictive in nature, but no, it's not relevant to machine learning, which works completely differently.