this post was submitted on 17 Aug 2023
486 points (96.0% liked)
Technology
At some level, isn't what a human brain does also effectively some form of very, very complicated mathematical algorithm, just implemented not in computer models but in the behavior of physical systems (neurons in the brain interacting in various ways) under the physical laws of the universe? We don't yet know everything about how the brain works, but we do at least know that it's a physical object that does something with the information it's given as inputs (the senses). Given that we don't know exactly how things like understanding and learning work in humans, can we really be absolutely sure that what these machines do doesn't qualify?
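(To make the "brain as algorithm" intuition concrete, here's a minimal sketch of the classic artificial-neuron abstraction. The inputs, weights, and bias are made-up illustrative values, and a real biological neuron is vastly more complicated, but the point is that its input/output behavior can, at some level, be written down as a mathematical function.)

```python
# A single artificial neuron: the textbook abstraction behind the idea that
# a neuron's input/output behavior is, at bottom, a mathematical function.
# All values here are arbitrary, purely for illustration.

def neuron(inputs, weights, bias):
    # Weighted sum of incoming signals, then a simple threshold nonlinearity.
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 if activation > 0 else 0  # the neuron "fires" or it doesn't

# Three incoming "sensory" signals with different connection strengths.
print(neuron([1, 0, 1], [0.5, -0.2, 0.8], bias=-1.0))  # -> 1 (fires)
print(neuron([0, 1, 0], [0.5, -0.2, 0.8], bias=-1.0))  # -> 0 (doesn't)
```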
To be clear, I'm not really trying to argue that what we have is a true AI, or that what these models do isn't just some very convoluted statistics. I've just had a nagging feeling in the back of my head ever since ChatGPT and the like started getting popular: can we really be sure that this isn't (a very simple form of) what our brains, or at least some part of them, actually do, and we just can't see it that way because that's not what it "feels" like from the inside?

Or, assuming it isn't, if someone made a machine that really did exhibit knowledge and creativity, using the same mechanism as humans or a similar one, how would we recognize it, and how would it look different from what we have now? (Assuming it's not a sci-fi style artificial general intelligence that's essentially just a person, but some hypothetical dumb machine that nevertheless possesses genuine creativity or knowledge.) It feels somewhat strange to declare with certainty that a machine that mimics the symptoms of understanding (it can talk at least somewhat like a human, and can explain subjects in a manner that sometimes appears thought out; it can also be dead wrong, of course, but then again, so can humans) definitely does not possess anything close to actual understanding, when we don't even know entirely what understanding physically entails in the first place.
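(As a toy illustration of the "convoluted statistics" view: at its core, a language model picks the next word according to an estimated conditional probability. The sketch below is a hypothetical bigram counter with a made-up corpus, nothing like a real LLM in scale or mechanism, but it shows the same basic task of predicting the next word from context.)

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a corpus,
# then predict the most frequent successor. Real LLMs replace these raw
# counts with billions of learned parameters, but the underlying task is
# the same: estimate P(next word | context). The corpus here is made up.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    # Return the most common successor of `word` seen in the corpus.
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # -> 'cat' ("cat" follows "the" twice; 'mat'/'fish' once each)
print(predict("cat"))  # -> 'sat' ('sat'/'ate' are tied; ties break by first occurrence)
```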