How about this: there's no difference between actual free will and an infinite universe of infinite variables affecting your programming, resulting in the belief that you have free will. Heck, a couple million variables is more than enough to confuddle these primate brains.
As a kid learning about programming, I told my mom that I thought the brain was just a series of if/then statements.
I didn't know about switch statements yet.
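(A playful sketch of that idea in Python, using the match statement as its switch equivalent; the stimuli and responses here are invented for the joke:)

```python
# The "brain as a series of if/then statements" model, upgraded to a switch.
# Python 3.10+'s match statement stands in for switch; the cases are made up.
def brain(stimulus: str) -> str:
    match stimulus:
        case "hungry":
            return "eat"
        case "tired":
            return "sleep"
        case _:
            return "scroll the internet"

print(brain("hungry"))  # eat
```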
Ok, but then you run into: why do billions of variables create free will in a human but not in a computer? Do they create free will in a pig? A slug? A bacterium?
Because billions is an absurd understatement, and computers have constrained problem spaces far less complex than even the most controlled life of a lab rat.
And who the hell argues that animals don't have free will? They don't have full sapience, but they absolutely have will.
So where does it end? Slugs, mites, krill, bacteria, viruses? How do you draw a line that says free will on this side, just mechanics and random chance on that side?
I just don't find it a particularly useful concept.
I'd say it ends when you can't predict with 100% accuracy, 100% of the time, how an entity will react to a given stimulus. With current LLMs, if I run one with the same input it will always do the same thing. And I mean really the same input, not putting the same prompt into ChatGPT twice and getting different results because there's an additional random number generator I don't have access to.
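A minimal sketch of that point, with a toy stand-in for a model (the toy_llm function and its vocabulary are invented for illustration): given the same input and the same seed, the output is fully reproducible, and the apparent randomness comes entirely from a seed you don't control.

```python
import random

def toy_llm(prompt: str, seed: int) -> str:
    # Hypothetical stand-in for a model's sampler: rng is the
    # "additional random number generator" mentioned above.
    rng = random.Random(seed)
    vocab = ["yes", "no", "maybe", "42"]
    return rng.choice(vocab)

print(toy_llm("same prompt", seed=1))  # deterministic
print(toy_llm("same prompt", seed=1))  # identical output every run
print(toy_llm("same prompt", seed=2))  # differs only because the seed did
```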
So if I modify an LLM to have true randomness embedded within it (e.g. using a true random number generator based on radioactive decay), does it then have free will?
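As a sketch of that thought experiment: swap the seeded PRNG for an entropy source outside the program's control. Here os.urandom stands in for the hypothetical decay-based hardware:

```python
import os

def nondeterministic_choice(options: list[str]) -> str:
    # One byte of OS entropy picks the option; identical inputs can now
    # produce different outputs, since nothing in the code fixes the result.
    # os.urandom is a stand-in; true radioactive-decay hardware is assumed.
    return options[os.urandom(1)[0] % len(options)]

print(nondeterministic_choice(["yes", "no", "maybe"]))
```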
Why don't they have free will?
If viruses have free will, when they are machines made of RNA that just inject code into other cells to make copies of themselves, then the concept is meaningless (and it also applies to computer programs far simpler than LLMs).
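For scale, a classic quine, a program whose entire behavior is producing a copy of itself, is about as simple as "inject code to make copies of yourself" gets in software:

```python
# A classic Python quine: its only behavior is printing its own source,
# the software analogue of "inject code into other cells to copy yourself".
s = 's = %r\nprint(s %% s)'
print(s % s)
```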