this post was submitted on 18 Oct 2024
787 points (98.4% liked)
Technology
you are viewing a single comment's thread
Exactly. The current rate is about 80 deaths per day in the US alone. Even if we had self-driving cars proven to be 10 times safer than human drivers, we'd still see 8 news articles a day about people dying because of them. Taking those stories as 'proof' that they're unsafe sets an impossible standard and effectively advocates for roughly 30,000 yearly deaths, as if it's somehow better to be killed by a human than by a robot.
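The back-of-the-envelope math behind those numbers, for anyone who wants to check it (the 80/day figure and the 10x factor are the comment's own assumptions, not official statistics):

```python
# Figures taken from the comment above, purely illustrative.
deaths_per_day = 80                  # assumed US road deaths per day
yearly_deaths = deaths_per_day * 365 # ~29,200, i.e. the "30,000 yearly deaths"

safety_factor = 10                   # hypothetical "10 times safer" claim
robot_deaths_per_day = deaths_per_day / safety_factor  # the "8 news articles a day"

print(yearly_deaths, robot_deaths_per_day)  # 29200 8.0
```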
If you get killed by a robot, it simply lacks the human touch.
If you get killed by a robot, you can at least die knowing your death was the logical option and not a result of drunk driving, road rage, poor vehicle maintenance, panic, or any other of the dozens of ways humans are bad at decision-making.
It doesn't even need to be logical, just statistically reasonable. You're literally a statistic anytime you interact w/ any form of AI.
Or the result of cost cutting...
or a flipped comparison operator, or a "//TODO test code please remove"
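A contrived sketch of the kind of bug being joked about here; the threshold and function names are made up, not from any real vehicle codebase:

```python
SAFE_GAP_M = 30.0  # hypothetical minimum safe following distance, in meters

def should_brake(gap_m: float) -> bool:
    # Intended check: brake when the gap has shrunk below the safe threshold.
    return gap_m < SAFE_GAP_M

def should_brake_buggy(gap_m: float) -> bool:
    # Flipped comparison operator: brakes only when there is plenty of room,
    # and sails on when the gap is dangerously small.
    return gap_m > SAFE_GAP_M  # TODO test code please remove

print(should_brake(10.0), should_brake_buggy(10.0))  # True False
```

A one-character diff, and the two functions disagree exactly in the situation that matters.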
The problem with this way of thinking is that there are ways to eliminate accidents even without eliminating self-driving cars. By dismissing the concern, you're saying nothing more than that it isn't worth exploring the kinds of improvements that would save lives.
"10 times safer than human drivers" (except during specific, visually difficult conditions that we know how to prevent but won't, because it's "10 times safer than human drivers"). In software, when we have reproducible conditions that make the program fail, we fix them, even though the bug probably won't kill anyone.
But they aren't and likely never will be.
And how are we to correct for a lack of safety then? With human drivers you obviously discourage dangerous driving through punishment. Who do you punish in a self-driving car?