Fuck Cars
A place to discuss the problems of car-centric infrastructure and how it hurts us all. Let's explore the downsides of a world built around cars!
Rules
1. Be Civil
You may not agree on ideas, but please do not be needlessly rude or insulting to other people in this community.
2. No hate speech
Don't discriminate or disparage people on the basis of sex, gender, race, ethnicity, nationality, religion, or sexuality.
3. Don't harass people
Don't follow people you disagree with into multiple threads or into PMs to insult, disparage, or otherwise attack them. And certainly don't doxx any non-public figures.
4. Stay on topic
This community is about cars, their externalities in society, car-dependency, and solutions to these.
5. No reposts
Do not repost content that has already been posted in this community.
Moderator discretion will be used to judge reports with regard to the above rules.
Posting Guidelines
Since Lemmy doesn't have a flair system yet, let's make it easier to scan through posts by type by using tags:
- [meta] for discussions/suggestions about this community itself
- [article] for news articles
- [blog] for any blog-style content
- [video] for video resources
- [academic] for academic studies and sources
- [discussion] for text post questions, rants, and/or discussions
- [meme] for memes
- [image] for any non-meme images
- [misc] for anything that doesn’t fall cleanly into any of the other categories
People, and especially journalists, need to get this idea of robots as perfectly logical computer code out of their heads. These aren't Asimov's robots we're dealing with. Journalists still cling to the idea that all computers are hard-coded. You still sometimes see people navel-gazing about self-driving cars, working through the trolley problem: "Should a car veer into oncoming traffic to avoid hitting a child crossing the road?" These writers imagine that the creators of these machines hand-code every scenario, like a long chain of if statements.
But that's just not how these systems are built. They are not programmed; they are trained. In the case of self-driving cars, the model is fed video footage and radar records, along with the driver inputs that accompanied those conditions, and it is trained to map the camera and radar inputs to whatever the human drivers did.
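To make "map inputs to what the human did" concrete, here's a toy behavior-cloning sketch. Everything here is hypothetical and massively simplified: real systems use deep networks over camera and radar streams, not a two-weight linear model, but the training loop has the same shape — imitate recorded human actions.

```python
# Toy behavior cloning: fit a model to imitate recorded human driver
# actions. All names and the "policy" below are made up for illustration.
import random

random.seed(0)

def human_driver(lane_offset, speed_error):
    # Pretend ground-truth policy recorded from human drivers.
    return 0.8 * lane_offset - 0.3 * speed_error

# "Dataset": recorded sensor readings with the matching driver actions.
data = []
for _ in range(1000):
    x0, x1 = random.uniform(-1, 1), random.uniform(-1, 1)
    data.append(((x0, x1), human_driver(x0, x1)))

# Fit a tiny linear model to the human actions with plain SGD.
w0, w1, b = 0.0, 0.0, 0.0
lr = 0.01
for _ in range(50):
    for (x0, x1), y in data:
        err = w0 * x0 + w1 * x1 + b - y
        w0 -= lr * err * x0
        w1 -= lr * err * x1
        b -= lr * err

# The model copies whatever the humans did, bad habits included.
print(round(w0, 2), round(w1, 2))  # converges near 0.8 and -0.3
```

The point of the sketch: nobody wrote an if statement anywhere — the "policy" falls out of the data, so whatever the human drivers habitually did (good or bad) is what the model reproduces.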
This behavior isn't at all surprising. Self-driving cars, like any similar AI system, are not hard-coded, coldly logical machines. They are trained on us, on our responses, and they exhibit all of the mistakes and errors we make. The reason Waymo cars don't stop at crosswalks is that human drivers don't stop at crosswalks. The machine is simply copying us.
Training self-driving cars that way would be irresponsible, because the car would behave unpredictably and could be really dangerous. In reality, self-driving cars use AI only for tasks it is really good at, like object recognition (e.g. recognizing traffic signs, pedestrians, and other vehicles). The car uses all this data to build a map of its surroundings and tries to predict what the other participants are going to do. Then it decides whether it's safe to move the vehicle, and what path it should take. All of these things can be done algorithmically; AI is only necessary for object recognition.
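The split described above — a learned detector feeding a deterministic planner — can be sketched roughly like this. All names, thresholds, and the planner logic are hypothetical; this is the architectural idea, not any vendor's actual stack.

```python
# Hypothetical split: an ML detector produces labeled objects, then
# plain rule-based logic decides whether it is safe to proceed.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str            # e.g. "pedestrian", "vehicle" (from the ML stage)
    distance_m: float     # estimated distance to the object
    closing_speed: float  # m/s, positive = approaching us

def plan(detections, stop_distance_m=10.0, min_ttc_s=2.0):
    """Deterministic planner: brake if anything is too close or on a
    collision course; otherwise proceed. No ML in this stage."""
    for d in detections:
        ttc = (d.distance_m / d.closing_speed
               if d.closing_speed > 0 else float("inf"))
        if d.distance_m < stop_distance_m or ttc < min_ttc_s:
            return "BRAKE"
    return "PROCEED"

# Perception (the ML part) would fill this list from camera/radar.
scene = [Detection("pedestrian", 25.0, 14.0),  # closing fast: TTC ~1.8 s
         Detection("vehicle", 40.0, 0.0)]
print(plan(scene))  # prints BRAKE
```

Because the planner is ordinary code, its behavior is auditable and predictable given the detections — the unpredictability, if any, lives in the recognition stage that produced them.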
In cases such as this, just follow the money to find the incentives. Waymo wants to maximize their profits. This means maximizing how many customers they can serve as well as minimizing driving time to save on gas. How do you do that? Program their cars to be a bit more aggressive: don't stop on yellow, don't stop at crosswalks except to avoid a collision, drive slightly over the speed limit. And of course, lobby the shit out of every politician to pass laws allowing them to get away with breaking these rules.
According to some cursory research (read: Googling), obstacle avoidance uses ML to identify objects, then uses those identities to predict their behavior. That stage leaves room for the same unpredictability, doesn't it? Say you only have 51% confidence that a "thing" is a pedestrian walking a bike, and 49% that it's a bike on the move. The former has right of way and the latter doesn't. Or even 70/30, or 90/10.
There's some level where you have to set the confidence threshold to choose a course of action and you'll be subject to some ML-derived unpredictability as confidence fluctuates around it... right?
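That discontinuity is easy to show in miniature. This sketch is hypothetical (the function, threshold, and numbers are invented), but it captures the problem: the chosen action flips when confidence wobbles across the cutoff.

```python
# Sketch of the threshold problem: the action changes discontinuously
# as classifier confidence fluctuates around a cutoff. Hypothetical.
def yield_decision(p_pedestrian, threshold=0.5):
    # Treat the object as a pedestrian (who has right of way)
    # only when confidence clears the threshold.
    return "YIELD" if p_pedestrian >= threshold else "PROCEED"

# Confidence wobbling frame to frame around the cutoff:
for p in (0.51, 0.49, 0.70, 0.30):
    print(p, yield_decision(p))
# 0.51 -> YIELD but 0.49 -> PROCEED: a 2-point wobble flips the action.
```

Wherever you place the threshold, some pair of nearly identical frames will straddle it and produce opposite decisions — that's the ML-derived unpredictability the comment is describing.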
In such situations, the car should take the safest action and assume it's a pedestrian.
But mechanically that's just moving the confidence threshold to 100%, which is not achievable as far as I can tell. It quickly reduces to "all objects are pedestrians," which halts traffic.
This would only apply in ambiguous situations, when the confidence levels for "pedestrian" and "cyclist" are close to each other. If there's an object with a 20% confidence level of being a pedestrian, it's probably not one. But we're talking about the decision of whether or not to yield, which isn't really safety-critical.
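One way to formalize that idea (a hypothetical rule, not anything Waymo actually does): fall back to the safer "pedestrian" interpretation only when the top candidates are within some ambiguity margin, rather than treating every low-confidence blob as a pedestrian.

```python
# Hypothetical conservative classifier: prefer "pedestrian" only when
# the decision is genuinely ambiguous, not for every uncertain object.
def classify_conservatively(probs, ambiguity_margin=0.15):
    """probs: dict of class label -> confidence. If the top two candidates
    are within the margin and one of them is 'pedestrian', err on the
    side of 'pedestrian'; otherwise just take the top class."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    top, second = ranked[0], ranked[1]
    if "pedestrian" in (top[0], second[0]) and top[1] - second[1] < ambiguity_margin:
        return "pedestrian"
    return top[0]

# 51/49 is ambiguous -> err toward pedestrian; 20/80 is not.
print(classify_conservatively({"pedestrian": 0.51, "cyclist": 0.49}))  # pedestrian
print(classify_conservatively({"pedestrian": 0.20, "box": 0.80}))      # box
```

This avoids the "all objects are pedestrians" collapse from the previous comment: the safe fallback only kicks in near the decision boundary, where being wrong is most likely.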
The car should avoid collisions with any object, regardless of whether it's a pedestrian, cyclist, cat, box, fallen tree, or anything else, moving or not.