this post was submitted on 31 Dec 2024
544 points (96.9% liked)

Fuck Cars

WaPo journalist verifies that robotaxis fail to stop for pedestrians in a marked crosswalk 7 out of 10 times. Waymo admitted that it follows "social norms" rather than laws.

The reason is likely competition with Uber. 🤦

WaPo article: https://www.washingtonpost.com/technology/2024/12/30/waymo-pedestrians-robotaxi-crosswalks/

Cross-posted from: https://mastodon.uno/users/rivoluzioneurbanamobilita/statuses/113746178244368036

[–] WoodScientist 86 points 6 days ago (24 children)

People, and especially journalists, need to get this idea of robots as perfectly logical computer code out of their heads. These aren't Asimov's robots we're dealing with. Journalists still cling to the idea that all computers are hard-coded. You still sometimes see people navel-gazing about self-driving cars, working through the trolley problem: "Should a car veer into oncoming traffic to avoid hitting a child crossing the road?" The authors imagine that the creators of these machines hand-code every scenario, like a long series of if statements.

But that's just not how these things are made. They are not programmed; they are trained. In the case of self-driving cars, the system is given a large set of video footage and radar records, along with the driver inputs that humans made in response to those conditions. Training then learns a mapping from the camera and radar inputs to whatever the human drivers did.
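To make that concrete, here is a minimal behavioral-cloning sketch in Python with PyTorch. It is purely illustrative: the tiny network, the [steering, brake] outputs, and the driving_log dataset are all made up, and real pipelines are vastly larger. What matters is the shape of the training loop: nowhere in it does anyone write a rule about crosswalks.

import torch
import torch.nn as nn

# Toy stand-in for a real perception-and-control network: it flattens
# a 3x64x64 camera frame and predicts two controls, [steering, brake].
policy = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 64 * 64, 256),
    nn.ReLU(),
    nn.Linear(256, 2),
)

optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# driving_log is a hypothetical dataset yielding (sensor frames, human
# actions) pairs. The model is only ever rewarded for matching what the
# human drivers in the logs actually did; there is no "stop at
# crosswalks" rule anywhere, so it inherits whatever habits they had.
for frames, human_actions in driving_log:
    loss = loss_fn(policy(frames), human_actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()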

This behavior isn't at all surprising. Self-driving cars, like any similar AI system, are not hard-coded, coldly logical machines. They are trained on us, on our responses, and they exhibit all of the mistakes and errors we make. The reason Waymo cars don't stop at crosswalks is that human drivers don't stop at crosswalks. The machine is simply copying us.

[–] [email protected] 17 points 6 days ago* (last edited 6 days ago) (10 children)

I think the reason non-tech people find this so difficult to comprehend is a poor understanding of which problems are easy for (classically programmed) computers to solve and which are hard.

if (person_at_crossing) { stop(); }

To the layperson it makes sense that self-driving cars should be programmed this way. After all, this is a trivial problem for a human to solve: just look, and if there is a person, you stop. Easy peasy.

But for a computer, how do you know? What is a 'person'? What is a 'crossing'? How do we know if the person is 'at/on' the crossing as opposed to simply near it or passing by?
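And even when you reach for machine learning to answer those questions, you don't get the clean condition back. Here is a sketch of what person_at_crossing actually expands to in practice; every name in it is hypothetical, invented for illustration:

def person_at_crossing(camera_frame, crosswalk_polygon, detector):
    # "What is a person?" becomes a confidence score from a trained
    # detector, which can be wrong in both directions.
    for detection in detector.run(camera_frame):  # hypothetical API
        if detection.label == "person" and detection.confidence > 0.7:
            # "Is the person at the crossing?" becomes geometry on an
            # estimated position, which carries its own error.
            if crosswalk_polygon.contains(detection.estimated_position):
                return True
    return False

Every line there hides a threshold someone had to pick, and each threshold trades phantom stops against missed pedestrians.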

To me it's this disconnect between the common understanding of computer capability and the reality that causes the misconception.

[–] Starbuck 9 points 6 days ago

I think you could liken it to training a young driver who doesn't share a language with you. You can demonstrate the behavior you want once or twice, but unless all of the observations demonstrate the behavior you want, you can't say "yes, we specifically told it to do that."
