This is not on Waymo. It's on the absolute sold-out pieces of shit that allow Waymo and other cunts like Elon to experiment with human lives for money.
Looks like the Revisionist History podcast might need to revise their episode about Waymo... 😅
People, and especially journalists, need to get this idea of robots as perfectly logical computer code out of their heads. These aren't Asimov's robots we're dealing with. Journalists still cling to the idea that all computers are hard-coded. You still sometimes see people navel-gazing on self-driving cars, working the trolley problem. "Should a car veer into oncoming traffic to avoid hitting a child crossing the road?" The authors imagine that the creators of these machines hand-code every scenario, like a long series of if statements.
But that's just not how these things are made. They are not programmed; they are trained. In the case of self-driving cars, they are given a bunch of video footage and radar records, along with the driver inputs made in response to those conditions, and the AI is trained to map the camera and radar inputs to whatever the human drivers did.
This behavior isn't at all surprising. Self-driving cars, like any similar AI system, are not hard-coded, coldly logical machines. They are trained on us, on our responses, and they exhibit all of the mistakes and errors we make. The reason Waymo cars don't stop at crosswalks is that human drivers don't stop at crosswalks. The machine is simply copying us.
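To make the "trained, not programmed" point concrete, here's a minimal behavioral-cloning sketch. Everything in it (names, dimensions, architecture) is invented for illustration; it's the general shape of imitation learning, not Waymo's actual stack:

```python
import torch
import torch.nn as nn

# A minimal behavioral-cloning policy: map sensor features to the
# control inputs a human driver produced in the same situation.
class DrivingPolicy(nn.Module):
    def __init__(self, sensor_dim: int = 512, action_dim: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(sensor_dim, 256),
            nn.ReLU(),
            nn.Linear(256, action_dim),  # e.g. steering, throttle, brake
        )

    def forward(self, sensors: torch.Tensor) -> torch.Tensor:
        return self.net(sensors)

policy = DrivingPolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

def train_step(sensors: torch.Tensor, human_action: torch.Tensor) -> float:
    # The only supervision signal is "what the human did". If the humans
    # in the data rolled through crosswalks, the loss literally rewards
    # the model for rolling through crosswalks.
    optimizer.zero_grad()
    loss = loss_fn(policy(sensors), human_action)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Note there is no rule about crosswalks anywhere in there; there isn't even an obvious place to put one.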
All of which takes you back to the headline, "Waymo trains its cars to not stop at crosswalks". The company controls the input; it needs to be responsible for the results.
Some of these self-driving car companies have successfully lobbied to stop cities from ticketing their vehicles for traffic infractions. Here they are stating these cars are so much better than human drivers, yet they won't stand behind that statement; instead they demand special rules for themselves and no consequences.
The machine can still be trained to actually stop at crosswalks the same way it is trained to not collide with other cars even though people do that.
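Roughly, that would mean adding an explicit rule penalty on top of the imitation objective. A hypothetical sketch (the helper names and weights are invented, not anyone's real training pipeline):

```python
def imitation_loss(predicted, human):
    # squared error against what the human driver actually did
    return sum((p - h) ** 2 for p, h in zip(predicted, human))

def total_loss(predicted, human, collided, ran_crosswalk):
    loss = imitation_loss(predicted, human)
    if collided:
        loss += 100.0  # standard practice: never copy a human who crashed
    if ran_crosswalk:
        loss += 100.0  # nothing prevents weighting this just as heavily
    return loss

# A rolled-through crosswalk is penalized even when the human did it too.
print(total_loss([0.2, 0.0], [0.2, 0.0], collided=False, ran_crosswalk=True))
# -> 100.0
```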
I think the reason non-tech people find this so difficult to comprehend is a poor understanding of which problems are easy for (classically programmed) computers to solve versus which are hard.
```python
if person_at_crossing:
    stop()
```
To the layperson it makes sense that self-driving cars should be programmed this way. After all, this is a trivial problem for a human to solve. Just look, and if there is a person, you stop. Easy peasy.
But for a computer, how do you know? What is a 'person'? What is a 'crossing'? How do we know if the person is 'at/on' the crossing as opposed to simply near it or passing by?
To me it's this disconnect between the common understanding of computer capability and the reality that causes the misconception.
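To sketch what that one "easy" line hides in practice: `person_at_crossing` is itself the output of a perception stack full of uncertainty. The class names and thresholds below are illustrative assumptions, not any vendor's API:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str                      # e.g. "pedestrian", from a learned detector
    confidence: float               # 0.0 to 1.0, never exactly certain
    distance_to_crosswalk_m: float  # from a map-matching step, also noisy

def person_at_crossing(detections: list[Detection]) -> bool:
    # Every constant here is a judgment call: how sure is "sure"?
    # How near is "at" the crossing? A human answers these implicitly;
    # a machine needs explicit thresholds, and each one can be wrong.
    return any(
        d.label == "pedestrian"
        and d.confidence > 0.8
        and d.distance_to_crosswalk_m < 1.5
        for d in detections
    )
```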
I think you could liken it to training a young driver who doesn't share a language with you. You can demonstrate the behavior you want once or twice, but unless all of the observations demonstrate that behavior, you can't say, "yes, we specifically told it to do that."
Whether you call it programming or training, the designers still designed a car that doesn't obey traffic laws.
People need to get it out of their heads that AI is some kind of magical monkey-see, monkey-do. AI isn't magic; it's just a statistical model. Garbage in = garbage out. If the machine fails because it's only copying us, that's not the machine's fault, not AI's fault, not our fault: it's the programmer's fault. It's fundamentally no different than if they had designed a complicated set of logical rules to follow. Training a statistical model is programming.
Your whole "explanation" sounds like a tech-bro capitalist press-conference sound bite released by a corporation to avoid guilt for running down a child in a crosswalk.
It's not apologia. It's illustrating the foundational limits of the technology, and it's why I'm skeptical of most machine learning systems. You're right that it's a statistical model. But what people miss is that these models are black boxes. That is the crucial distinction between programming and training that I'm trying to get at. Imagine being handed a 10 million x 10 million matrix of real numbers and being told, "here, change this so it always stops at crosswalks." It isn't just some line of code that can be edited.
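You can make that concrete in a few lines. A toy stand-in for the problem (sizes shrunk so it actually runs; real models have millions to billions of parameters):

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal((1000, 1000))  # stand-in for trained parameters

print(weights[412, 7])  # a real number; which driving rule is this?
# No individual weight means "crosswalk"; the behavior is smeared across
# all of them. Changing it means retraining with new data or new loss
# terms, not editing a value, and there is no line of code to point at.
```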
The distinction between training and programming is absolutely critical here. You cannot hand-wave away that distinction. These models are trained the way we train animals; they aren't taught through hard-coded rules.
And that is a fundamental limit of the technology. We don't know how to program a computer to drive a car. We only know how to make a computer mimic human driving behavior. That means the computer can ultimately never perform better than an attentive, sober human, apart from some gains in reaction time and visibility. And any common error that humans frequently make will be duplicated in the machine.
I'm sure a strong legal case can be made here.
An individual driver breaking the law is bad enough, but the legal system can be "flexible" because it's hard to enforce the law against a generalized (bad) social norm, and each individual lawbreaker can argue their individual case, etc.
But a company systematically breaking the law on purpose is different. Scale matters here. There are no individualized circumstances and no crying to a judge that the fine will leave a single mother unable to pay rent this month. This is systematic and premeditated. Inexcusable in every way.
Like, a single cook forgetting to wash hands once after going to the bathroom is gross but a franchise chain building a business model around adding small quantities of poop in food is insupportable.
I really want to agree, but conservative Florida ruled that people don't have the right to clean water so I doubt the conservative Supreme Court will think we have the right to safe crosswalks
I am not intimately familiar with your country's legal conventions, but there is already a law (pedestrians having priority in crosswalks) that is being broken here, right?
Driving laws are broken by human drivers every day. The speed limit is secretly +15, rolling stops at stop signs are standard, and many treat a right turn on red as a yield instead. It's so common and normalized that actually enforcing all the driving laws now would take a massive increase in the number of police doing traffic control, assisted by cameras throughout the city to catch speeding and red-light running.
The truth is, North America has no interest in making its roads safer; you can see that in the way the roads are designed. Vehicle speed and throughput above all else. North America has had increasing pedestrian deaths over the last several years, while the rest of the developed world has had decreasing pedestrian deaths.
Sure. But this is different. This is similar to Amazon putting down in black and white as policy that delivery drivers must ignore stop signs.
I remember seeing a video from inside a Waymo waiting to make a left against traffic.
It turned the wheel before moving, in anticipation of the turn, which is normal for most drivers I see on the road.
It's also the exact opposite of what you should do for safety and legality.
Keep the wheel straight until you're ready to move; turning the wheel before the turn means that if someone rear-ends you, you get pushed into traffic, not along your current lane.
It's the social norm, not the proper move.
A left against traffic? Left turns don't go against traffic. That's right turns.
I was involved in a crash many years ago where this resulted in the car in front of us getting pushed into an oncoming car. We were stopped behind a car indicating to turn, hit from behind by a bus (going quite fast), pushed hard into the car in front and they ended up getting smashed from behind and in front.
Don't turn your wheel until you're ready to move, folks.
The recent Not Just Bikes video about self-driving cars is really good on this subject. Very dystopian.
And again... If I break the law, I get a large fine or go to jail. If companies break the law, they at worst get a small fine.
Why does this disconnect exist?
Am I so crazy to demand that companies are not only treated the same, but held to a higher standard? If I don't stop at a zebra crossing, that's me breaking the law once. Waymo programming their cars not to do that is multiple violations per day, every day. It's a company deciding they're above the law because they want more money. It's a company deciding to risk the lives of others to earn more money.
For me, all managers and engineers who signed off on this and worked on it should be jailed, the company should be barred from doing business for a month and required to immediately ensure all laws are followed, or else...
This is the only way we get companies to follow the rules.
Instead, though, we just ask companies to treat laws as suggestions, sometimes requiring small payments if they cross the line too far.
Do you have an example of a company getting a smaller fine than an individual for the same crime? Generally company fines are much larger.
Why does this disconnect exist?
Because the companies pay the people who make the law.
Stating the obvious here but it's the sad truth
Funny that you don't mention the company owners and directors, who are supposed to oversee what happens, are in practice the people applying the pressure to make it happen, and are the ones liable before the law.
I thought that was obviously implied.
If the CEO signed off on whatever is illegal, jail him or her too.
I work in a field related to this, so I can try to guess at what's happening behind the scenes. Initially, most companies had very complicated non-machine-learning algorithms (rule-based/hand-engineered) that solved the motion planning problem, i.e. how a car should move given its surroundings and its goal. This essentially means writing what is comparable to either a bunch of if-else statements or a sort of weighted graph search (there are other ways, of course). This works well for, say, 95% of cases, but becomes exponentially harder to make work for the remaining 5% (think a drunk driver or similarly rare and unusual events).
Solving the final 5% was where most turned to machine learning - they were already collecting driving data for training their perception and prediction models, so it's not difficult at all to just repurpose that data for motion planning.
So when you look at the two kinds of approaches, they have quite distinct advantages over each other. Hand-engineered algorithms are very good at obeying rules: if you tell one to wait at a crosswalk or obey precedence at a stop sign, it will do that no matter what. They are not, however, great at situations with higher uncertainty/ambiguity. For example, a pedestrian starts crossing the road outside a crosswalk and waits at the median to let you pass before continuing on; it's quite difficult to come up with a one-size-fits-all rule to cover these kinds of situations. Driving is a highly interactive behavior (lane changes, yielding to pedestrians, etc.), and rule-based methods don't do well with this because there is little structure to the problem. Some machine-learning-based methods, on the other hand, are quite good at handling these kinds of uncertain situations, and Waymo has invested heavily in building these up. I'm guessing they're trained with a mixture of human data and self-play (imitation learning and reinforcement learning), so they may learn some odd/undesirable behaviors. The problem with machine learning models is that they are ultimately a strong heuristic that cannot be trusted to produce a 100% correct answer.
I'm guessing that the way Waymo trains its motion-planning model, or some bias in the data, allows it to find an exploit that makes it drive through crosswalks. Usually this kind of thing is solved by creating a hybrid system: a machine learning system underneath, with a rule-based system on top as a guard rail.
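A hedged sketch of that hybrid pattern, with the learned planner proposing and a hand-written layer vetoing. All names and fields below are illustrative assumptions, not Waymo's design:

```python
from dataclasses import dataclass, replace

@dataclass
class Action:
    speed: float
    is_full_stop: bool = False

@dataclass
class Scenario:
    pedestrian_in_crosswalk: bool
    speed_limit: float

FULL_STOP = Action(speed=0.0, is_full_stop=True)

def guarded_plan(ml_planner, scenario: Scenario) -> Action:
    action = ml_planner(scenario)  # learned heuristic: strong, but no guarantees

    # Rule-based guard rails on top: small, auditable, and absolute.
    if scenario.pedestrian_in_crosswalk and not action.is_full_stop:
        return FULL_STOP  # veto the model outright
    if action.speed > scenario.speed_limit:
        action = replace(action, speed=scenario.speed_limit)
    return action
```

The guard rail is exactly the kind of hand-written rule the pure-ML approach gave up, which is why the hybrid buys back auditability for the cases that are easy to state.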
(Apologies for the very long comment, probably the longest one I've ever left)
Thanks for taking the time to comment! It's really informative
Move Fast and Break Things
Yeah it increasingly feels like the "things" is just "people" in whatever context
It's only people if the insurance premium goes up when you break 'em.
Only for hitting gold-member insurance or above. And our platinum members automatically get absolute priority in traffic. Every autonomous vehicle will yield and let you through like Moses through the Red Sea, so call now for that upgrade.
It's interesting how Waymos get more articles against them compared to Teslas.
There is a targeted campaign against Waymo.
How can I not think the journalist is in bad faith, when he complains that the Waymo doesn't stop... in a case where he'd run under another car?
As a European, when I see this video, the problem isn't the automated cars but the fact that cars are allowed to go this fast on a lane without a traffic light to protect pedestrians.
Edit:
Waymo admitted that it follows “social norms” rather than laws.
The reason is likely to compete with Uber, 🤦
Because they slowed traffic down too much, and there was a campaign against them about how much they slowed traffic, for respecting the law.
Waymo is running driverless (or at least remotely monitored) taxis all over SF. That's why they're getting headlines: they're out and being used at scale.
"The squeaky hinge gets the grease"
"The nail that sticks out farthest vets the hammer first"
These are metaphors to say "since Waymo is the one doing things like driverless taxis all over a city, they're getting news stories and social media posts"
Yes, things like Tesla suck too. But Tesla isn't operating a "driverless taxi" service. Yet.
I'm sure that, with something advertised as "driverless," Tesla's owner gets pissy about it and probably feeds into negative press against them, but that doesn't excuse what they do.
The reason is likely to compete with Uber, 🤦
A few points of clarity, as I have a family member who's pretty high up at Waymo. First, they don't want to compete with Uber. Waymo isn't really concerned with driverless cars that you or I would be owning/using, and they don't want (at this point, anyway) to start a new taxi service. Right now you order an Uber and a Waymo car might show up. They want the commercial side of the equation. How much would Uber pay to not have to pay drivers? How much would a shipping company fork over when it can jettison its $75k-$150k drivers?
Second, I know for a fact that upper management was pushing for the cars to drive like this. I can nearly quote said family member opining that if the cars followed all the rules of the road, they wouldn't perform well, couching it in the language of "efficiency." It was something like, "being polite creates confusion in other drivers. They expect you to roll through the stop sign or turn right ahead of them even if they have right of way." So now the Waymo cars do the same thing. Yay, "social norms."
A third point is that, as someone else mentioned, the cars are now trained, not 'programmed' with instructions to follow. Said family member spoke of when they switched to the machine learning model, and it was better than the highly complicated (and I'm dumbing down my description because I can't describe it well) series of if-else statements. With that training comes the issue of the folks in charge of things not knowing exactly what is going on. An issue that was described to me was their cars driving right at the edge of the lane, rather than in the center of it, and they couldn't figure out why or (at that point, anyway) how to fix it.
As an addendum to that third point, the training data is us, quite literally. They get and/or purchase people's driving data. I think at one time it was actual video; not sure now. So if 90% of drivers blast through at the moment the light turns red when they can, it's likely you'll eventually see that from Waymo. It's a weakness that ties right into that "social norm" thing. We're not really training safer driving by having machine drivers; we're just removing some of the human factors like fatigue or attention deficits. Again (and I'm paraphrasing, as I get frustrated with said family member's language), "how much do we really want to focus on low-percentage occurrences? Improving the miles-per-collision is best done at the big things."
Then maybe they should make sure to train them with footage and/or data of drivers who are following the traffic laws instead of just whatever drivers they happen to have data from.
Do they review all this training data to make sure data from people driving recklessly is not being included? If so, how? What process do they use to do that?
Being an Alphabet subsidiary I wouldn’t expect anything less, really.
How do you admit to intentionally ignoring traffic laws and not get instantly shut down by the NTSB?
The “social norms” line is likely because it was trained using actual driver data. And actual drivers will fail to stop. If it happens enough times in the training data and the system is tuned to favor getting from A to B quickly, then it will inevitably go “well it’s okay to blow through crosswalks sometimes, so why not most of the time instead? It saves me time getting from A to B, which is the primary goal.”
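Back-of-the-envelope version of that incentive, with entirely invented numbers: if the objective rewards trip time and only mildly penalizes a violation, rolling through becomes the "optimal" policy.

```python
time_saved_per_crosswalk_s = 8.0   # invented
reward_per_second_saved = 1.0      # invented
penalty_per_violation = 5.0        # invented, and too small

roll_through = (time_saved_per_crosswalk_s * reward_per_second_saved
                - penalty_per_violation)  # 8.0 - 5.0 = 3.0
stop = 0.0

print("roll through" if roll_through > stop else "stop")  # -> roll through
```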
Speaking as someone who lives and walks in SF daily, they're still more courteous to pedestrians than drivers are, and I'd be happy if they replaced human drivers in the city. I'd be happier if we got rid of all the cars, but I'll take getting rid of the psychopaths blowing through intersections.
Pedestrians have had it too easy long enough. If elected President I will remove the sidewalks and install moats filled with alligators and sharks with loose 2x4s to cross them. Trained snipers will be watching every crosswalk so if you want a shot at making it remember to serpentine. This is Ford™ country.
Mario needs to set these empty cars on fire.