this post was submitted on 28 Oct 2023
166 points (97.7% liked)
Technology
But is this actually true? I hate that they just printed this without any attempt to verify it. Surely some independent body has looked into this by now.
It's not true. This is not the first time Cruise has been caught lying, and at some point an adult needs to step up and tell them to stop putting people in danger.
Even Waymo has commented in the past about Cruise playing fast and loose with the definitions of things that needed to be reported.
Waymo cars seem to operate much more sensibly than Cruise ones from what I've watched and read... although IMO that is mainly down to the car calling it quits much sooner and asking for an operator to take control, and driving in a different environment in general.
Cruise on the other hand seems to just carry on anyway, unless its lidar is blocked 😳
Yeah, I mean, some food for thought here is that Waymo started out as a research project and has been doing this since 2009 and they're ultra conservative with their behaviors. Before starting in 2009, the beginnings of the team were recruited from DARPA Grand Challenge participants. And even they have major mishaps.
Cruise, on the other hand, started out trying to sell retrofit hardware right away. Then tried convincing people they could do city driving right away. Now GM has revenue targets for them, like any adult business would, and they have no hope of ever accomplishing them. So, they're back to their old tricks, cutting down the number of miles driven for training models, rushing vehicles into service with no monitoring operators in them, deceiving investors and regulators about remote operations.
One is a slow, methodical money furnace that attempts to solve the larger problem set. The other is a fast moving money furnace that tries to get people to pay them for half measures.
Damn, Waymo has been around for that long? TIL
Waymo's progress is probably a good indicator of how far along we are with self-driving cars, IMO. Given that Waymo has their cars pretty thoroughly trained on set routes (well, even us humans need to learn or try various routes before we're fully confident on them sometimes), Cruise cheaping out on the whole training process is only going to accelerate their demise... especially when it's at the expense of pedestrians' safety.
If you really want your mind blown, the first autonomous vehicle to drive coast to coast in the US did it in 1995: a vehicle from Carnegie Mellon University called NavLab. It used lidar, cameras, radar, and ultrasonics. Literally the same stuff we're using today.
Pretty sure it IS factually true, but the real question is, "why?". Is it because everyone is wary around a car with a huge-ass camera and sensor system on top that doesn't have a driver? Or because the system is good?
I am sure it is true in at least some sense because they would be called on an outright lie but there are many ways you can deceive with true numbers. And I don’t trust them to be fully honest.
But if it is accurate I’d like to see an independent analysis rather than the company’s spin on it.
Oh definitely. It could have nothing to do with the quality of the cars themselves if their driving record comes from everyone else avoiding them on the road.
From their privacy policy:
https://getcruise.com/legal/us/privacy-policy/
Driverless cars are certainly less error-prone overall than human operated ones. Distraction, sleepiness, intoxication, hubris, and other common "human error" causes of accidents are eliminated. Now we're seeing, though, that human beings - even pretty average ones - are still able to make better judgments in unique situations.
Because the recent incidents have been so laughably stupid from a human perspective, the instinct is to doubt the accuracy of driverless cars in all situations. The robots are able to do the comparatively simple things extremely well. It's just the more complex things they still have trouble with - so far. They're still safer than human operators, and will only continue to get better.
Humans make the same mistakes though. Backup cameras were added to cars because humans kept running over people, especially kids. People block emergency vehicles all the time.
Yes, the automation will always have room for improvement, but incidents like the current "newsworthy" ones rarely make the news when a human driver does the exact same thing.
I would be pretty confused as well if someone ran up to my car and stuck a traffic cone on it.
Would you sit stopped in traffic for 20 minutes looking dumbfounded?
I would if I couldn't get out of the car and remove it.
I suspect there is something more to this than just that. After all, the car in question did this:
It seems like there are unsolvable safety problems going on.
Yes, the car does not appear to have safety features that let it know a body is caught underneath, but it did try to get out of traffic after the collision.
Since this never happens to human drivers that means autonomous cars are unfeasible.
Or it is an opportunity to add some additional sensors underneath that will make it miles better than human drivers.
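To make that concrete, here's a minimal, entirely hypothetical sketch (not Cruise's or anyone's actual stack) of the safety interlock people are describing: an undercarriage contact sensor that overrides the normal "clear the lane after a collision" behavior and forces an immediate stop instead. All names here are invented for illustration.

```python
# Hypothetical sketch: a hard safety override where any undercarriage
# contact reading forces an emergency stop, taking priority over the
# usual "pull over to clear the lane" response to a collision.

from dataclasses import dataclass

@dataclass
class SensorState:
    undercarriage_contact: bool  # e.g. a pressure/contact strip under the chassis
    collision_detected: bool     # from existing impact sensors

def next_action(state: SensorState) -> str:
    """Pick the vehicle's next action with a hard safety override."""
    if state.undercarriage_contact:
        # Something (or someone) may be under the car: never move.
        return "emergency_stop"
    if state.collision_detected:
        # Collision but nothing trapped underneath: clear the lane.
        return "pull_over"
    return "continue"

# Even right after a collision, the undercarriage override wins.
print(next_action(SensorState(undercarriage_contact=True, collision_detected=True)))
# emergency_stop
```

The point of the sketch is the priority ordering: the "get out of traffic" behavior is sensible on its own, but only if a higher-priority check confirms nothing is trapped underneath.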
Really, the main problem with autonomous cars at this point in time is a combination of the companies hiding issues and the public expecting perfection. More transparency and a third-party comparison to human drivers would be the best way to both improve automation and gain public trust, once people actually see how bad human drivers can be.
Also, charge corporations for beta-testing on the fucking public... they're using taxpayer-funded roads and putting our lives at risk for their profits. They should share those profits far, far more than they do.
You would think a self-driving car could have 360 degrees of vision and not run into things, whether it's a firetruck or a cardboard box or a person. That should be job number one for self-driving.
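For what "360 degrees of vision" actually requires, here's a toy illustration (a made-up sensor layout, not any vendor's real spec) of checking that overlapping sensor fields of view leave no blind spot in azimuth around the vehicle:

```python
# Toy blind-spot check: given each sensor's (start angle, field-of-view
# width) in degrees, verify the union of the arcs covers the full circle.

def covers_full_circle(fovs: list[tuple[float, float]]) -> bool:
    """Return True if the sensor arcs cover all 360 degrees of azimuth."""
    # Mark covered degrees on a 1-degree grid (coarse, but fine for a demo).
    covered = [False] * 360
    for start, width in fovs:
        for d in range(int(width)):
            covered[(int(start) + d) % 360] = True
    return all(covered)

# Four 120-degree cameras at 90-degree spacing overlap into full coverage.
cameras = [(0, 120), (90, 120), (180, 120), (270, 120)]
print(covers_full_circle(cameras))      # True
print(covers_full_circle(cameras[:2]))  # False: a rear gap remains
```

Of course, full angular coverage is only the geometric part of "not running into things"; the thread's point is that perception and judgment on top of that coverage are where the hard failures happen.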
It would be nice if we had a Ralph Nader for AI driving.
He did a lot for safety decades ago. Feels like we need similar now.