this post was submitted on 02 Nov 2024
135 points (93.5% liked)


The probe homes in on one of Tesla's most eyebrow-raising decisions regarding its driver assistance package: the insistence on relying exclusively on camera sensors instead of the LiDAR and radar used by its competitors, which CEO Elon Musk has long derided as a "crutch."

In 2022, the company went all-in on cameras, ditching ultrasonic sensors in its vehicles altogether — a decision that could prove to be a major mistake as it struggles to catch up with its competition and has now promised robust self-driving capabilities to owners who may lack the necessary sensor hardware.

[–] DarkSurferZA 36 points 3 weeks ago (2 children)

This is one of the claims Elon Musk repeats a lot when he says humans drive with their eyes, but it's untrue. We actually have a wide array of sensory systems that help us drive. Firstly, we use our ears, eyes and body motion to drive. Secondly, unlike a fixed camera mounted on a car, our heads are in constant motion. This means we cover blind spots better than a fixed camera can, and we can tell whether it's a small deer really close by or a large deer really far away. Our brains take multiple 3D images and stitch them together to determine size, distance and speed.

The best way to expose the "driving with your eyes" fallacy is to look at FPV RC cars and see how much sensory information you've been robbed of while trying to pilot the vehicle.

[–] [email protected] 6 points 3 weeks ago (1 children)

Not only are our heads in constant motion. Our eyes are also always in motion. We’re constantly, quickly and accurately shifting our attention to different points in our vision.

[–] [email protected] 5 points 3 weeks ago (2 children)

That's mostly accounting for the differences in resolution and motion sensitivity across different parts of the eye. With enough cameras, a car should be able to "see" more than we can at any one time.

[–] DarkSurferZA 7 points 3 weeks ago

No, not really true.

The way AI vision systems have been implemented in cars, the cameras produce a flat image that gets run through some fancy AI to arrive at a conclusion. But what if one camera sees a child and, for whatever reason, the other sees a clear road? The AI is not trained to process vision the way we do, where we use all our various senses, including the conflicting information from each eye, to arrive at a conclusion. It just merges first and then processes. It should process each sensor's input separately, then reprocess to arrive at a combined conclusion.
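The merge-then-process versus process-then-merge distinction can be sketched in a toy example. Everything here is illustrative, not Tesla's actual pipeline: the "detector" is a fake one-liner and the "frames" are just lists of brightness values, but it shows how averaging raw inputs first can wash out a hazard that only one camera sees.

```python
# Hypothetical sketch of the two fusion strategies. All names and
# thresholds are made up for illustration.

def early_fusion(frames, detect):
    """Merge camera frames first, then run one detector on the blend."""
    merged = [sum(px) / len(px) for px in zip(*frames)]  # naive pixel average
    return detect(merged)

def late_fusion(frames, detect):
    """Run the detector per camera, then take the most cautious verdict."""
    return max(detect(frame) for frame in frames)

# Toy "detector": reports 1.0 (child present) if any pixel is very bright.
detect = lambda frame: 1.0 if max(frame) > 0.9 else 0.0

cam_a = [0.1, 0.95, 0.2]  # this camera sees the child
cam_b = [0.1, 0.10, 0.2]  # this one sees clear road

print(early_fusion([cam_a, cam_b], detect))  # 0.0 - averaging washed the child out
print(late_fusion([cam_a, cam_b], detect))   # 1.0 - per-camera processing keeps it
```

Real pipelines are vastly more complex, but the trade-off is the same: fusing raw data early lets one bad input dilute another, while fusing per-sensor conclusions late lets the system act on the most cautious reading.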

[–] [email protected] 5 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

To some extent you are correct, but also note that the cameras in Teslas are not installed in pairs, so they don't have depth perception. And since the cars don't have lidar or radar, they have no alternate method of measuring depth and distance.

[–] NotMyOldRedditName 2 points 3 weeks ago* (last edited 3 weeks ago)

The cameras have overlaps which can be used to measure depth and distance.

There are multiple front cameras

The side pillar camera has overlap with the side rear facing

The 2 side rear facing each have overlap with the rear.

Edit: I imagine the weakest depth/distance perception with the current setup would be at the side pillar cameras. But they could also probably do some calculations with how fast objects pass from front to rear.
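The overlap argument above is classic stereo triangulation: two cameras with a known separation see the same feature at slightly different image positions, and that disparity gives distance. A minimal sketch follows; the focal length, baseline, and disparity numbers are made up and do not reflect Tesla's actual camera geometry.

```python
# Stereo depth from disparity: Z = f * B / d, for a rectified camera pair.
# All parameter values below are hypothetical, for illustration only.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Distance to a feature seen by two overlapping cameras.

    focal_px     -- focal length in pixels
    baseline_m   -- separation between the two cameras in metres
    disparity_px -- horizontal shift of the feature between the two images
    """
    if disparity_px <= 0:
        raise ValueError("zero disparity: object at infinity or no overlap")
    return focal_px * baseline_m / disparity_px

# A feature shifted 50 px between two cameras 0.2 m apart, f = 1000 px:
print(depth_from_disparity(1000, 0.2, 50))  # -> 4.0 metres
```

Note the trade-off this implies: a small baseline (cameras close together, as on a car pillar) means small disparities, so depth estimates get noisy quickly as distance grows, which is one reason dedicated rangefinders like lidar and radar are commonly added.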