Technology
Which posts fit here?
Anything that is at least tangentially connected to technology, social media platforms, information technology, or tech policy.
Rules
1. English only
The title and associated content have to be in English.
2. Use original link
The post URL should be the original link to the article (even if paywalled), with archived copies left in the body. This helps avoid duplicate posts when cross-posting.
3. Respectful communication
All communication has to be respectful of differing opinions, viewpoints, and experiences.
4. Inclusivity
Everyone is welcome here regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, education, socio-economic status, nationality, personal appearance, race, caste, color, religion, or sexual identity and orientation.
5. Ad hominem attacks
Any kind of personal attacks are expressly forbidden. If you can't argue your position without attacking a person's character, you already lost the argument.
6. Off-topic tangents
Stay on topic. Keep it relevant.
7. Instance rules may apply
If something is not covered by the community rules but is against the lemmy.zip instance rules, those rules will be enforced.
Companion communities
[email protected]
[email protected]
Icon attribution | Banner attribution
i get that… but… vision is kinda shit. why not use all the tools at your disposal? like literally “x ray vision” is something that we see as a super power because it’d be so useful - radar gives us that
vision is an approximation of things like lidar. can you get a depth map out of vision? sure, but why not just measure it directly, and then you're not introducing error from your model literally hallucinating
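(As an aside, not from the thread itself: a minimal sketch of the point being made here, using the standard stereo-disparity and time-of-flight relations. All parameter values are made-up illustrative assumptions, not figures from any real sensor.)

```python
# Illustrative sketch: how range error scales for a vision-based (stereo)
# depth estimate versus a direct time-of-flight measurement.
# All parameter values are assumed/illustrative, not real sensor specs.

def stereo_depth_error(depth_m, focal_px=1000.0, baseline_m=0.3, disparity_err_px=0.5):
    """Stereo depth: Z = f*B/d, so a disparity error dd gives dZ ~ Z^2/(f*B) * dd.
    Error grows quadratically with distance."""
    return (depth_m ** 2) / (focal_px * baseline_m) * disparity_err_px

def tof_range_error(timing_err_s=2e-10):
    """Direct time-of-flight ranging: R = c*t/2, so dR ~ (c/2) * dt.
    Error is roughly constant with distance (ignoring other noise sources)."""
    c = 3.0e8  # speed of light, m/s
    return (c / 2) * timing_err_s

for d in (10, 30, 60, 100):
    print(f"{d:>4} m: stereo ±{stereo_depth_error(d):.2f} m, "
          f"direct ranging ±{tof_range_error():.2f} m")
```

Under these assumed numbers, the inferred depth error balloons at distance while the direct measurement stays roughly flat, which is the "why not just measure it directly" argument.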
kinda, but also the last 20% takes 80% of the effort… solving a lot of easy problems with more information will lead to a better short-term outcome, and then once you're getting good results you can push from 80% to 85%, then 85 to 90, etc across your whole sensor suite
are they though? you can buy hobbyist ultrasonic sensors for literally a couple of bucks, and lidar for a few hundred - sure, that's not the grade you'd use for cars, but at some point it's an economies-of-scale problem. they're not actually that expensive for a commodity "good enough" sensor package
correct - i understand them, but as an engineer it’s just wrong when you’re talking about one of the most dangerous activities that humanity collectively engages in (driving)
i think this could be the sticking point - i don’t think any extra sensors are needed, just like i don’t think seatbelts or air bags etc are needed… but… they’re helpful and improve the safety of people in and around the car
agree, and i totally think driverless is the way to go - humans are far worse drivers than machines are right now, even without any further improvement
… however, better isn’t perfect, and when it comes to safety simply ignoring tools because of some belief that eventually it’ll be fine is misguided at best, and negligent at worst
absolutely that too! their systems aren't "drives itself, no problemo", yet that's how they're marketing it
I don't really have much more to add, but just wanted to say I appreciated the conversation we had. This topic can often devolve and it's nice that it didn't.