Mildly Infuriating
Home to all things "Mildly Infuriating". Not infuriating, not enraging. Mildly infuriating. All posts should reflect that.
I want my day mildly ruined, not completely ruined. Please remember to refrain from reposting old content. If you repost something from Reddit, it is good practice to include a link and credit the OP. I'm not about stealing content!
It's just good to get something on this website for casual viewing while fresh original content is added over time.
Rules:
1. Be Respectful
Refrain from using harmful language pertaining to a protected characteristic, e.g. race, gender, sexuality, disability or religion.
Refrain from being argumentative when responding to posts or replies. Personal attacks are not welcome here.
...
2. No Illegal Content
Content that violates the law is not allowed. Any post/comment found to be in breach of the law will be removed and reported to the authorities if required.
That means:
-No promoting violence/threats against any individuals
-No CSA content or revenge porn
-No sharing private/personal information (doxxing)
...
3. No Spam
Posting the same post, no matter the intent, is against the rules.
-If you have posted content, please refrain from re-posting said content within this community.
-Do not spam posts with intent to harass, annoy, bully, advertise, scam or harm this community.
-No posting Scams/Advertisements/Phishing Links/IP Grabbers
-No Bots, Bots will be banned from the community.
...
4. No Porn/Explicit Content
-Do not post explicit content. Lemmy.World is not the instance for NSFW content.
-Do not post Gore or Shock Content.
...
5. No Inciting Harassment, Brigading, Doxxing or Witch Hunts
-Do not Brigade other Communities
-No calls to action against other communities/users within Lemmy or outside of Lemmy.
-No Witch Hunts against users/communities.
-No content that harasses members within or outside of the community.
...
6. NSFW should be behind NSFW tags.
-Content that is NSFW should be behind NSFW tags.
-Content that might be distressing should be kept behind NSFW tags.
...
7. Content should match the theme of this community.
-Content should be Mildly infuriating.
-At this time we permit content that is infuriating until an infuriating community is made available.
...
8. Reposting of Reddit content is permitted; try to credit the OP.
-Please consider crediting the OP when reposting content. The name of the user or a link to the original post is sufficient.
...
Even if we somehow manage to create a sentient AI, it will still have to rely on the information it receives from the various sensors in the car. If those sensors fail and it doesn't have the information it needs to do the job, it could still make a mistake due to missing or completely incorrect data; and if it manages to realise the data is erroneous, it could flatly refuse to work. I'd rather keep people in the loop as a final failsafe, just in case that should ever happen.
I see your point, but when should a sentient AI be able to decide for itself? What makes it different from a human at that point? We humans rely on sensors too to react to the world. We also make mistakes, even dangerous ones. I guess we just want to make sure this sentient AI is not working against us?
That's why you have layers of safety. Humans have natural instincts: usually we can tell if our eyesight is getting worse, and any mistake we make is most likely due to not noticing something or not reacting in time, something the AI should be able to compensate for.
The only time this is not true is when we have a medical episode, like a grand mal seizure or something. But everyone knows safety is always relative, and we mitigate that with redundancies. Sensors will have redundancies, and we ourselves are an additional redundancy. Heck, we could also put in sensors to monitor the occupants' vitals. There is once again a question of privacy, but really that's all we should need to protect against that.
A sentient AI, not counting any potential issues with its own sentience, would still have issues with suddenly failed or poorly maintained sensors. Usually when a sensor fails, it either zeros out, maxes out, or starts outputting completely erratic results.
If any of these failure modes looks the same as a normal result, it can be hard for the AI to tell. We can reconcile our sensors with our own human senses and tell if they have failed. A car only has its sensors to know what it needs to know, so if one fails, will it be able to tell? Sure, sensor redundancy helps, but there is still a small chance that all of the redundant sensors fail in a way the AI cannot detect, and in that case the driver should be there to take over.
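To make the failure modes concrete, here's a toy sketch of the three signatures described above: a reading that zeros out, rails at its maximum, or turns erratic. All the function names, thresholds and sample values are made up for illustration; real automotive plausibility checks are far more involved.

```python
def classify_reading(samples, max_value, noise_limit):
    """Classify a short window of raw samples from one sensor.

    samples: recent readings from the sensor
    max_value: the sensor's full-scale output
    noise_limit: largest sample-to-sample jump considered plausible
    """
    if all(s == 0 for s in samples):
        return "failed: zeroed out"
    if all(s >= max_value for s in samples):
        return "failed: maxed out"
    # Erratic output: consecutive readings jump more than any real
    # physical quantity plausibly could in one sample interval.
    jumps = [abs(b - a) for a, b in zip(samples, samples[1:])]
    if any(j > noise_limit for j in jumps):
        return "failed: erratic"
    # The hard case from the comment above: a failure whose output
    # *looks* like a normal signal passes every check here.
    return "plausible"

print(classify_reading([0, 0, 0, 0], max_value=100, noise_limit=5))     # failed: zeroed out
print(classify_reading([12, 13, 90, 11], max_value=100, noise_limit=5)) # failed: erratic
print(classify_reading([12, 13, 13, 12], max_value=100, noise_limit=5)) # plausible
```

Note that the "plausible" result is only that: plausible. A sensor stuck at a believable constant value would sail straight through, which is exactly the scenario where a human backup matters.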
Again I will refer to aircraft systems: even if it's a one-in-a-billion chance, there have been a few instances where this has happened and the autopilot nearly pitched the plane into the ground or ocean, and the plane was only saved by the pilots taking over. In one of those cases a faulty sensor reported that the angle of attack was pitched too steeply up, so the stick pusher mechanism tried to pitch the nose down to save the plane, when in fact the nose was already down. An autopilot, even an AI one, has no choice but to trust its sensors, as they are the only mechanism it has.
When it comes to a faulty redundant sensor, the AI also has to work out which sensor to trust, and if it picks the wrong one, well, you're fucked. It might not be able to work out which sensor is more trustworthy.
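The classic mitigation for the "which sensor do I trust?" problem is triple redundancy with majority voting: with three sensors, one wild reading gets outvoted by the other two. A minimal sketch (the readings below are invented for illustration):

```python
import statistics

def vote(readings):
    """Median-vote across redundant sensor readings.

    With three or more sensors, the median ignores a single wild
    value. With only two sensors there is no majority, which is
    exactly the "which one do you trust?" problem described above.
    """
    if len(readings) < 3:
        raise ValueError("need at least 3 sensors to outvote a single failure")
    return statistics.median(readings)

# Two healthy angle-of-attack sensors read about 2 degrees;
# a third fails high. The faulty sensor is outvoted.
print(vote([2.1, 2.0, 25.0]))  # 2.1
```

Of course, voting only helps against independent failures; if two of the three sensors fail the same way (shared icing, a common manufacturing defect), the majority itself is wrong, which is why the human remains the final layer.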
We keep ourselves safe with layered safety mechanisms and redundancy, including ourselves. So if any one of them fails, the others can hopefully catch the failure.
Wow, I appreciate the response. It must have taken a while to write.