this post was submitted on 29 Apr 2024

Futurology

[–] TommySoda 14 points 4 months ago (1 children)

You know the reason they want to do this is to make it hands-off so they can ignore it. And it will work until the AI starts confidently giving people false advice and misinformation about what they need help with. If a human makes a mistake, they can correct it. In a lot of cases with AI, if it makes a mistake it will just double down. And if you have no human element to fact-check it, it will just spiral downward while you ignore it. I just hope the AI starts telling people that canceling their service is the most effective way to solve their problems, and they don't even notice.

[–] [email protected] 6 points 4 months ago

Isn't there already a case where an LLM assistant quoted a wrong price and the person sued when the company tried to walk back the offer? Maybe it was an airline, I can't quite remember, but it held up in court and the company had to honor the price, as far as I remember.