this post was submitted on 31 Mar 2024
0 points (50.0% liked)
Machine Learning
1772 readers
11 users here now
founded 4 years ago
MODERATORS
you are viewing a single comment's thread
Because our society is not in any way, shape, or form ready for AI. One of the often-touted end uses of AI is replacing busywork jobs. That would be great if our nation (America) didn't tie livable income, healthcare, etc. to having a job. Without better social programs, and more importantly a society that wants better social programs, AI will end up being a net negative for humanity.
Also, LLMs are wrong about as often as they're right (if not more often wrong), but they're always confident. And in a world where people generally take the easy road, that means the errors never get caught or fact-checked, because people think AI is infallible. And if I have to proofread everything Copilot kicks out anyway, is it really saving me that much time? Especially because automating it increases the odds that people just trust the program and don't check the outputs, so some wonky-ass code or statements get published.