this post was submitted on 04 Aug 2023
5 points (85.7% liked)

c/Futurology


A community for studying and speculating about how far humans can advance in technology, civilization, and the humanities.


Please Observe Instance Rules:

  1. Do not violate any laws, third-party rights, and/or proprietary rights.
  2. Do not harass others or be abusive, threatening, and/or harmful.
  3. Do not be needlessly defamatory and/or intentionally misleading.
  4. Do not upload obscene and/or sensitive content without marking it as such.
  5. Do not promote racism, bigotry, hatred, harm, or violence of any kind.


From the Article:

The release of the advanced chatbot ChatGPT in 2022 got everyone talking about artificial intelligence (AI). Its sophisticated capabilities amplified concerns that AI was becoming so advanced that we would soon be unable to control it. Some experts and industry leaders even warned that the technology could lead to human extinction.

Other commentators, though, were not convinced. Noam Chomsky, a professor of linguistics, dismissed ChatGPT as “high-tech plagiarism”.

For years, I was relaxed about the prospect of AI’s impact on human existence and our environment. That’s because I always thought of it as a guide or adviser to humans. But the prospect of AIs taking decisions – exerting executive control – is another matter. And it’s one that is now being seriously entertained.

One of the key reasons we shouldn’t let AI have executive power is that it entirely lacks emotion, which is crucial for decision-making. Without emotion, empathy, and a moral compass, you have created the perfect psychopath.

1 comment
robbotlove 1 point 1 year ago

we have like 80 years of science fiction media to draw upon as to why giving AI autonomy is a really bad idea.