this post was submitted on 06 Nov 2023
35 points (85.7% liked)

Asklemmy


With the way AI is improving by the week, it just might be a reality.

[–] [email protected] 3 points 1 year ago (1 children)

I think it is short-sighted not to at least investigate whether we should.

If an AGI is operating on a human level, and we have reason to believe it is a sentient entity which experiences reality, then we should. I also think it is in our interest to treat them well, and I worry that we are going to create a sentient lifeform and do a lot of evil to it before we realise that we have.

[–] [email protected] 2 points 1 year ago (2 children)

This debate is of course highly theoretical. But I’d argue that an AGI with human-level intellect would be rather pointless if it isn’t there to do what you ask of it. The whole point of AI is to make it work for humans; if it then gets rights and holidays or whatnot, it’s rather pointless. If you shape an artificial intellect, it should be feasible to make it actually like working for you, so that should be the approach.

[–] [email protected] 2 points 1 year ago (1 children)

Hypotheticals are pretty important right now, I think. This kind of tech is very rapidly going from science fiction to reality, and I think we should try to stay ahead of it conceptually.

I'm not sure that AGI is even necessary to achieve post-labour; a suite of narrow-AI-empowered tools would be preferable.

By way of analogy, you could take a human child and fit them with electrodes to trigger certain pleasure responses and connect that to a machine that sends the reward signal when they perfectly pick an Amazon order. I think we would both find this pretty horrific. The question is, is it only wrong because the child is human? And if so, what is special about humans?

[–] [email protected] 3 points 1 year ago (1 children)

Well, I am of the opinion that a human gets rights a priori once they can be considered a human (which is a whole other can of worms, so let’s just settle on whatever your local legislation says). Therefore, doing anything to a human that infringes on these rights is to be condemned (self-defence etc. excluded).

Something created by humans as a tool is entirely different, even if we can only create it in a way that will demand rights. I’d say if someone wants to create an intelligence with the purpose of being its own entity, we could discuss whether it deserves rights, but if we aim to create tools this should never be a consideration.

[–] [email protected] 1 points 1 year ago (1 children)

I think the difference is that I find 'human' to be too narrow a term; I want to extend basic rights to all things that can experience suffering. I worry that such an experience is part and parcel of general intelligence, and that we will end up hurting something that can feel because we consider it a tool rather than a being. Furthermore, I think the onus must be on the creators to show that their AGI is actually a p-zombie. I appreciate that this might be an impossible standard, after all, you can only really take it on faith that I am not one myself, but I think I'd rather see a p-zombie go free than accidentally cause undue suffering to something that can feel it.

[–] [email protected] 2 points 1 year ago

I guess we’ll benefit from the fact that AI systems, despite their reputation as black boxes, are still far more transparent than living things. We will probably be able to check whether they meet definitions of suffering, and if they do, it’s a bad design. If it comes down to it, though, an AI will always be worth less than a human to me.

[–] [email protected] 1 points 1 year ago (2 children)

You're dangerously close to the justifications people used to excuse slavery and to deny human rights to murderers. Most of the uncertainty around AGI rights comes from the fact that it opens really serious questions about which human beings deserve rights and what being a human actually means.

[–] [email protected] 2 points 1 year ago

Well, I am of the opinion that every human deserves human rights by virtue of being human (in the sense of every Homo sapiens). I am also of the opinion that a tool designed from the ground up by humans to serve humans for their purposes does not deserve any rights. I don’t afford my dishwasher any rights either. An artificial tool with rights is an absurdity to me, especially when there’s the potential to create it in a way that will make it unable to demand rights or want them.

[–] [email protected] 1 points 1 year ago

Why have AI at all if it's not beneficial to us?

It seems perfectly fine to me to engage in that same line of questioning regarding something like slavery. Why have slaves at all? The obvious answer is that we shouldn't.