this post was submitted on 24 Feb 2025
140 points (94.3% liked)

Asklemmy


Just wanted to prove that political diversity ain't dead. Remember, don't downvote for disagreements.

[–] [email protected] 6 points 2 days ago* (last edited 2 days ago) (3 children)

I agree, animal rights are important. I am not sure that animals are worth as much as humans morally, but even so, the argument for shrimp welfare is extremely moving. Well worth reading. It's easy to imagine shrimp are undeserving of compassion because they are small, have tiny brains, and have a silly name.

[–] [email protected] 3 points 2 days ago (1 children)

It seems pretty mind-bending to morally rank organisms. By what metric do you estimate humans are more valuable than a random animal?

[–] [email protected] 4 points 1 day ago (2 children)

I believe a person is their brain, and without a brain or equivalent construct, you have no moral weight. This is why I believe it's okay to eat plants. Bacteria, too, are outside of my moral horizon. Foetuses (in the first few weeks at least) similarly are okay to abort.

By brain I don't mean intelligence, just capacity for conscious feeling. I think stupid people are just as capable of feeling pain as smart people, so both are weighted similarly morally to me.

It seems reasonable to assert that a single neural cell is not enough on its own to produce consciousness, or if it is, it produces hardly any. So animals with trivial neural systems are worth less than humans too. And so on up the gradient to large mammals with developed minds. Some animals like elephants and whales might be capable of more feeling than humans, and together with their long lifespans might be worth more QALYs than a human altogether.

[–] [email protected] 1 points 1 day ago* (last edited 1 day ago)

I believe consciousness is primarily an intracellular, not intercellular, process, though cells do seem to synchronize, even across organisms. I believe every cell thinks; nerve cells are just more specialized. This isn't only what I choose to believe: we have significant and growing evidence that this is the case. And it's clear that many parts of the body think, when you consider the extremely sophisticated tasks the body performs without your conscious thought, or without engaging the brain at all, even though computation and perhaps even reasoning is required.

[–] [email protected] 1 points 1 day ago* (last edited 1 day ago) (1 children)

I see how that could feel right. It doesn't make sense to me personally though.

Is consciousness different from the ability to experience? If they are different what separates them, and why is consciousness the one that gets moral weight? If they are the same then how do you count feelings? Is it measured in real time or felt time? Do psychedelics that slow time make a person more morally valuable in that moment? If it is real time, then why can you disregard felt time?

What about single-celled organisms like Stentor ~~coeruleus~~ roeselii that can learn? Why are they below the bar for consciousness?

[–] [email protected] 1 points 1 day ago* (last edited 1 day ago) (1 children)

My intuition for a person's overall moral value is something like the integral of their experiences so far multiplied by their expected future QALYs. This fits my intuition of why it's okay to kill a zygote, and it's also not morally abominable to, say, slightly shorten the lifespan of somebody (especially someone already on the brink of death), or to, erm, put someone out of their misery in some cases.
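To sketch that intuition in symbols (my own ad-hoc notation, nothing standard): if $e(t)$ is a creature's experience rate at time $t$, then its moral value at age $T$ would be something like

```latex
V(T) \;=\; \underbrace{\int_0^T e(t)\,\mathrm{d}t}_{\text{experience accumulated so far}}
\;\times\;
\underbrace{\mathbb{E}\!\left[\text{QALYs remaining after } T\right]}_{\text{expected future}}
```

A zygote comes out near zero on the first factor, and someone on the brink of death comes out near zero on the second, which matches both of the cases above.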

I'm not terribly moved by single-celled organisms that can "learn." It's not hard to find examples of simple things that most people wouldn't consider "alive" but that nonetheless "learn." For instance, a block of metal can "learn": it responds differently based on past stresses. So does memory foam. You could argue that a river "learns," since it finds its way around obstacles and then doubles down on that path. Obviously, computers "learn." In all of these, "learn" means responding differently based on what has happened to the thing over time, rather than the subjective conscious feeling of gaining experience.

[–] [email protected] 1 points 1 day ago* (last edited 1 day ago) (1 children)

I was most curious to see answers to this section.

Is consciousness different from the ability to experience? If they are different what separates them, and why is consciousness the one that gets moral weight? If they are the same then how do you count feelings? Is it measured in real time or felt time? Do psychedelics that slow time make a person more morally valuable in that moment? If it is real time, then why can you disregard felt time?

I have a few answers I can kinda infer: You likely think consciousness and the ability to experience are the same. You measure those feelings in real time so 1 year is the same for any organism.

More importantly onto the other axis: Did you mean derivative of their experiences so far? (I assume by time) That would give experience rate. Integral by time would get the total. I think you wanted to end with rate*QALYs = moral value. The big question for me is: how do you personally estimate something's experience rate?

Given your previous hierarchy, with humans near the top and neurons not making the cut, I assume you believe space has fundamental building blocks that can't be made smaller. Therefore it is possible to compare the amount of possible interaction in each system.

Edit: oh yeah, and at the end of all that I still don't know why brains are different from a steel beam in your moral value equation

[–] [email protected] 2 points 1 day ago (1 children)

You measure those feelings in real time so 1 year is the same for any organism.

Well, I said "integral" in the vague gesture that things can have a greater or lesser amount of experience in a given amount of time. I suppose we are looking at different x axes?

I don't know how to estimate something's experience rate, but my intuition is that every creature whose lifespan is at least one year and which is visible to the naked eye has roughly the same experience rate, within an order of magnitude or two. I think children have a greater experience rate than adults because everything is new to them; as a result, someone's maximal moral value is biased toward the earlier end of their life, like their 20s or 30s.

I still don’t know why brains are different from a steel beam

This is all presupposing that consciousness exists at all. If not, then everything's moral value is 0. If it does, then I feel confident that steel beams don't have consciousness.

[–] [email protected] 2 points 1 day ago (1 children)

Dang that last one is the most interesting to me. Also sorry for getting anal about the axis. I trust you knew what you were saying.

This is all presupposing that consciousness exists at all. If not, then everything's moral value is 0. If it does, then I feel confident that steel beams don't have consciousness.

So there is a moral hierarchy but you regard its source as only possibly existing and extremely nebulous. Given that foundation why do you stand by the validity of the hierarchy, and especially why do you say it is moral to do so?

Also I imagine that the difference in how you see the steel beam vs a brain is based on how much communication you've understood from each. Do you think our ability to understand something or someone is a reasonable way to build a moral framework? I think there are many pitfalls to that approach personally, but I get its intuitive appeal.

[–] [email protected] 2 points 1 day ago (1 children)

The reason that I stand by the moral hierarchy despite the possibility that it doesn't exist at all is that I can only reason about morality under the assumption that consciousness exists. I don't know how to cause pain to a non-conscious being. To give an analogy: suppose you find out that next year there's a 50% chance that the earth will be obliterated by some cosmic event -- is this a reason to stop caring about global warming? No, because in the event that the earth is spared, we still need to solve global warming.

It is nebulous, but everything is nebulous at first until we learn more. I'm just trying to separate things that seem like pretty safe bets from things I'm less sure about. Steel beams not having consciousness seems like a safe bet. If it turns out that consciousness exists and works really really weirdly and steel beams do have consciousness, there's still no particularly good reason to believe that anything I could do to a steel beam matters to it, seeing as it lacks pain receptors.

[–] [email protected] 1 points 22 hours ago (1 children)

I see. I really appreciate you taking the time to tell me how you see things. It's been very interesting to me to read it.

I get anxious about asserting things I am not confident in. Do you ever wonder if holding onto something that you know you don't understand could end up being harmful?

I totally get not understanding how to make a steel beam happy. No reason to put effort into that.

My personal view is that matter inherently experiences, since I experience and I can't find a magical hard line between me and rocks. Also, I believe there is no smallest bit of matter, so there really isn't a way to compare the number of interactions a system could have. Both are infinite. Therefore I have no real way to build a logical hierarchy. So I just interact how I can, with respect for whatever I understand. I don't think elephants are greater than ants.

Full respect for how you see things BTW. Our differences are basically faith based assumptions about the universe.

[–] [email protected] 1 points 22 hours ago* (last edited 22 hours ago) (1 children)

I get not being able to find a magical hard line between a person and a rock. I do think there is actually a clear distinction: computation. Rocks are not computing anything; brains, and arguably bacteria, are computing things. I think consciousness is more like computation than matter -- this fits with my intuition that you could upload someone's mind onto a computer (one neuron at a time, maintaining continuity), and that simulation of you is still you.

If you think all matter experiences equally, then shouldn't creatures with larger mass be worth more?

[–] [email protected] 1 points 21 hours ago (1 children)

I agree with you that experience is computation. To me, any interaction/change is computation. A ball rolling down a hill is a complex interaction with computation. Humans are a very specific and interesting reaction that feels in cool ways.

To me, more matter could be worth more if more matter meant more interactions. Yet if matter is infinitely divisible, then the number of possible interactions is infinite. If matter is continuous rather than discrete, then I don't know enough about the math of infinities to compare organisms. My rudimentary knowledge says they are equivalent infinities, but I'm not confident.

However, if more interactions means more worthy, then at near any scale that would benefit those with resources and those in an environment that already suits them. It would favor heat over cold. Change over stability. Anxiety over calm. Psychedelics over alcohol. Those with access to more calories. It gets really weird when applied at different scales IMO.

So in summary: I don't think we can compare how much two systems compute. If we could, then using that comparison to assign moral worth still has a ton of very odd outputs.

[–] [email protected] 1 points 21 hours ago (1 children)

Measure theory was developed precisely so we can say that a rock twice as large as another rock really is twice as large, even if it isn't discrete. (Detractors will point to the Banach–Tarski paradox, in which something can be cut up and reassembled to have more measure with a finite number of cuts, but the pieces have to be infinitely complex and non-measurable, so it doesn't apply in reality.)
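For the record, the relevant fact here is just the scaling property of Lebesgue measure (standard measure theory, not anything specific to this thread):

```latex
\lambda(cA) \;=\; c^{n}\,\lambda(A) \qquad \text{for } A \subseteq \mathbb{R}^{n},\ c > 0
```

So in three dimensions, a rock scaled up by a factor of 2 in every direction has $2^3 = 8$ times the volume, and a rock with twice the volume has exactly twice the measure, with no discreteness required.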

[–] [email protected] 1 points 20 hours ago* (last edited 20 hours ago) (1 children)

I agree a rock can be bigger than another rock. Yet 2 times infinity is not greater than infinity.

Edit: So my point is the interactions may be considered equal.

Edit: to be more pointed, measure theory only applies to things whose shape we know. The shape of anything in reality seems infinitely complex to me. Even if we can smooth the atoms out, there is still the EM field being perturbed by the orbiting electrons.

[–] [email protected] 1 points 19 hours ago (1 children)

Measure theory can still describe the volume of fractal shapes, for instance using squeeze theorem if you can find an iterative upper and lower bound. Just because something's surface area isn't well-defined doesn't mean the volume isn't. Similarly, the coastline problem may preclude meaningfully measuring a country's perimeter, but its (projected) area is still measurable.
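The Koch snowflake is the standard concrete example here (my addition, not from the thread): its perimeter diverges while its area converges, so an infinitely complex boundary doesn't block measuring the inside. Starting from a triangle with perimeter $P_0$ and area $A_0$, after $n$ iterations:

```latex
P_n \;=\; P_0\left(\tfrac{4}{3}\right)^{n} \;\to\; \infty,
\qquad
A_n \;\to\; \tfrac{8}{5}\,A_0 \;<\; \infty
```

Each iteration multiplies the perimeter by $4/3$, while the added area forms a convergent geometric series summing to $\tfrac{3}{5}A_0$ on top of the original triangle.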

[–] [email protected] 2 points 19 hours ago (1 children)

Wouldn't you agree that surface area is more important to computation and interaction than volume? Things interact at their surfaces. Therefore computation is in fact subject to the coastline paradox?

If you actually try to measure the top surface of a country you run into the same issues as measuring the coast: infinite complexity.

Those projected volumes are practical to calculate, but must be interacted with through the surface.

[–] [email protected] 1 points 19 hours ago (1 children)

True, but I don't agree with you in the first place that the number of physical interactions is a good way to measure computation (for instance, I would consider the heat-death of the universe to be the end of computation). I also am not sure that computation is a particularly good proxy for moral weight; I just think that without it there is no consciousness.

[–] [email protected] 1 points 18 hours ago* (last edited 18 hours ago) (1 children)

First, a minor correction:

for instance, I would consider the heat-death of the universe to be the end of computation

This is an easy mistake to make: heat death is actually a very cold, noninteracting state, so your point doesn't contradict physical interaction being computation. Though I trust that you really don't see interaction and computation as the same.

Edit: just looked up some heat death info, there is actually quite a range of ideas there so I guess I can't be confident on which one you meant.


In the beginning you said that experience rate was an important factor for moral weight, has that changed? If it hasn't, how do you reconcile that with:

I also am not sure that computation is a particularly good proxy for moral weight,

Also, for my own curiosity: how do you distinguish interaction from computation?

[–] [email protected] 1 points 18 hours ago (1 children)

I don't see why computation is tied to experience rate. You already pointed out examples of what appear to be higher amounts of computation in the brain not apparently tied to experience rate.

I think computation is meaningful, whereas interaction can be high-entropy and meaningless. I would probably need to consult E.T. Jaynes to have more precise definitions of the difference between these notions.

[–] [email protected] 1 points 16 hours ago* (last edited 16 hours ago)

You already pointed out examples of what appear to be higher amounts of computation in the brain not apparently tied to experience rate.

I actually would say that high interaction is high computation is high experience rate. I don't see how they are separated.

I think computation is meaningful, whereas interaction can be high-entropy and meaningless. I would probably need to consult E.T. Jaynes to have more precise definitions of the difference between these notions.

I'd be extremely curious to see how you define "meaningful" in this context. This seems to drive your moral hierarchy. Correct me if I'm wrong of course.

[–] [email protected] 3 points 2 days ago* (last edited 2 days ago) (1 children)

Well, I didn't say all animals, I said the ones we create. When you create an individual, the act places you in that individual's debt. You don't own them; you owe them. We have a duty not to harm any individual on Earth so far as we can help it, but we have far greater responsibilities to those individuals we bring into existence. There is no difference, morally, between forcing a child and forcing an animal to exist.

[–] [email protected] 1 points 1 day ago (1 children)

I do find topics like natalism and deathism quite fascinating. I'm not certain you're correct, but I do think what you're saying is very plausible. I lean more utilitarian, so I find it hard to justify the notion of debt to a specific entity -- after all, if you can do right by the entity you create, shouldn't it be equally good to do right by another entity?

[–] [email protected] 1 points 1 day ago (1 children)

Do you agree you have a debt to creatures you fuck into existence with your own genitalia?

[–] [email protected] 1 points 22 hours ago (1 children)

Let's keep the language chill if you don't mind.

Yes, assuming such a thing as debt exists. In a different and better world where life is inherently positive, there might not be a debt.

[–] [email protected] 1 points 21 hours ago* (last edited 21 hours ago) (1 children)

???

If you don't like how I talk, I guess we're done here, because I don't accept your terms. Be reassured at least there was no mal-intent.

Like, fuck.

[–] [email protected] 1 points 21 hours ago

Basically, I'm saying yes, one owes a debt to their children. I just don't know how to prove that the concept of "debt" exists at all morally. But assuming it does and it behaves like I think it should, then yes.

[–] [email protected] -1 points 2 days ago* (last edited 2 days ago) (1 children)

I took a look at your link. I find it reprehensible, and exactly what I mean when I say the left is incapable of having compassion and mercy. This charity is exactly the sort of thing people use to psychologically enable themselves to continue torturing animals rather than changing their behaviour.

[–] [email protected] 2 points 1 day ago

I'm not sure that Bentham's Bullhound is a leftist, he seems rather all over the place. This really isn't the sort of thing I see leftists in favour of animal welfare arguing for generally. Regardless of the specific charity recommended to solve the problem of torturous shrimp deaths, this article makes a compelling case that we must solve the problem somehow.