bunchberry

joined 4 months ago
[–] bunchberry 2 points 2 months ago* (last edited 2 months ago)

I am factually correct, I am not here to "debate," I am telling you how the theory works. When two systems interact such that they become statistically correlated with one another and knowing the state of one tells you the state of the other, it is no longer valid to assign a state vector to the individual subsystems that took part in the interaction; you have to assign it to the system as a whole. When you do a partial trace to get a reduced density matrix for either of the two subsystems on its own, if they are perfectly entangled, then you end up with a density matrix without coherence terms and thus without interference effects.

This is absolutely entanglement, this is what entanglement is. I am not misunderstanding what entanglement is, if you think what I have described here is not entanglement but a superposition of states then you don't know what a superposition of states is. Yes, an entangled state would be in a superposition of states, but it would be a superposition of states which can only be applied to both correlated systems together and not to the individual subsystems.

Let's say R = 1/sqrt(2) and Alice sends Bob a qubit. If the qubit has a probability of 1 of being the value 1 and Alice applies the Hadamard gate, it changes to a probability amplitude of R for being 0 and -R for being 1. In this state, if Bob were to apply a second Hadamard gate, it would undo the first Hadamard gate and the qubit would again have a probability of 1 of being the value 1, due to interference effects.

However, if an eavesdropper, let's call them Eve, measures the qubit in transit, then because R and -R are equal distances from the origin, it has an equal chance of coming out 0 or 1. Let's say it's 1. From their point of view, they would then update their probability distribution to a probability of 1 of being the value 1 and send the qubit off to Bob. When Bob applies the second Hadamard gate, the qubit then has a probability amplitude of R for being 0 and -R for being 1, and thus what should've been deterministic is now random noise for Bob.
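
If you want to see that arithmetic concretely, here is a minimal numpy sketch (the variable names are my own; it just applies the Hadamard matrix to the amplitude vectors described above):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
one = np.array([0, 1])                         # amplitude 1 for the value 1

# No eavesdropper: Alice applies H, Bob applies H again, interference restores |1>
print(np.abs(H @ (H @ one))**2)    # -> [0. 1.]  deterministic

# Eve measures in transit, gets 1, and forwards a plain |1> to Bob
print(np.abs(H @ one)**2)          # -> [0.5 0.5]  Bob's result is now random noise
```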

Yet, this description only works from Eve's point of view. From Alice and Bob's point of view, neither of them measured the particle in transit, so when Bob receives it, it is still probabilistic with an equal chance of being 0 and 1. So why does Bob still predict that interference effects will be lost if it is still probabilistic for him?

Because when Eve interacts with the qubit, from Alice and Bob's perspective, it is no longer valid to assign a state vector to the qubit on its own. Eve and the qubit become correlated with one another. For Eve to know the particle's state, there has to be some correlation between something in Eve's brain (or, more directly, her measuring device) and the state of the particle. They are thus entangled with one another and Alice and Bob would have to assign the state vector to Eve and the qubit taken together and not to the individual parts.

Eve and the qubit taken together would have a probability amplitude of R for the qubit being 0 and Eve knowing the qubit is 0, and an amplitude of -R for the qubit being 1 and Eve knowing the qubit is 1. There are still interference effects, but only for the whole system taken together. Yet, Bob does not receive Eve and the qubit taken together. He receives only the qubit, so this probability distribution is no longer applicable to the qubit on its own.

He instead has to do a partial trace to trace out (ignore) Eve from the equation to know how his qubit alone would behave. When he does this, he finds that the probability distribution has changed to 0.5 for 0 and 0.5 for 1. In the density matrix representation, you will see that the density matrix has all zeroes for the coherences. This is a classical probability distribution, something that cannot exhibit interference effects.
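
For anyone who wants to check the partial trace claim numerically, here is a small numpy sketch, assuming the entangled state R|0>|Eve saw 0> - R|1>|Eve saw 1> from Alice and Bob's perspective (the bookkeeping and names are mine):

```python
import numpy as np

R = 1 / np.sqrt(2)
# Joint state of (qubit, Eve) from Alice and Bob's perspective:
# R|0>|Eve saw 0>  -  R|1>|Eve saw 1>, written in the basis |00>, |01>, |10>, |11>
psi = np.array([R, 0, 0, -R], dtype=complex)

rho = np.outer(psi, psi.conj())      # density matrix of the pair
rho = rho.reshape(2, 2, 2, 2)        # indices: qubit, Eve, qubit', Eve'

rho_qubit = np.einsum('iaja->ij', rho)   # partial trace over Eve

print(np.round(rho_qubit.real, 3))
# [[0.5 0. ]
#  [0.  0.5]]   <- off-diagonal coherences are zero: no interference left
```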

Bob simply cannot explain why his qubit loses its interference effects by Eve measuring it without Bob taking into account entanglement, at least within the framework of quantum theory. That is just how the theory works. The explanation from Eve's perspective simply does not work for Bob in quantum mechanics. Reducing the state vector simultaneously between two different perspectives is known as an objective collapse model and makes different statistical predictions than quantum mechanics. It would not merely be an alternative interpretation but an alternative theory.

Eve explains the loss of coherence by her reducing the state vector after seeing a definite outcome for the qubit. Bob explains the loss of coherence by Eve becoming entangled with the qubit, which leads to decoherence: doing a partial trace to trace out (ignore) Eve gives a reduced density matrix for the qubit whose coherence terms are zero.

[–] bunchberry 3 points 2 months ago* (last edited 2 months ago)

Schrödinger was not "rejecting" quantum mechanics, he was rejecting people treating things described in a superposition of states as literally existing in "two places at once." And Schrödinger's argument still holds up perfectly. What you are doing is equating a very dubious philosophical take on quantum mechanics with quantum mechanics itself, as if anyone who does not adhere to this dubious philosophical take is "denying quantum mechanics." But this was not what Schrödinger was doing at all.

What you say here is a popular opinion, but it just doesn't make any sense if you apply any scrutiny to it, which is what Schrödinger was trying to show. Quantum mechanics is a statistical theory where probability amplitudes are complex-valued, so things can have a -100% chance of occurring, or even a 100i% chance of occurring. You interpret what these amplitudes mean in physical reality based on how far they are from zero (the further from zero, the more probable), but the negative and imaginary parts allow for things to cancel out in ways that would not occur in normal probability theory. These cancellations are interference effects, and interference effects are the hallmark of quantum mechanics.
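
As a tiny worked example of that cancellation, here is a sketch with my own toy setup: amplitudes of ±1/sqrt(2) from two successive Hadamard-like 50/50 steps, compared against ordinary probability theory.

```python
import numpy as np

R = 1 / np.sqrt(2)

# Ordinary probability theory: two 50/50 branchings stay 50/50
classical_prob_of_1 = 0.5 * 0.5 + 0.5 * 0.5      # 0.5

# Quantum: the two paths to the outcome 1 carry amplitudes +R*R and -R*R,
# which cancel before you ever square them into a probability
amplitude_of_1 = R * R + (-R) * R                # 0.0
quantum_prob_of_1 = abs(amplitude_of_1) ** 2     # 0.0 -- destructive interference

print(classical_prob_of_1, quantum_prob_of_1)
```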

Because quantum probabilities have this difference, some people have wondered if maybe they are not probabilities at all but describe some sort of physical entity. If you believe this, then when you describe a particle as having a 50% probability of being here and a 50% probability of being there, then this is not just a statistical prediction but there must be some sort of "smeared out" entity that is both here and there simultaneously. Schrödinger showed that believing this leads to nonsense as you could trivially set up a chain reaction that scales up the effect of a single particle in a superposition of states to eventually affect a big system, forcing you to describe the big system, like a cat, in a superposition of states. If you believe particles really are "smeared out" here and there simultaneously, then you have to believe cats can be both "smeared out" here and there simultaneously.

Ironically, it was Schrödinger himself who spawned this way of thinking. Quantum mechanics was originally formulated without superposition in what is known as matrix mechanics. Matrix mechanics is complete, meaning it makes all the same predictions as traditional quantum mechanics; it is a mathematically equivalent theory. Yet, what is different about it is that it does not include any sort of continuous evolution of a quantum state. It only describes discrete observables and how they change when they undergo discrete interactions.

Schrödinger did not like this on philosophical grounds due to the lack of continuity; there were discrete "gaps" between interactions. He criticized it, saying "I do not believe that the electron hops about like a flea," and came up with his famous wave equation as a replacement. The wave equation describes a list of probability amplitudes evolving like a wave in between interactions, and it makes the same predictions as matrix mechanics. People then use the wave equation to argue that the particle literally becomes smeared out like a wave in between interactions.

However, Schrödinger later abandoned this point of view because it leads to nonsense. He pointed out in one of his books that while his wave equation gets rid of the gaps in between interactions, it introduces a new gap between the wave and the particle: the moment you measure the wave, it "jumps" into being a particle randomly, which is sometimes called the "collapse of the wave function." This made even less sense, because suddenly there is a special role for measurement. Take the cat example. Why doesn't the cat's observation of this wave cause it to "collapse," while the person's observation does? There is no special role for "measurement" in quantum mechanics, so it is unclear how to even answer this within the framework of quantum mechanics.

Schrödinger was thus arguing to go back to the position of treating quantum mechanics as a theory of discrete interactions. There are just "gaps" between interactions we cannot fill. The probability distribution does not represent a literal physical entity, it is just a predictive tool, a list of probabilities assigned to predict the outcome of an experiment. If we say a particle has a 50% chance of being here or a 50% chance of being there, it is just a prediction of where it will be if we were to measure it and shouldn't be interpreted as the particle being literally smeared out between here and there at the same time.

There is no reason you have to actually believe particles can be smeared out between here and there at the same time. This is a philosophical interpretation which, if you believe it, comes with an enormous number of problems, such as the one Schrödinger pointed out, which ultimately gets to the heart of the measurement problem, but there are even larger problems. Wigner also pointed out a paradox whereby two observers would assign different probability distributions to the same system. If these are merely probabilities, this isn't a problem: if I flip a coin, look at the outcome, and see heads, I would say it has a 100% chance of being heads because I saw it as heads, but if I covered it up so you did not see it, you would assign a 50% probability to it being heads or tails. If you believe the wave function represents a physical entity, however, then you can set up something similar in quantum mechanics whereby two different observers would describe two different waves, and so the physical shape of the wave would have to differ based on the observer.

There are a lot more problems as well. A probability distribution scales up in terms of its dimensions exponentially. With a single bit, there are two possible outcomes, 0 and 1. With two bits, there are four possible outcomes, 00, 01, 10, and 11. With three bits, eight outcomes. With four bits, sixteen outcomes. If we assign a probability amplitude to each possible outcome, then the number of degrees of freedom grows exponentially the more bits we have under consideration.
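
A quick sketch of that exponential growth, just building a three-bit amplitude list out of single-bit ones with Kronecker products (a toy of my own, for illustration only):

```python
import numpy as np

plus = np.array([1, 1]) / np.sqrt(2)   # one bit's worth of amplitudes: 2 entries

state = plus
for _ in range(2):                     # tack on two more bits
    state = np.kron(state, plus)

print(state.size)                      # 8 amplitudes for 3 bits; 2**n in general
```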

This is also true in quantum mechanics for the wave function, since it is again basically a list of probability amplitudes. If we treat the wave function as representing a physical wave, then this wave would not exist in our four-dimensional spacetime, but instead in an infinite-dimensional space known as a Hilbert space. If you want to believe the universe is actually physically made up of infinite-dimensional waves, have at ya. But personally, I find it much easier to just treat a probability distribution as, well, a probability distribution.

[–] bunchberry 2 points 2 months ago* (last edited 2 months ago) (1 children)

What is it then? If you say it's a wave, well, that wave lives in Hilbert space, which is infinite-dimensional, not in spacetime, which is four-dimensional, so what does it mean to say the wave is "going through" the slit if it doesn't exist in spacetime? Personally, I think all the confusion around QM stems from trying to objectify a probability distribution, which is what people do when they claim it turns into a literal wave.

To be honest, I think it's cheating. People are used to physics being continuous, but in quantum mechanics it is discrete. Schrödinger showed that if you take any operator and compute a derivative, you can "fill in the gaps" in between interactions, but this is purely metaphysical. You never see these "in between" gaps. It's just a nice little mathematical trick and nothing more. Even Schrödinger later abandoned this idea and admitted that trying to fill in the gaps between interactions just leads to confusion, in his books Nature and the Greeks and Science and Humanism.

What's even more problematic about this viewpoint is that Schrödinger's wave equation is the result of a very particular mathematical formalism; it is not actually needed to make correct predictions. Heisenberg had developed what is known as matrix mechanics, whereby you evolve the observables themselves rather than the state vector. Every time there is an interaction, you apply a discrete change to the observables. You always get the right statistical predictions, and yet you don't need the wave function at all.
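
Here is a minimal numpy sketch of that equivalence for a single discrete step (a Hadamard gate, my own choice), computing the same expectation value once by evolving the state and once by evolving the observable:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])               # the observable we measure
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2) # one discrete interaction (Hadamard)

psi0 = np.array([1, 0])                      # start in |0>

# Schrödinger picture: evolve the state, keep the observable fixed
psi = H @ psi0
schrodinger = psi @ X @ psi                  # expectation value <psi|X|psi>

# Heisenberg / matrix-mechanics picture: evolve the observable, keep the state fixed
X_after = H.T @ X @ H                        # H is real and symmetric, so H.T is its adjoint
heisenberg = psi0 @ X_after @ psi0

print(round(schrodinger, 12), round(heisenberg, 12))   # 1.0 1.0, same prediction
```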

The wave function is purely a result of a particular mathematical formalism and there is no reason to assign it ontological reality. Even then, if you have ever worked with quantum mechanics, it is quite apparent that the wave function is just a function for picking probability amplitudes from a state vector, and the state vector is merely a list of, well, probability amplitudes. Quantum mechanics is probabilistic so we assign things a list of probabilities. Treating a list of probabilities as if it has ontological existence doesn't even make any sense, and it baffles me that it is so popular for people to do so.

This is why Hilbert space is infinite-dimensional. If I have a single qubit, there are two possible outcomes, 0 and 1. If I have two qubits, there are four possible outcomes, 00, 01, 10, and 11. If I have three qubits, there are eight possible outcomes, 000, 001, 010, 011, 100, 101, 110, and 111. If I assigned a probability amplitude to each event occurring, then the degrees of freedom would grow exponentially as I include more qubits in my system. The number of degrees of freedom is unbounded.

This is exactly how Hilbert space works. Interpreting this as a physical infinite-dimensional space through which waves really propagate just makes absolutely no sense!

[–] bunchberry 1 points 2 months ago* (last edited 2 months ago)

It is weird that you start by criticizing the idea that our physical theories are descriptions of reality and then end by criticizing the Copenhagen interpretation, since that is the Copenhagen interpretation, which says that physics is not about describing nature but about describing what we can say about nature. It doesn't make claims about underlying ontological reality; it specifically says we cannot make those claims from physics, and thus treats the maths in a more utilitarian fashion.

The only interpretation of quantum mechanics that actually tries to interpret it at face value as a theory of the natural world is relational quantum mechanics, which isn't that popular, as most people dislike the notion of reality being relative all the way down. Almost all philosophers in academia define objective reality in terms of something being absolute and point-of-view independent, so most academics struggle to comprehend what it even means to say that reality is relative all the way down, and thus interpreting quantum mechanics as a theory of nature at face value is actually very unpopular.

All other interpretations either: (1) treat quantum mechanics as incomplete and therefore something needs to be added to it in order to complete it, such as hidden variables in the case of pilot wave theory or superdeterminism, or a universal psi with some underlying mathematics from which to derive the Born rule in the Many Worlds Interpretation, or (2) avoid saying anything about physical reality at all, such as Copenhagen or QBism.

Since you talk about "free will," I suppose you are talking about superdeterminism? Superdeterminism works by pointing out that at the Big Bang, everything was localized to a single place, and thus locally causally connected, so all apparent nonlocality could be explained if the correlations between things were all established at the Big Bang. The problem with this point of view, however, is that it only works if you know the initial configuration of all particles in the universe and have a supercomputer powerful enough to trace them forward to the present day.

Without that, you cannot actually predict any of these correlations ahead of time. You have to just assume that the particles "know" how to correlate with one another at a distance even though you cannot account for how this happens. Mathematically, this would be the same as a nonlocal hidden variable theory. While you might have a nice underlying philosophical story to go along with it as to how it isn't truly nonlocal, the maths would still run into contradictions with special relativity. You would find it difficult to construct the maths in such a way that the hidden variables would be Lorentz invariant.

Superdeterministic models thus struggle to ever get off the ground. They all exist only as toy models. None of them can reproduce all the predictions of quantum field theory, which requires more than just accounting for quantum mechanics; it requires doing so in a way that is also compatible with special relativity.

[–] bunchberry 2 points 2 months ago (1 children)

You can break elliptic curve cryptography with quantum computers. Post-quantum cryptography is instead largely based on hard lattice problems, which is why it is often called lattice-based cryptography.

[–] bunchberry 3 points 2 months ago

Personally, I think there is a much bigger issue with the quantum internet that is often not discussed and it's not just noise.

Imagine, for example, I were to offer you two algorithms. One can encrypt things so well that it would take a hundred trillion years for even a superadvanced quantum computer to break the encryption, and it has almost no overhead. The other is truly unbreakable even given an infinite amount of time, but it has so much overhead that it will cut your bandwidth in half.

Which would you pick?

In practice, there is no difference between an algorithm that cannot be broken for trillions of years, and an algorithm that cannot be broken at all. But, in practice, cutting your internet bandwidth in half is a massive downside. The tradeoff just isn't worth it.

All quantum "internet" algorithms suffer from this problem. There is always some massive practical tradeoff for a purely theoretical benefit. Even if we make it perfectly noise-free and entirely solve the noise problem, there would still be no practical reason at all to adopt the quantum internet.

[–] bunchberry 3 points 2 months ago

The problem with the one-time pad is that it is also the most inefficient cipher: you have to move as much key material as actual data. If we switched to it for internet communication (ceteris paribus), it would basically cut internet bandwidth in half overnight. What's more, it is a symmetric cipher, and symmetric ciphers are not meaningfully threatened by quantum computers; ciphers like AES-256 are still considered quantum-computer-proof. This means you would be cutting internet bandwidth in half for purely theoretical benefits that people wouldn't notice in practice. The only people I could imagine finding this interesting are overly paranoid governments, as there are no practical benefits.
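
A minimal sketch of where that overhead comes from: the one-time pad is just XOR with a key that must be exactly as long as the message and never reused (toy Python, my own function names):

```python
import secrets

def otp(message: bytes, key: bytes) -> bytes:
    # One-time pad: XOR each byte with a key byte; the key must be as long
    # as the message and must never be reused
    assert len(key) == len(message)
    return bytes(m ^ k for m, k in zip(message, key))

msg = b"hello world"
key = secrets.token_bytes(len(msg))   # fresh random key, same length as the message
ct = otp(msg, key)
assert otp(ct, key) == msg            # decryption is the same XOR
```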

It also really isn't a selling point for quantum key distribution that it can reliably detect an eavesdropper. Modern cryptography does not care about detecting eavesdroppers. When two people are exchanging keys with a Diffie-Hellman key exchange, eavesdroppers are allowed to eavesdrop all they wish, but they cannot make sense of the data in transit. The problem with quantum key distribution is that it is worse than this: it cannot prevent an eavesdropper from seeing the transmitted key, it just discards the key if they do. This seems to me like it would make it a bit harder to scale, although not impossible, because anyone can deny service just by observing the packets of data in transit.
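
For contrast, here is a toy Diffie-Hellman sketch with deliberately tiny, insecure parameters (my own choices, illustration only): the eavesdropper sees everything on the wire and it does them no good.

```python
import secrets

# Toy parameters, far too small to be secure -- illustration only
p = 4294967291            # a prime (2**32 - 5)
g = 5

a = secrets.randbelow(p - 2) + 1     # Alice's private exponent
b = secrets.randbelow(p - 2) + 1     # Bob's private exponent

A = pow(g, a, p)          # sent in the clear; Eve sees this
B = pow(g, b, p)          # sent in the clear; Eve sees this too

# Both ends derive the same shared secret; Eve, who only has p, g, A, B,
# would have to solve a discrete logarithm to get it
assert pow(B, a, p) == pow(A, b, p)
```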

Although, the bigger issue that nobody seems to talk about is that quantum key distribution, just like the Diffie-Hellman algorithm, is susceptible to a man-in-the-middle attack. Yes, it prevents an eavesdropper between two nodes, but if the eavesdropper sets themselves up as a third node pretending to be different nodes when queried from either end, they could trivially defeat quantum key distribution. Although, Diffie-Hellman is also susceptible to this, so that is not surprising.

What is surprising is that with Diffie-Hellman (or more commonly its elliptic curve brethren), we solve this using digital signatures, which are part of public key infrastructure. With quantum mechanics, however, the only equivalent to digital signatures relies on the No-cloning Theorem. The No-cloning Theorem says that if I give you a qubit and you don't know how it was prepared, nothing you can do to it can tell you its full quantum state, since identifying the state requires knowledge of how it was prepared. You can use the fact that only a single person can be aware of its quantum state as a form of digital signature.

The thing is, however, that the No-cloning Theorem only holds for a single qubit. If I prepared a million qubits all the same way and handed them to you, you could derive their shared quantum state by doing different measurements on each qubit. Even though you could use this for digital signatures, those digital signatures would have to be disposable: if you handed out too many copies of them, they could be reverse-engineered. This presents a problem for using them as part of public key infrastructure, since public key infrastructure requires those keys to be, well, public, meaning anyone can take a copy, and so unlimited copyability is a requirement.
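
A rough sketch of the many-copies point, simulating repeated measurements on identically prepared qubits (this assumes real amplitudes and a single measurement basis for simplicity; full state tomography needs more, and all names here are mine):

```python
import numpy as np

rng = np.random.default_rng(0)

# An unknown qubit state cos(t)|0> + sin(t)|1>; real amplitudes for simplicity
t = 0.7
p1 = np.sin(t) ** 2                      # true probability of reading a 1

# One copy gives you a single 0 or 1 -- almost no information.
# A million identically prepared copies let you estimate the state.
n_copies = 1_000_000
outcomes = rng.random(n_copies) < p1     # simulated measurement results
t_estimate = np.arcsin(np.sqrt(outcomes.mean()))

print(t, t_estimate)                     # the preparation angle is reverse-engineered
```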

This makes quantum key distribution reliable only if you combine it with quantum digital signatures, but when you do that, it is no longer possible to scale it into some sort of "quantum internet." It might, again, be something an overly paranoid government could use internally as part of its own small-scale intranet, but it would just be too impractical, without any noticeable benefits, for anyone outside of that. As, again, all of this is for purely theoretical benefits, not anything you'd notice in the real world, since things like AES-256 are already considered uncrackable in practice.

[–] bunchberry 1 points 2 months ago (2 children)

Entanglement plays a key role.

Any time you talk about "measurement," this is just observation, and the result of an observation is to reduce the state vector, which is just a list of complex-valued probability amplitudes. The fact that they are complex numbers gives rise to interference effects. When the eavesdropper observes a definite outcome, you no longer need to treat the qubit as probabilistic; you can therefore reduce the state vector by updating your probabilities to simply 100% for the outcome you saw. The number 100% has no negative or imaginary components, and so it cannot exhibit interference effects.

It is this loss of interference which is ultimately detectable on the other end. If you apply a Hadamard gate to a qubit, you get a state vector that represents equal probabilities for 0 or 1, but in a way that can exhibit interference with later interactions. For example, if you applied a second Hadamard gate, it would return to its original state due to interference. If instead you had a qubit that was prepared with a 50% probability of being 0 or 1 but without interference terms (coherences), then applying a Hadamard gate would not bring it back to a definite value but would just give you a random output.
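
A small numpy sketch of that difference, comparing a coherent 50/50 state to an incoherent 50/50 mixture under a second Hadamard (my own variable names):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

# Coherent 50/50: the state H|0>, which has off-diagonal (coherence) terms
plus = H @ np.array([1, 0], dtype=complex)
rho_coherent = np.outer(plus, plus.conj())

# Incoherent 50/50: a classical mixture of |0> and |1>, coherences are zero
rho_mixed = np.array([[0.5, 0], [0, 0.5]], dtype=complex)

# Apply the second Hadamard to each
print(np.round((H @ rho_coherent @ H.conj().T).real, 3))  # [[1, 0], [0, 0]] -> definite 0
print(np.round((H @ rho_mixed @ H.conj().T).real, 3))     # still [[0.5, 0], [0, 0.5]] -> random
```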

Hence, if qubits have undergone decoherence, i.e., if they have lost their ability to interfere with themselves, this is detectable. An obvious example is the double-slit experiment: you get distinctly different outcomes, a change in the pattern on the screen, depending on whether or not the photons can interfere with themselves. Quantum key distribution detects whether an observer made a measurement in transit by relying on decoherence. A Hadamard gate is randomly applied to half the qubits and not to the other half, and which qubits it was applied to is not revealed until after the transmission is complete. If the recipient receives a qubit that had a Hadamard gate applied to it, they have to apply it again themselves to cancel it out, but they don't know which qubits they need to apply it to until all the qubits have been transmitted and this information is revealed.

That means that, at random, half the qubits they receive they just read as-is, and for the other half they need to rely on interference effects to move the qubits back into their original state. Anyone who intercepts the qubits by measuring them in transit causes them to decohere, and thus when the recipient applies the Hadamard gate a second time to cancel out the first, they get random noise rather than an actual cancellation. The recipient receiving random noise where they should be getting definite values is how you detect whether there is an eavesdropper.
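
Here is a rough Monte Carlo sketch of those statistics. It does not track amplitudes at all; it just models the outcome statistics described above (an intercepting measurement randomizes the Hadamard-encoded qubits), and the function and variable names are my own:

```python
import numpy as np

rng = np.random.default_rng(1)

def error_rate(n, eavesdrop):
    bits = rng.integers(0, 2, n)                    # the sender's raw bits
    hadamard = rng.integers(0, 2, n).astype(bool)   # which qubits get a Hadamard

    received = bits.copy()
    if eavesdrop:
        # An intercepting measurement decoheres the Hadamard-encoded qubits,
        # so the recipient's second Hadamard yields a coin flip instead of the bit
        received[hadamard] = rng.integers(0, 2, hadamard.sum())

    # After transmission, the sender reveals which qubits had a Hadamard; the
    # recipient undoes it and compares a sample of bits to look for errors
    return (received != bits).mean()

print(error_rate(100_000, eavesdrop=False))   # ~0.0
print(error_rate(100_000, eavesdrop=True))    # ~0.25 -- the signature of an eavesdropper
```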

What does this have to do with entanglement? If we just talk about "measuring a state," then quantum mechanics would be a rather paradoxical and inconsistent theory. The eavesdropper measured the state and updated the probability distribution to 100%, thus destroying its interference effects; but the non-eavesdroppers did not measure the state, so for them it should still be probabilistic, and at face value this seems to imply it should still exhibit interference effects from their perspective.

A popular way to get around this is to claim that the act of measurement is something "special" which always destroys the quantum probabilities and forces it into a definite state. That means the moment the eavesdropper makes the measurement, it takes on a definite value for all observers, and from the non-eavesdroppers' perspective, they only describe it still as probabilistic due to their ignorance of the outcome. At that point, it would have a definite value, but they just don't know what it is.

However, if you believe that, then that is not quantum mechanics and in fact makes entirely different statistical predictions to quantum mechanics. In quantum mechanics, if two systems interact, they become entangled with one another. They still exhibit interference effects as a whole as an entangled system. There is no "special" interaction, such as a measurement, which forces a definite outcome. Indeed, if you try to introduce a "special" interaction, you get different statistical predictions than quantum mechanics actually makes.

This is because in quantum mechanics, every interaction grows the scale of entanglement, and so the interference effects never go away, they just spread out. If you introduce a "special" interaction such as a measurement that forces things into a definite value for all observers, then you are inherently suggesting there is a limit to this scale of entanglement. There is some cut-off point past which interference effects can no longer be scaled, and because we can detect whether a system exhibits interference effects or not (that's what quantum key distribution is based on), such an alternative theory (called an objective collapse model) would necessarily differ from quantum mechanics in its numerical predictions.

The actual answer to this seeming paradox is provided by quantum mechanics itself: entanglement. When the eavesdropper observes the qubit in transit, then from the perspective of the non-eavesdroppers the eavesdropper becomes entangled with the qubit. It is then no longer valid in quantum mechanics to assign a state vector to the eavesdropper and the qubit separately, but only to them together as an entangled system. However, the recipient does not receive both the qubit and the eavesdropper, they only receive the qubit. If they want to know how the qubit behaves, they have to do a partial trace to trace out (ignore) the eavesdropper, and when they do this, they find that the qubit's state is still probabilistic, but it is a probability distribution with only terms between 0% and 100%, that is to say, no negatives or imaginary components, and thus it cannot exhibit interference effects.

Quantum key distribution does indeed rely on entanglement as you cannot describe the algorithm consistently from all reference frames (within the framework of quantum mechanics and not implicitly abandoning quantum mechanics for an objective collapse theory) without taking into account entanglement. As I started with, the reduction of the wave function, which is a first-person description of an interaction (when there are 2 systems interacting and one is an observer describing the second), leads to decoherence. The third-person description of an interaction (when there are 3 systems and one is on the "outside" describing the other two systems interacting) is entanglement, and this also leads to decoherence.

You even say that "measurement changes the state", but how do you derive that without entanglement? It is entanglement between the eavesdropper and the qubit that leads to a change in the reduced density matrix of the qubit on its own.

[–] bunchberry 2 points 2 months ago* (last edited 2 months ago)

i’d agree that we don’t really understand consciousness. i’d argue it’s more an issue of defining consciousness and what that encompasses than knowing its biological background.

Personally, no offense, but I think this is a contradiction in terms. If we cannot define "consciousness," then you cannot say we don't understand it. Don't understand what? If you have not defined it, then saying we don't understand it is like saying we don't understand akokasdo. There is nothing to understand about akokasdo because it doesn't mean anything.

In my opinion, "consciousness" is largely a buzzword, so there is just nothing to understand about it. When we actually talk about meaningful things like intelligence, self-awareness, experience, etc, I can at least have an idea of what is being talked about. But when people talk about "consciousness" it just becomes entirely unclear what the conversation is even about, and in none of these cases is it ever an additional substance that needs some sort of special explanation.

I have never been convinced by panpsychism, IIT, idealism, dualism, or any of these philosophies or models, because they seem to be solutions in search of a problem. They have to convince you there really is a problem in the first place, but they only do so by talking about consciousness so vaguely that you can't pin down what it is, which makes people think we need some sort of special theory of consciousness. But if you can't pin down what consciousness is, then we don't need a theory of it at all, as there is simply nothing of meaning being discussed.

They cannot justify themselves in a vacuum. Take IIT for example. In a vacuum, you can say it gives a quantifiable prediction of consciousness, but "consciousness" would just be defined as whatever IIT is quantifying. The issue here is that IIT has not given me a reason why I should care about its quantifying what it is quantifying. There is a reason, of course, but it is implicit. The implicit reason is that what it is quantifying is the same as the "special" consciousness that supposedly needs some sort of "special" explanation (i.e. the "hard problem"), but this implicit reason requires you to not treat IIT in a vacuum.

[–] bunchberry 1 points 2 months ago

Bruh. We literally don’t even know what consciousness is.

You are starting from the premise that there is this thing out there called "consciousness" that needs some sort of unique "explanation." You have to justify that premise. I do agree there is difficulty in figuring out the precise algorithms and physical mechanics that the brain uses to learn so efficiently, but somehow I don't think this is what you mean by that.

We don’t know how anesthesia works either, so he looked into that and the best he got was it interrupts a quantom wave collapse in our brains

There is no such thing as "wave function collapse." The state vector is just a list of probability amplitudes, and you reduce that list of probability amplitudes to a definite outcome because you observed what that outcome is. If I flip a coin and it has a 50% chance of being heads and a 50% chance of being tails, and it lands on tails, I reduce the probability distribution to 100% probability for tails. There is no "collapse" going on here. Objectifying the state vector is a popular trend when talking about quantum mechanics, but it has never made any sense at all.

So maybe Roger Penrose just wasted his retirement on this passion project?

Depends on whether or not he is enjoying himself. If he's having fun, then it isn't a waste.

[–] bunchberry 2 points 2 months ago (1 children)

It is only continuous because it is random, so prior to making a measurement, you describe it in terms of a probability distribution called the state vector. The bits 0 and 1 are discrete, but if I said it was random and asked you to describe it, you would assign it a probability between 0 and 1, and thus it suddenly becomes continuous. (Although, in quantum mechanics, probability amplitudes are complex-valued.) The continuous nature of it is really something epistemic and not ontological. We only observe qubits as either 0 or 1, with discrete values, never anything in between the two.

[–] bunchberry 0 points 2 months ago

The only observer of the mind would be an outside observer looking at you. You yourself are not an observer of your own mind, nor could you ever be. I think it was Feuerbach who originally made the analogy that if your eyeballs evolved to look inwardly at themselves, then they could not look outwardly at the outside world. We cannot observe our own brains, as they only exist to build models of reality; if our brains had to contain a full model of themselves, they would have no room left over to model the outside world.

We can only assign an object to be what is "sensing" our thoughts through reflection. Reflection is ultimately still building models of the outside world but the outside world contains a piece of ourselves in a reflection, and this allows us to have some limited sense of what we are. If we lived in a universe where we somehow could never leave an impression upon the world, if we could not see our own hands or see our own faces in the reflection upon a still lake, we would never assign an entity to ourselves at all.

We assign an entity to ourselves for the specific purpose of distinguishing ourselves as an object from other objects, but this is not an a priori notion ("I think therefore I am" is lazy sophistry). It is an a posteriori notion derived through reflection upon what we observe. We never actually observe ourselves, as such a thing is impossible. At best we can observe reflections of ourselves and derive some limited model of what "we" are, but there will always be a gap between what we really are and the reflection of what we are.

Precisely what is "sensing your thoughts" is yourself derived through reflection which inherently derives from observation of the natural world. Without reflection, it is meaningless to even ask the question as to what is "behind" it. If we could not reflect, we would have no reason to assign anything there at all. If we do include reflection, then the answer to what is there is trivially obvious: what you see in a mirror.
