I’m fully willing to believe I just don’t “get it”, but I took a pretty deep dive into quantum computing and the underlying mechanics, and I kind of got the sense (with QC) that nobody really knows what they are talking about. I got this feeling so strongly that I stopped studying the topic altogether.
I’m probably way off base and I’m probably missing some insights that I could get by going to school or something, but that was just my experience with the subject.
> I’m probably way off base and I’m probably missing some insights that I could get by going to school
A school would usually teach the "shut up (about philosophy) and calculate" approach. These philosophical problems about the meaning of quantum mechanics have been with us for 100 years, and mainstream physics sees them as too hard or even intractable, and thus as a waste of time.
These debates over the interpretation of Quantum Mechanics (i.e. what ultimately happens when a “measurement” takes place) are important but don’t bear on the effectiveness of quantum computing. Regardless of your favorite interpretation (almost) everyone agrees that quantum computers should work and be able to do things classical computers cannot.
The mathematics of QM works extremely well. The interpretations of what the math says is happening are varied and sometimes contradictory.
We can predict what's going to happen extremely well, we just can't tell the story of what's happening. And there's been a century of trying to avoid the weirdness and failing. The problem might just be that our brains evolved in a world that behaves so differently that we can't understand it.
Do I get this right? Wave function collapse due to measurements is not real, the wave function evolves unitarily all the time. But as quantum states get amplified into the macroscopic world, superposition states are somehow amplified asymmetrically which makes it look like wavefunction collapse.
But isn’t it conceivable, because the original quantum state contains probabilities of different outcomes, that one imprint might correspond to “up” and another to “down,” [...] [Zurek’s theory] predicts that all the imprints must be identical.
Does this not imply that there is an asymmetry: one half of the state gets imprinted, the other half neglected? This, however, also raises the question of basis: what is a superposition and what is not depends on the choice of basis. Is there a special basis, just as pointer states are somehow special?
Highly recommend looking at Jacob Barandes’ formulation of quantum mechanics as non-Markovian stochastic processes. It was the first introduction to quantum mechanics I could actually follow.

https://www.jacobbarandes.com/

https://www.cambridge.org/core/books/decoherence-and-quantum...
It doesn’t. Decoherence is the technical step in the Everett picture defining what a “classical branch” even is and explaining how the state vector branches. Every claim that “Decoherence” somehow offers a distinct interpretation to Everett is pure confusion.
The article asks the same question in the last part, wondering whether it's just randomly selected. MWI proponents have always argued decoherence leads to the entire world being put into superposition as decoherence just spreads entanglement to the environment. The math never says entanglement destroys superposition beyond a certain point of complexity (many different entangled systems forming the environment).
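That point, that spreading entanglement to more and more environment degrees of freedom suppresses interference without ever deleting the superposition, can be sketched in a few lines of numpy. The function name and the coupling angle here are my own illustrative choices, not anything from the article:

```python
import numpy as np

def coherence(n_env, theta):
    """Off-diagonal (interference) term of a qubit's reduced density
    matrix after it entangles with n_env environment qubits."""
    # environment state if the system is |0>: every env qubit stays |0>
    e0 = np.array([1.0, 0.0])
    # environment state if the system is |1>: every env qubit rotated by theta
    e1 = np.array([np.cos(theta), np.sin(theta)])
    env0 = env1 = np.array([1.0])
    for _ in range(n_env):
        env0 = np.kron(env0, e0)
        env1 = np.kron(env1, e1)
    # joint state: (|0>|env0> + |1>|env1>)/sqrt(2); the system's
    # off-diagonal term is <env1|env0>/2 = cos(theta)^n_env / 2
    return abs(np.dot(env1, env0)) / 2

for n in (0, 1, 4, 16):
    print(n, coherence(n, 0.5))
```

The global state stays a pure superposition the whole time; only the system's interference term shrinks (here by cos(θ)^n), which is all "decoherence" refers to. No threshold of complexity ever turns the superposition off.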
The author does say the approach is a combination of Copenhagen and MWI, removing the outlandish parts of both. Seems to preserve the randomness of the former though.
> MWI proponents have always argued decoherence leads to the entire world being put into superposition as decoherence just spreads entanglement to the environment.
Well, duh. It's not like classical objects actually exist, or the classical/quantum divide: everything is quantum, including the "observers". The "classical observer" is a crude approximation that breaks down under a pointed enough question. Just like shorting the perfect battery (with zero internal resistance) with a perfect wire (with zero resistance): this scenario is not an approximation of any possible real scenario, so its paradoxicality (infinite current!) is irrelevant.
Random is a very interesting concept. In relation to nature we seem to use "random" as anything we can't or are currently unable to model.
To call something random doesn't mean it's impossible to model; in fact all sorts of natural facts seemed random one day before being covered by a model. One very relatable example is the motion of the planets (the "wandering stars") in the night sky, which seemed random for ages, until the Copernican revolution.
The fact that we have access to a random() function in programming seems to trip many people up. random() is a particular model implementation of randomness, but stuff in nature isn't random().
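A concrete way to see that random() is a deterministic model rather than "nature's randomness" (Python's stdlib random here, just as an example):

```python
import random

# random() is an algorithm (a Mersenne Twister in CPython): give it
# the same seed and the "random" stream repeats exactly.
random.seed(42)
a = [random.random() for _ in range(3)]
random.seed(42)
b = [random.random() for _ in range(3)]
print(a == b)  # True: the model is fully deterministic
```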
My point is, using "just random" to do work in any scientific explanation is a crutch.
In science randomness is usually used to abstract over a large number of possible paths that result in some outcome without having to reason individually about any specific path or all such paths.
It does not have to mean something inherently non-deterministic or something that can't be modelled, although it certainly is the case that if something is inherently non-deterministic then it would necessarily have to be modelled randomly. Modelling things as a random process is very useful even in cases where the underlying phenomenon has a fully understood and deterministic model; a simple example of this would be chess. It's an entirely deterministic game with perfect information that is fully understood, but nevertheless all the best chess engines model positions probabilistically and use randomness as part of their search.
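A toy version of that chess point, using Nim instead: the game is fully solved deterministically (the xor rule), yet we can still evaluate positions with random playouts, the way Monte Carlo engines do. Function names are mine, purely illustrative:

```python
import random

def random_playout(heaps, to_move=0):
    """Play Nim to the end with uniformly random moves; return the winner.
    The player who takes the last object wins."""
    heaps = list(heaps)
    while any(heaps):
        i = random.choice([j for j, h in enumerate(heaps) if h])
        heaps[i] -= random.randint(1, heaps[i])
        to_move = 1 - to_move
    return 1 - to_move  # the previous mover took the last object

def estimate(heaps, trials=2000):
    """Monte Carlo estimate of the probability that the player to move
    wins under random play: a probabilistic model of a deterministic game."""
    wins = sum(random_playout(heaps) == 0 for _ in range(trials))
    return wins / trials
```

Here estimate((1,)) comes out exactly 1.0 (the player to move just takes the last object) and estimate((1, 1)) exactly 0.0, matching the known deterministic theory, even though the evaluation procedure itself is random.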
There's disagreement on this. You seem to just be saying that brute facts or brute contingencies don't exist, but I suspect most scientists would disagree with that.
The use of "random" as explanation or characterization in science has certainly spanned everything from "we don't know", to "there is inherent indivisible physical randomness".
And I would agree, in the latter case it is a crutch. A postulate that something gets decided by no mechanisms whatsoever (randomness obeying a distribution still leaves the unexplained "choice").
It is remarkable that people still suggest the latter, when the theory, both in theory and experiment, doesn't require a physical choice at all (even if we experience a choice, that experience is explained without the universe making a choice).
It is not incomplete to say that something does not require explanation, nor is it saying it's "magic". It is a cost that your model might incur, that's it.
https://arxiv.org/abs/2503.15776

In this paper a plurality of physicists stated that they felt that the initial conditions of the universe are brute facts that warrant no further explanation. This is not "our model doesn't yet account for it", it's "there is no explanation to be given".

Hardly. Some philosophers say that. But I don't take much from philosophers reasoning about physics.
To me, the fact that quantum mechanics is intrinsically "random" and unknowable beforehand is what makes living in this universe bearable as a sentient being. If we, the two-legged viruses that we are, could reach a level of understanding that showed the universe to be fully deterministic and every future state to be knowable given the current one, then this human condition would be impossible to stand. I love the fact that we just can't predict the future. It's what makes existing a good thing instead of a bad one.
> How, for example, are we supposed to think about the domain in which all possibilities still exist before decoherence? How “real” is it?
The wave function is the real object. The little balls we like to imagine particles as are just our perception of wave functions, narrowed down by entanglement with macroscopic objects. The way we measure anything is through the entanglement between the measured entity and our macroscopic instruments.
> None of the leading interpretations of quantum theory are very convincing. They ask us to believe, for example, that the world we experience is fundamentally divided from the subatomic realm it’s built from. Or that there is a wild proliferation of parallel universes, or that a mysterious process causes quantumness to spontaneously collapse.
Actually, the "many worlds" "interpretation", simply treats the highly successful equations as meaning what they say.
And it is misnamed. The field equations describe a highly interconnected "web universe" of "tangles" (what I call spans of entangled interactions) and "spangles" (my shorthand for superpositions, i.e. disjoint interactions of particles; think of all the alternate lines leading from and to two distinguishable states, like star patterns). Basically, a graph of union and intersection relations where all combinations, individually and en masse, are determined exactly by the laws of conservation.
That's an amazingly good property for a theory. And we have it.
By including all consistent versions, no external information is required by the theory. It is informationally complete. A successful objective explanation. With deep experimental support that entanglement and superposition actually exist, because their interactions are easily testable.
In fact, entanglement doesn't "violate" locality, it is the more general case which explains locality. Locality is just tightly coupled entanglement/interaction. Not a fundamental constraint on connections. There is no fundamental "distance", just loose and dense connections. Locality is just what we see wherever there are patterns of dense connections. They are an effect, not a constraint.
Even in the classical world of large (highly tangled) objects, we take it for granted that dependent objects can separate over arbitrarily vast dimensions of space and time and yet return together. If that isn't entanglement over vast distances, what is it? It is a basic property of classical physics. Quantum mechanics reveals more subtlety in those maintained connections, including interactions between connections, but it didn't originate them.
Forces disappear. They become passive in an interesting way. Histories where information cancels leave structured distribution patterns behind, which to us look like forces. Cancellation is just information being conserved. Not an active force. But the results appear active.
In a similar way to how the evolutionary umbrella seems very smart and creative, when really, it is just poorly adapted individual creatures independently cancelling themselves out blindly, leaving a distributional improvement behind.
There is no additional information needed to explain the effect of quantum "collapse" because it is already explained by the fast bifurcation of disjoint tangles when lots of particles interact in an unorganized manner. It is thermodynamics being thermodynamics.
Anyone attempting to invent a mechanism for "collapse" is like someone trying to explain why the spherical Earth appears "flat" by introducing additional speculative theories. Despite the spherical world theory already explaining why it looks flat locally.
And the only reason not to take the experimentally verified field equations at a plain reading is that the result is "too big" for someone's imagination.
Our everyday experience doesn't limit reality, despite humans having trouble with theories that reveal a bigger reality, over and over and over.
Bluntly: The total field equations preserve information - that is the plain implication and guarantee for having both unions (tangles) and intersections (spangles) of interactions.
Anything else requires a universal firehose of magically appearing information to choose collapses, i.e. particular interactions, in order to explain something already explained. In other words, dressed up voodoo. And by "re-complicating", uh, "re-explaining" the already explained, it introduces a ridiculous new puzzle: Where does all that pervasively intrusive, relentless injection of information (that determines every single extricable particle interaction!) come from? (Occam is spinning like a particle accelerator in his grave.)
Saying it "Just Happens" is like someone "explaining" their pet version of a creator with "Just Is". It is a psychological non-answer meaning "Don't Ask Questions".
The part that I have trouble wrapping my head around with the many worlds interpretation is how I as an observer end up in one of the many bifurcations. Any links you can share that will help me with understanding that are welcome!
The Stanford Encyclopedia of Philosophy (https://plato.stanford.edu/entries/qm-manyworlds/) goes into this in some depth, and it seems like the right way to think about it is say that "I" in one branch is a different entity than the "I" in a different branch. I have somehow not been able to grok it yet.
And I agree about the naming. I really dislike the name "many worlds interpretation", which seems to imply that we have to postulate the existence of these additional worlds, whereas in fact they are branches of the wavefunction exactly predicted by standard quantum mechanics.
The problem with Many Worlds is that it doesn't place a bound on the number of worlds, so you can't derive the Born Rule from it.
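For readers following along, the Born rule in question is the standard postulate that measurement outcomes occur with probability given by the squared amplitude:

```latex
\Pr(a_i \mid \psi) \;=\; \bigl|\langle a_i \mid \psi \rangle\bigr|^2,
\qquad \sum_i \Pr(a_i \mid \psi) = 1
```

The criticism above is that Everettian QM gives you the branches but, without extra assumptions, not this specific weighting of them.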
That's quite a serious issue. And the arguments against that, like Self-Locating Uncertainty or Zurek's Envariance, look suspiciously circular if you pull them apart.
There's also the issue that if you don't have a mechanism that constrains probability, you can't say anything about the common mechanism of any of the worlds you're in. Your world may be some kind of lottery-winning statistical freak world which happens to have very unusual properties, and generalising from them is absolutely misleading.
There's no way of testing that, so you end up with something unfalsifiable.
> The problem with Many Worlds is that it doesn't place a bound on the number of worlds, so you can't derive the Born Rule from it.
I have no idea what this means.
Is there a bound on anything in reality, in terms of scale? Beyond its own laws?
I am reminded of how often in history, too much time, or too much scale, were unsuccessful arguments against many theories we accept today. Those critiques died without any need for special arguments, because they don't have a logical basis.
Also, there are not a number of many "worlds". That is a reflection of poor naming. There is an interleaving of all interactions, so if you zoom out, a smeared landscape across all configurations, from the Planck scale up.
Because the connections involve both intersection (entanglement) and union (alternate paths), we get bifurcation of classical sized paths (dense entanglements), while the individual particles continue unconcerned by how they appear to create different classical histories at large scale.
And yes it is experimentally validated. This is the theory that everyone accepts in the lab, even as larger scales of experiment continue to progress.
But some people have difficulty believing/visualizing that it continues to work at larger scales. Despite no scale limitation in the theory, no scale-related violations ever suggested experimentally, and the strong likelihood that scale limitations would produce new physics in at-scale observations of our cosmos if they did exist.
> The part that I have trouble wrapping my head around with the many worlds interpretation is how I as an observer end up in one of the many bifurcations.
Pour water down a hill. Water clings to water, and we have hills that already have lots of correlations. We get streams that break up into multiple streams.
How did one stream end up where it is? It seems like a good question, but it is circular. The stream is defined by where it is. You are here (in some circumstance), because the version of you in this circumstance is you.
A transporter accident that creates several versions of you, on several planets with different colors, doesn't need to explain to each version how they ended up at a planet with their color. Even if for a particular copy, it seems like there should be an answer why they showed up on a planet of a particular specific color. The "why" is just, all paths were taken.
What you said here makes sense. Forgive me, but I have trouble even articulating what it is that I don’t understand correctly.
Maybe what I meant was this: if I perform a quantum experiment where the spin measurement of an electron could be spin up or spin down, the future me would end up in one of two branches: I measure spin up, or I measure spin down. There wouldn’t be any possible world where I measure a superposition of spin up and spin down, because such a state is going to decohere rapidly. This makes sense. What I’m unable to grasp is that even though the wave function of the universe contains both branches, “I” somehow experience only one of the two branches.
The answer to that, I guess, is that if the two branches are nearly orthogonal they will merrily evolve independently of each other. But somehow “I” experience only one of them.
Sorry for the rambling. I’m not able to articulate what I don’t understand.
> The future me would end up in one of two branches: I measure spin up, or I measure spin down.
The future "you's" would each see spin up, and spin down, respectively.
We are just as quantum as what we measure. There isn't a scale where entanglement and superposition turn into something else. No classical vs. quantum atoms.
Just as an up-spin qubit touching an up/down qubit results in an up-up qubit pair in superposition with an up-down pair, conserving the qubit, when we touch a qubit we get "us"-up and "us"-down versions.
No information is created. None is destroyed. We experience a correlation = "collapse" (both versions of us), but the quantum information just continues on as before, qubit conserved.
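The "we get 'us'-up and 'us'-down versions, with nothing created or destroyed" claim can be written out explicitly for the smallest possible "observer": a single memory qubit. A hedged numpy sketch, with a CNOT standing in for whatever interaction copies the result into the observer:

```python
import numpy as np

# system qubit in superposition; one-qubit "observer" starts in |ready> = |0>
system = np.array([1.0, 1.0]) / np.sqrt(2)
observer = np.array([1.0, 0.0])
state = np.kron(system, observer)  # joint state, basis ordering |s o>

# "measurement" as unitary interaction: the CNOT copies the system's
# value into the observer, giving (|0 sees-0> + |1 sees-1>)/sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
state = CNOT @ state

# unitarity: no information created or destroyed, the norm is still 1
print(np.dot(state, state))

# yet the system alone now looks "collapsed": tracing out the observer
# leaves a diagonal 50/50 mixture with no interference terms
rho = np.outer(state, state).reshape(2, 2, 2, 2)  # indices (s, o, s', o')
rho_system = np.trace(rho, axis1=1, axis2=3)
print(rho_system)
```

Each correlated term, "us"-up and "us"-down, then evolves as its own branch; the off-diagonal zeros are why neither version ever sees the other.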
"Hard problem" makes it out to be much more difficult than it actually is. To simplify things a little bit, if you combine a spatiotemporal sense (a sense of bounded being in space and time) with a general predictive ability (the ability to freely extrapolate in time and space from one's surroundings), "consciousness" arises necessarily. It's what having such senses feels like from the inside; the first-person view. It's a matter of degree, of course.
The writing of Chalmers and its consequences have been a catastrophe for philosophy.
It's not hard at all when you acknowledge that such senses exist in the world, and that you (like others) possess them. As an aside it tends to foster a certain tendency towards empathy.
In essence, you're asking why there's an inside to being a self-modeling system. But "inside" isn't something extraneous, something additional -- rather, it's what "self-modeling" means.
Really the "hard problem" has a very easy answer, but it's a physical/functional answer, and dualists and obscurantists simply don't like it.
It's embarrassingly silly to say but I've frequently just boiled down the hard question to the question of "where is the experience of the color blue stored in the universe?" Even as a non-dualist, I still haven't found much of an answer that I like. I'm all ears if you've got a book recommendation.
The kneejerk response would be: Are you not conscious at this present moment? If we were to modulate your spatiotemporal senses with drugs or a lobotomy, do you doubt that you would be very differently conscious, or perhaps entirely unconscious?
I mean, there is a credible first-person answer to that question of yours, which each man can answer for himself.
But considered more seriously, the "hard problem" is an artifact of treating experience as a separate thing that needs to be generated. If you accept that self-modeling systems bounded in space and time exist, you've already accepted that experience exists -- because experience is what such a system is, from the inside. There's no second step where experience gets added. The question "why is there experience?" is exactly akin to "Why is there an interior to four walls and a roof?" The interior isn't a separate thing; it's necessarily constitutive.
We have a theory whose plain reading matches experiment at all scales.
Consciousness is something else. It is tempting for humans to pair mysteries up, pyramids and aliens, or whatever. But there isn't any factual basis for linking the experience of self-awareness with quantum mechanics.
Is there a factual reason we know digital minds couldn't be conscious? Where quantum effects have been isolated from the operations of mental activity. That seems like a premature constraint to assume.
Yes, the MWI is falsifiable. It asserts that objective collapse does not occur, therefore any observation of objective collapse (such as predicted by GRW or Penrose-Diosi) would falsify it.
It touches you, and you are just as quantum as the bit.
So two entangled versions of you follow, one entangled with each state. (Actually as many quantum versions of you that touched the qubit times two.)
Which is what happens, as we know from experiment, when any one qubit interacts with another independent qubit. We get the product of entangled states, each now correlated. But the different entangled states are now in superposition with each other.
So correlation/entanglement happens and is experienced, despite no collapse of superposition. No information was destroyed or created.
Each of you thinks, wow now the qubit only has one state. But that is because there are two versions of you, correlated respectively with the two uncollapsed qubit states.
Complete conservation. That is the "experience" of collapse that needs no explanation, because it is a predicted experience not requiring an actual collapse. Just as spherical Earth models don't need a special explanation for the appearance of locally flat Earth, because spherical models predict a local flat Earth experience.
I think you're right, the many worlds interpretation makes the most sense. Unfortunately our current technology is very far from delivering any experimental confirmation or denial of any of the mainstream interpretations.
Are the Mysteries of Quantum Mechanics Beginning to Dissolve? I don’t think so.
Zurek’s Decoherence and Quantum Darwinism is thought-provoking, but it’s still speculation without broad buy-in from researchers. We might need ASI to crack these mysteries — our brains weren’t built for this kind of problem.
I think the brains of our stone age ancestors were not built for relativity either. In the end the normal sequence of generations (having children and then dying at some point) offers "re-trainings" of the brains. So, besides waiting/hoping for artificial intelligence, we should continue to make (and train) children. Worked great so far.
What we need are tractable experiments to test these theories.
Maybe ASI can help design these. Until it can, it will just be another voice arguing for one position over another on pretty weak arguments. Right now my money would be more on human researchers finding those experiments, but even among those, few are even trying.
Quite frankly, quantum computing is probably already known or solved by a nation state (probably the United States). Similar to AI, they will release it in a safe rollout (as they deem it).
Maybe, but the AI we see in the mainstream today -- generative image/video/text creations and Large Language Model chatbots -- were done via non-governmental public and private companies. And a lot of the work hitting the scene loudly and somewhat prematurely. My understanding is the amount of and type of compute needed for Quantum is pretty intense, so there'd be a huge footprint from its manufacturing to keep it hidden.
"Thus the wave function can’t tell us what the quantum system is like before we measure it. "
Nothing is a particle, all measured things are a probability that we make a certainty when we measure them.
When you stop looking at things as things, but instead, see them as probabilities, it will all make sense. My hand and the beer bottle I pick up are both probabilities. Since the mind cannot navigate the world based on probabilities it turns them into certainties.
Physical science is the only way we can perceive quantum science. There is no "collapse" outside of our brain's perception.
It would be interesting if most of our confusion with quantum mechanics came from treating probabilities as independent when they are actually highly correlated. I don’t really know any physics, but I’m familiar with probability and this type of problem seems to be the most common error in interpreting probabilities.
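For what it's worth, the gap between independent and correlated probabilities is easy to demonstrate. A small sketch with hypothetical coins, nothing quantum about it:

```python
import random

def correlated_pair():
    # two "coins" driven by one hidden flip: perfectly correlated
    x = random.random() < 0.5
    return x, x

trials = 20000
both = sum(a and b for a, b in (correlated_pair() for _ in range(trials))) / trials
# Each coin alone is 50/50, so independence would predict
# P(both heads) = 0.25; the correlated joint comes out near 0.5
print(both)
```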
I don't have any skin in the game, but people should be aware of Induction vs Deduction.
Induction had the earth at the center of the solar system and had the best calculations to predict where Mars was. Copernicus said the sun was at the center; the equations were simpler, but were worse at predicting the location of the planets (until we figured out they moved in ellipses).
When we say "All swans are white, because I've never seen a black swan," it's probabilistically true. That is induction. If we found swans didn't have the gene to make black feathers, that would be deduction.
Deduction is probably the most true, if it is true. (But it is often 100% wrong)
Induction is always semi true.
Quantum mechanics seems to be in the stage of induction. Particles are like the earth at the center of the solar system. We need a Copernican revolution.
The quantum function is the real object. Little balls we like to imagine the particles as are just perception of quantum functions very narrowed down by entangling with macroscopic objects. The way we measure anything is through the entanglement between the measured entity and our macroscopic instruments.
Actually, the "many worlds" "interpretation", simply treats the highly successful equations as meaning what they say.
And it is misnamed. The field equations describe a highly interconnected "web universe" of "tangles" (what I call spans of entangled interactions) and "spangles". (My shorthand for superpositions, i.e. disjoint interactions of particles. Think of all the alternate lines leading from and two distinguishable states, like star patterns.) Basically, a graph of union and intersection relations where all combinations, individually and en masse, are determined exactly by the laws of conservation.
That's an amazingly good property for a theory. And we have it.
By including all consistent versions, no external information is required by the theory. It is informationally complete. A successful objective explanation. With deep experimental support that entanglement and superposition actually exist, because their interactions are easily testable.
In fact, entanglement doesn't "violate" locality, it is the more general case which explains locality. Locality is just tightly coupled entanglement/interaction. Not a fundamental constraint on connections. There is no fundamental "distance", just loose and dense connections. Locality is just what we see wherever there are patterns of dense connections. They are an effect, not a constraint.
Even in the classical world of large (highly tangled) objects, we take it for granted that dependent objects can separate over arbitrarily vast dimensions of space and time and yet return together. If that isn't entanglement over vast distances, what is it? It is a basic property of classical physics. Quantum mechanics reveals more subtlety in those maintained connections, including interactions between connections, but it didn't originate them.
Forces disappear. They become passive in an interesting way. Histories where information cancel, leave structured distribution patterns behind, which to us look like forces. Cancellation is just information being conserved. Not an active force. But the results appear active.
In a similar way to how the evolutionary umbrella seems very smart and creative, when really, it is just poorly adapted individual creatures independently cancelling themselves out blindly, leaving a distributional improvement behind.
There is no additional information needed to explain the effect of quantum "collapse" because it is already explained by the fast bifurcation of disjoint tangles when lots of particles interact in an unorganized manner. It is thermodynamics being thermodynamics.
Anyone attempting to invent a mechanism for "collapse" is like someone trying to explain why the spherical Earth appears "flat" by introducing additional speculative theories. Despite the spherical world theory already explaining why it looks flat locally.
And the only reason not to take the experimentally verified field equations at a plain reading is that the result is "too big" for someone's imagination.
Our everyday experience doesn't limit reality, despite humans having trouble with theories that reveal a bigger reality, over and over and over.
Bluntly: the total field equations preserve information. That is the plain implication, and the guarantee, of having both unions (tangles) and intersections (spangles) of interactions.
Anything else requires a universal firehose of magically appearing information to choose collapses, i.e. particular interactions, in order to explain something already explained. In other words, dressed-up voodoo. And by "re-complicating", uh, "re-explaining" the already explained, it introduces a ridiculous new puzzle: where does all that pervasively intrusive, relentless injection of information (which determines every single particle interaction!) come from? (Occam is spinning like a particle accelerator in his grave.)
Saying it "Just Happens" is like someone "explaining" their pet version of a creator with "Just Is". It is a psychological non-answer meaning "Don't Ask Questions".
The Stanford Encyclopedia of Philosophy (https://plato.stanford.edu/entries/qm-manyworlds/) goes into this in some depth, and it seems like the right way to think about it is to say that the "I" in one branch is a different entity than the "I" in a different branch. I have somehow not been able to grok it yet.
And I agree about the naming. I really dislike the name "many worlds interpretation", which seems to imply that we have to postulate the existence of these additional worlds, whereas in fact they are branches of the wavefunction exactly predicted by standard quantum mechanics.
That's quite a serious issue. And the arguments against it - like Self-Locating Uncertainty, or Zurek's Envariance - look suspiciously circular if you pull them apart.
There's also the issue that if you don't have a mechanism that constrains probability, you can't say anything about which of the worlds you're in. Your world may be some kind of lottery-winning statistical freak which happens to have very unusual properties, and generalising from it is absolutely misleading.
There's no way of testing that, so you end up with something unfalsifiable.
I don’t claim to understand them though. I have tried.
I have no idea what this means.
Is there a bound on anything in reality, in terms of scale? Beyond its own laws?
I am reminded of how often in history "too much time" or "too much scale" were unsuccessful arguments against theories we now accept. Those critiques died without any need for special counterarguments, because they had no logical basis.
Also, there is not some countable number of "worlds". That is a reflection of the poor naming. There is an interleaving of all interactions, so if you zoom out, a smeared landscape across all configurations, from the Planck scale up.
Because the connections involve both intersection (entanglement) and union (alternate paths), we get bifurcation of classical sized paths (dense entanglements), while the individual particles continue unconcerned by how they appear to create different classical histories at large scale.
And yes it is experimentally validated. This is the theory that everyone accepts in the lab, even as larger scales of experiment continue to progress.
But some people have difficulty believing/visualizing that it continues to work at larger scales. Despite no scale limitation in the theory, no scale-related violations ever suggested experimentally, and the strong likelihood that scale limitations, if they existed, would produce new physics in at-scale observations of our cosmos.
Pour water down a hill. Water clings to water, and we have hills that already have lots of correlations. We get streams that break up into multiple streams.
How did one stream end up where it is? It seems like a good question, but it is circular. The stream is defined by where it is. You are here (in some circumstance), because the version of you in this circumstance is you.
A transporter accident that creates several versions of you, on several planets with different colors, doesn't need to explain to each version how it ended up on a planet with its color. Even if, for a particular copy, it seems like there should be an answer to why they showed up on a planet of one specific color. The "why" is just: all paths were taken.
Maybe what I meant was this: if I perform a quantum experiment where the spin measurement of an electron could be spin up or spin down, the future me would end up in one of two branches: I measure spin up, or I measure spin down. There wouldn't be any possible world where I measure a superposition of spin up and spin down, because such a state is going to decohere rapidly. This makes sense. What I'm unable to grasp is that even though the wave function of the universe contains both branches, "I" somehow experience only one of the two branches.
The answer to that, I guess, is that if the two branches are nearly orthogonal, they will merrily evolve independently of each other. But somehow "I" experience only one of them.
Sorry for the rambling. I’m not able to articulate what I don’t understand.
> The future me would end up in one of two branches: I measure spin up, or I measure spin down.
The future "you"s would see spin up and spin down, respectively.
We are just as quantum as what we measure. There isn't a scale where entanglement and superposition turn into something else. No classical vs. quantum atoms.
Just as an up-spin qubit touching an up/down-superposed qubit results in an up-up pair in superposition with an up-down pair, conserving the qubit, when we touch a qubit we get "us"-up and "us"-down versions.
No information is created. None is destroyed. We experience a correlation = "collapse" (both versions of us), but the quantum information just continues on as before, qubit conserved.
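A toy sketch of that claim (my own illustration, not from the thread): model the "touch" as a CNOT-style interaction in which a second qubit copies the first qubit's basis state. Superposition becomes entanglement; the total amplitude is conserved and neither branch is removed.

```python
from math import sqrt

def cnot(state):
    """CNOT with the measured qubit as control, observer qubit as target.

    A state is a dict mapping basis strings like '10' to amplitudes."""
    out = {}
    for basis, amp in state.items():
        system, observer = basis
        observer = str(int(observer) ^ int(system))  # observer flips iff system is 1
        out[system + observer] = out.get(system + observer, 0) + amp
    return out

# Measured qubit in (|0> + |1>)/sqrt(2), observer qubit ready in |0>.
before = {"00": 1 / sqrt(2), "10": 1 / sqrt(2)}
after = cnot(before)

print(sorted(after))                                        # ['00', '11'] -- now entangled
print(round(sum(abs(a) ** 2 for a in after.values()), 12))  # 1.0 -- nothing created or destroyed
```

Each branch of `after` contains an observer who saw one definite outcome, but both terms are still present; the "collapse" is just the correlation.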
The writing of Chalmers and its consequences have been a catastrophe for philosophy.
The hard problem is that there is such a feeling at all.
In essence, you're asking why there's an inside to being a self-modeling system. But "inside" isn't something extraneous, something additional -- rather, it's what "self-modeling" means.
Really the "hard problem" has a very easy answer, but it's a physical/functional answer, and dualists and obscurantists simply don't like it.
I mean, there is a credible first-person answer to that question of yours, which each man can answer for himself.
But considered more seriously, the "hard problem" is an artifact of treating experience as a separate thing that needs to be generated. If you accept that self-modeling systems bounded in space and time exist, you've already accepted that experience exists -- because experience is what such a system is, from the inside. There's no second step where experience gets added. The question "why is there experience?" is exactly akin to "Why is there an interior to four walls and a roof?" The interior isn't a separate thing; it's necessarily constitutive.
Consciousness is something else. It is tempting for humans to pair mysteries up, pyramids and aliens, or whatever. But there isn't any factual basis for linking the experience of self-awareness with quantum mechanics.
Is there a factual reason we know digital minds couldn't be conscious, where quantum effects have been isolated as necessary to the operations of mental activity? That seems like a premature constraint to assume.
Is it falsifiable?
If you have a theory that seems unassailable by any logic, that's a good signal it is tautological and not very useful.
So two entangled versions of you follow, one entangled with each state. (Actually, as many quantum versions of you as touched the qubit, times two.)
Which is what happens, as we know from experiment, when any one qubit interacts with another independent qubit. We get the product of entangled states, each now correlated. But the different entangled states are now in superposition with each other.
So correlation/entanglement happens and is experienced, despite no collapse of superposition. No information was destroyed or created.
Each of you thinks, wow now the qubit only has one state. But that is because there are two versions of you, correlated respectively with the two uncollapsed qubit states.
Complete conservation. That is the "experience" of collapse that needs no explanation, because it is a predicted experience not requiring an actual collapse. Just as spherical Earth models don't need a special explanation for the appearance of locally flat Earth, because spherical models predict a local flat Earth experience.
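To make that concrete, here is a toy sketch (my own illustration, not from the thread): after the "you" qubit entangles with the measured qubit, tracing out the measured qubit leaves "you" in a classical-looking 50/50 mixture. That is the predicted experience of a definite outcome, with the full superposition still intact.

```python
from math import sqrt

# Joint state after the interaction: (|up, me-saw-up> + |down, me-saw-down>)/sqrt(2)
amps = {"00": 1 / sqrt(2), "11": 1 / sqrt(2)}

def reduced_observer(amps):
    """Partial trace over the first (measured) qubit.

    Returns the observer's 2x2 density matrix rho[i][j]."""
    rho = [[0.0, 0.0], [0.0, 0.0]]
    for b1, a1 in amps.items():
        for b2, a2 in amps.items():
            if b1[0] == b2[0]:  # only terms where the traced-out qubit matches survive
                rho[int(b1[1])][int(b2[1])] += a1 * a2
    return rho

rho = reduced_observer(amps)
print([[round(x, 12) for x in row] for row in rho])  # [[0.5, 0.0], [0.0, 0.5]]
```

The off-diagonal (interference) terms vanish because the two branches disagree about the traced-out qubit, so locally the observer's statistics are indistinguishable from an ordinary coin flip, even though nothing collapsed.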
Zurek’s Decoherence and Quantum Darwinism is thought-provoking, but it’s still speculation without broad buy-in from researchers. We might need ASI to crack these mysteries — our brains weren’t built for this kind of problem.
Maybe ASI can help design those experiments. Until it can, it will just be another voice arguing for one position over another on pretty weak arguments. Right now my money would be more on human researchers finding those experiments, though even among them, few are even trying.
Nothing is a particle; all measured things are a probability that we make a certainty when we measure them.
When you stop looking at things as things, but instead, see them as probabilities, it will all make sense. My hand and the beer bottle I pick up are both probabilities. Since the mind cannot navigate the world based on probabilities it turns them into certainties.
Physical science is the only way we can perceive quantum science. There is no "collapse" outside of our brain's perception.
Induction had the earth at the center of the solar system and had the best calculations to predict where Mars was. Copernicus said the sun was at the center; the equations were simpler, but were worse at predicting the locations of the planets (until we figured out they moved in ellipses).
When we say "All swans are white, because I've never seen a black swan," it's probabilistically true. That is induction. If we found swans didn't have the gene to make black feathers, that would be deduction.
Deduction is probably the most true, if its premises are true. (But it is often 100% wrong.)
Induction is always semi-true.
Quantum mechanics seems to be in the stage of induction. Particles are like the earth at the center of the solar system. We need a Copernican revolution.