I’ve recently read another new popular book about quantum mechanics, Quantum Strangeness by George Greenstein. Before saying something about the book, I need to get something off my chest: what’s all this nonsense about Bell’s theorem and supposed non-locality?
If I go to the Scholarpedia entry for Bell’s theorem, I’m told that:
Bell’s theorem asserts that if certain predictions of quantum theory are correct then our world is non-local.
but I don’t see this at all. As far as I can tell, for all the experiments that come up in discussions of Bell’s theorem, if you do a local measurement you get a local result, and only if you do a non-local measurement can you get a non-local result. Yes, Bell’s theorem tells you that if you try and replace the extremely simple quantum mechanical description of a spin 1/2 degree of freedom by a vastly more complicated and ugly description, it’s going to have to be non-local. But why would you want to do that anyway?
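For concreteness, here is a minimal numerical sketch of the numbers at stake (standard textbook conventions, nothing beyond a two-qubit singlet state; my own illustration): the quantum spin correlations of the singlet reach the CHSH value 2√2, while Bell’s theorem caps any local hidden-variables replacement at 2.

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def spin(theta):
    """Spin observable along a direction at angle theta in the x-z plane."""
    return np.cos(theta) * sz + np.sin(theta) * sx

# Singlet state (|01> - |10>)/sqrt(2) in the basis |00>, |01>, |10>, |11>
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def E(ta, tb):
    """Correlation <psi| A(ta) x B(tb) |psi>; for the singlet this is -cos(ta - tb)."""
    return np.real(psi.conj() @ np.kron(spin(ta), spin(tb)) @ psi)

# CHSH combination at the standard optimal angles
a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))  # ~ 2.828 = 2*sqrt(2), above the local hidden-variable bound of 2
```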
The Greenstein book is short, the author’s very personal take on the usual Bell’s inequality story, which you can read about in many other places in great detail. What I like about the book though is the last part, in which the author has, at 11 am on Friday, July 10, 2015, an “Epiphany”. He realizes that his problem is that he had not been keeping separate two distinct things: the quantum mechanical description of a system, and the every-day description of physical objects in terms of approximate classical notions.
“How can a thing be in two places at once?” I had asked – but buried within that question is an assumption, the assumption that a thing can be in one place at once. That is an example of doublethink, of importing into the world of quantum mechanics our normal conception of reality – for the location of an object is a hidden variable, a property of the object … and the new science of experimental metaphysics has taught us that hidden variables do not exist.
I think here Greenstein does an excellent job of pointing to the main source of confusion in “interpretations” of quantum mechanics. Given a simple QM system (say a fixed spin 1/2 degree of freedom, a vector in C2), people want to argue about the relation of the QM state of the system to measurement results which can be expressed in classical terms (does the system move one way or the other in a classical magnetic field?). But there is no relation at all between the two things until you couple your simple QM system to another (hugely complicated) system (the measurement device + environment). You will only get non-locality if you couple to a non-local such system. The interesting discussion generated by an earlier posting left me increasingly suspicious that the mystery of how probability comes into things is much like the “mystery” of non-locality in the Bell’s inequality experiment. Probability comes in because you only have a probabilistic (density matrix) description of the measurement device + environment.
For some other QM related links:
- Arnold Neumaier has posted a newer article about his “thermal interpretation” of quantum mechanics. He also has another interesting preprint, relating quantum mechanics to what he calls “coherent spaces”.
- Philip Ball at Quanta magazine explains a recent experiment that demonstrates some of the subtleties that occur in the quantum mechanical description of a transition between energy eigenstates (as opposed to the unrealistic cartoon of a “quantum jump”).
- There’s a relatively new John Bell Institute for the Foundations of Physics. I fear though that the kinds of “foundations” of interest to the organizers seem rather orthogonal to the “foundations” that most interest me.
- If you are really sympathetic to Einstein’s objections to quantum mechanics, and you have a lot of excess cash, you could bid tomorrow at Christie’s for some of Einstein’s letters on the topic, for instance this one.
That’s because Bell uses a notion of “locality” that very few people with a background in GR/QFT can relate to. It’s sometimes called “Bell locality” more specifically, and one can reasonably question whether it has anything to do with “locality” in the sense we’re used to. You can have eternal arguments about this with philosophers but it’s not particularly enlightening. It’s just a definition.
Bell proved non-locality for hidden variable theories of quantum mechanics – not in a general formulation. But many believe that it is generally true (many including Bell). The introduction of this paper seems to me a clearly written description:
https://arxiv.org/pdf/quant-ph/0408105.pdf
Peter, by a “non-local measurement” do you mean that there is another observer B far away from you, and you regard his action as a quantum mechanical “measurement”?
A way to get rid of the spooky action at a distance is to assume just one observer, A. He performs measurements with his eyes and ears. Everything else, including all other humans, is either a part of the measuring apparatus which A utilizes, or a part of the physical system under study.
If A and B measure the spins of a pair of entangled particles, then the only real quantum mechanical “measurements” are the events when A reads the gauge of his own spin measuring apparatus, and when A reads an email sent by B.
That quantum theory is really nonlocal can best be seen by reading Bell’s article “The Theory of Local Beables”. It is by no means a question of strange definitions or philosophical opinions.
Dear Peter,
The Scholarpedia entry is basically right on this, even though “our world” is too strong. Replace “our world” by “all theories that fulfill certain seemingly reasonable assumptions”.
> only if you do a non-local measurement can you get a non-local result.
“Non-local measurement” = measurement on a system in a spatially entangled state at space-like distance. “Non-local result” = even though the results are random, they are correlated. QM describes no common cause for this correlation –> QM describes its establishment as a non-local influence.
> Yes, Bell’s theorem tells you that if you try and replace the extremely simple quantum mechanical description of a spin 1/2 degree of freedom by a vastly more complicated and ugly description, it’s going to have to be non-local. But why would you want to do that anyway?
To avoid the above conclusion (the –>) by providing a common cause for the correlation. Under certain seemingly reasonable assumptions the Bell theorem shows that every theory that provides a common cause for the correlations cannot reproduce QM.
In my experience (in particular from my interactions with our local Bohmians), this insistence on “non-locality” comes from a prejudice about how classical the world should be in the form of “realism”, i.e. the assumption that a system has to have a state that even specifies properties that are not only not measured but in fact cannot be measured (as they are incompatible with the properties that are in fact measured). For example assuming that the x-component has some (unknown) value if in fact the z-component is measured. Violation of the Bell inequality only implies that not both locality and realism can hold in quantum theory, and it is your choice which to drop.
My personal choice is to maintain locality, as this is the foundation of QFT (or field theory in general, which was invented to have a local field equation rather than a non-local force law as in Newtonian gravity or electrostatics), and Haag’s book “Local quantum physics” makes this point most prominently.
For this point of view applied to foundations, my recommendation is to watch the video of Sidney Coleman’s colloquium “Quantum Mechanics In Your Face” or the exchange that Reinhard Werner had with the Bohmian people that is well documented on the unterwebs.
The story of Bell’s inequality has subtleties that are often overlooked in popular retellings. But the most obvious point concerns the status of the wavefunction or state vector in quantum mechanics. ‘Spooky action at a distance’ is only a problem if it is assumed that the wavefunction represents the real physical state of a real physical quantum system. In this interpretation there are things that the theory doesn’t appear to account for – the theory is incomplete. This was Einstein’s view. One way of completing the theory is to assume that the wavefunction is statistical in nature, governed by the behaviour of some underlying hidden variables. A certain choice of variables in principle allows all quantum events to be local, leading to Bell’s inequality. Another choice is crypto-nonlocal, leading to Leggett’s inequality. What these inequalities show is that no local or crypto-nonlocal theory can accurately predict all the results of regular quantum mechanics. And experiment is pretty unequivocal – these hidden variable theories can’t be right.
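To make the hidden-variable side of this concrete, here is a toy sketch (my own illustration, not any particular published model — the response functions are just one simple choice): each outcome depends only on the local detector setting and a shared hidden variable lam, and the CHSH combination then saturates, but never exceeds, the local bound of 2 — compare the quantum value 2√2.

```python
import numpy as np

rng = np.random.default_rng(0)
lam = rng.uniform(0.0, 2.0 * np.pi, 200_000)   # shared hidden variable, one per run

def A(theta):
    # Alice's outcome: deterministic, depends only on HER setting and lam
    return np.sign(np.cos(theta - lam))

def B(theta):
    # Bob's outcome: deterministic, depends only on HIS setting and lam
    return -np.sign(np.cos(theta - lam))

def E(ta, tb):
    return np.mean(A(ta) * B(tb))

a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))  # ~ 2: this (or any other) local model never beats the CHSH bound
```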
If you want to press on and insist the wavefunction is real, then you face a choice between unpalatable evils, such as de Broglie-Bohm theory (in which spooky action at a distance is accepted as part of the representation), ad hoc physical collapse mechanisms (such as GRW), consciousness-causes-collapse mechanisms (Von Neumann and Wigner), and the many worlds interpretation.
Of course, you could instead assume that the wavefunction does not represent the real physical state, but rather codes for our knowledge of the system based on experience. Then all the bizarre, spooky stuff goes away, and there is no non-locality. But by making this trade-off, we lose any ability to understand what’s *really* going on at the quantum level. Like emergency services personnel at the scene of a tragic accident, such anti-realist interpretations advise us to ‘move along’, because there’s ‘nothing to see here’.
I used to favour Einstein’s realism. But the experiments ruling out especially crypto-nonlocal hidden variable theories caused me to re-think. Like the great philosopher Han Solo, I’ve got a very bad feeling about this.
Peter,
“for all the experiments that come up in discussions of Bell’s theorem, if you do a local measurement you get a local result, and only if you do a non-local measurement can you get a non-local result.”
There are usually two main reasons people are bothered by Bell’s theorem. The first is that it excludes “local realism”, so if you want to keep locality you have to give up realism, which some people have trouble doing. The second (and IMO more important) is that the violation of Bell’s inequalities requires one to give up the metaphysical idea of reductionism — studying the parts of a physical system does not tell you everything there is to know about it, or in other words, the whole is more than a sum of its parts. Apparently people have trouble giving up reductionism as much as realism, so they are all “baffled” by nonlocality.
Bee,
“one can reasonably question whether it has anything to do with “locality” in the sense we’re used to.”
If you look at the dBB interpretation of QM, there is an explicit nonlocal interaction term in the Hamiltonian, whose sole purpose is to make the classical EoMs give predictions which are equivalent to QM predictions (including the violation of Bell’s inequalities). Thus, one can argue that Bell’s nonlocality can be seen as a form of nonlocal interaction, and cast into a usual language of a nonlocal Lagrangian.
Best, 🙂
Marko
Bell non-locality is a subset of dBB non-locality. Bell locality is restricted to correlation only. dBB explicitly violates relativity by transferring information faster than light, though no observer can access this to manipulate it to alter the past. Philosophers who don’t make this distinction are no philosophers. But even physicists throw the word nonlocal around too loosely.
Sort of along the lines of what Marko says above, I thought the problem (forgive me if I screw up) is that, given QM accurately predicts probabilities of outcomes at spacelike separation that violate Bell’s Inequality, giving up “local realism” is obligatory.
The problem is further exacerbated by lack of agreement about whether or not “local realism” is a valid requirement for a sensible interpretation to begin with. Maybe it’s not universally clear what “local realism” is even supposed to mean.
Seems completely hopeless, if you ask me. “Shut up and calculate” wins again.
I’m realizing that one thing that I really dislike about this subject is the hijacking of words (e.g. “realism” and “locality”) to suit a particular agenda, the agenda that the fundamental QM formalism must be replaced because of the trickiness of relating this formalism to the classical formalism and our everyday intuitions. The words “realism” and “locality” get defined in a way designed to make QM non-“realistic” and non-“local” and thus problematic.
I think Sabine Hossenfelder is right, that arguing about the definition of “locality” will lead nowhere. To just make a bit clearer what I mean when I say Bell’s inequality experiments are “non-local”, take a look at the diagram of an example of such an experiment on Wikipedia
https://en.wikipedia.org/wiki/File:Bell-test-photon-analyer.png
What I’m referring to is just that one side or the other of this experimental set-up could be thought of as “local”, while putting the two sides together as one measurement apparatus is very “non-local”. I don’t see the lesson as “reality is non-local”; I see it as just a reflection of the measurement being a non-local physical process.
About “realism”, I just don’t see why it’s a good idea to define “real” as “my human-scale classical model of the things I perceive” vs. “the best model we have from science of the physical world”. My ontological commitments are based on what I know of science, which tells me the QM description of “physical reality” is the best-founded one. When Dr. Johnson refuted idealism by kicking a rock, the reality of the rock is best thought of not as “massive object with these coordinates”, but “complicated many-body quantum system”.
What would a non-local result to a local measurement look like?
All of these arguments for non-locality are based on some sort of philosophical argument that the world must be described by hidden variables (like a classical theory), and then applying Bell’s theorem and the Bell test experiments.
Sometimes the philosophical argument is based on analogies to classical mechanics, or on beables, or on reductionism, or on completeness. These are all just word games to disguise Bell’s local hidden variable assumption. If you drop that assumption, then quantum mechanics is a local theory.
Peter,
“I don’t see the lesson as “reality is non-local”; I see it as just a reflection of the measurement being a non-local physical process.”
The only way to test whether reality is local or non-local is to perform a non-local measurement, since a local measurement is not sensitive to non-local effects. The fact that the experimental setup is non-local should therefore not be surprising. What is surprising is the *outcome* of that measurement, which suggests that reality is also non-local. This outcome is *not* an automatic consequence of the fact that the experimental setup is non-local, since in principle one could have also obtained a different result.
The definitions of locality and realism in Bell’s theorem are not “hijacked” IMO, rather they are merely reformulations of what we traditionally and intuitively regard as locality and realism.
Informally speaking, locality is the statement that choices you make here and now are not correlated with the outcomes of measurements performed outside your past and future light cones. Realism is the statement that a physical system has a well-defined value of an observable even before one measures it. While there are all sorts of issues with the meanings of the words like “choice” and “measurement”, I don’t really see that these definitions are hijacking any otherwise traditional meaning of the notions of realism and locality.
🙂
Marko
Dear Peter,
I believe your confusion is caused by the existence of several different versions of Bell’s theorem, which often gets people talking past each other. You are right if you are talking about Bell’s 1964 version (or rather CHSH’s 1969 version of Bell’s 1964 version), and Scholarpedia is right if they’re talking about Bell’s 1975 version.
I wrote about it here, maybe that can clear up something.
The root problem is that the word “locality” is rather overused, and people mean different things by it.
Marko,
“Realism is the statement that a physical system has a well-defined value of an observable even before one measures it.”
I’m afraid that’s exactly what I mean by a hijacked definition, constructed specifically to generate a conflict with QM, a fundamental principle of which is that systems don’t have simultaneous well-defined values for non-commuting observables. I don’t at all see why I need to accept that definition. Looking at an “authoritative” source like Wikipedia
https://en.wikipedia.org/wiki/Philosophical_realism
shows a wide range of definitions of “realism”, with even the one about QM not the one you give. Of those, I would think the most relevant to scientists would be “scientific realism”, described as
“the world described by science is the real world, as it is, independent of what we might take it to be”
In this context, “science” (e.g. QM) is our best model of the real world, and the state of a physical system as given by QM is our best version of what “reality” is.
Mateus Araújo,
Thanks for the references. The problem though is that I continue to not see why there’s anything surprising or problematic about the phenomena Bell is discussing, and why they are supposed to conflict with my understanding of reality or locality. So, it’s hard to get motivated to slog through long discussions of different versions of these claims.
So how much value does the fact that interesting physics was discussed in it add to one of Einstein’s letters, and how much are you paying just for a letter in his handwriting?
Peter,
Unfortunately, what we can call the “John Bell camp” in the debate over the meaning of Bell Inequalities (because John Bell is in that camp) has been habitually misunderstood. For one, this camp is *not* motivated by a desire to impose classical categories (like particle position and momentum) on quantum objects. This camp is just fine with the idea that quantum objects can have different attributes that do not appear anywhere in classical physics–entirely new attributes subject to an entirely new dynamics as well. The sui generis-ness of quantum mechanics is a red herring in this debate.
Bell’s critics take a wrong turn at the very beginning. They make what is almost a category error: explaining how quantum objects defy certain classical assumptions (sometimes “realism,” sometimes “determinism,” sometimes “counter-factual definiteness”, etc.) and so the quantum objects are able to evade Bell’s reductio ad non-localitum. The category error is that Bell’s argument does not mention, does not rely on, does not implicitly refer to objects or theories being “quantum” vs. “non-quantum,” or “realistic” vs. “non-realistic” (whatever that might mean).
The first thing you have to do to grasp the meaning of Bell Inequalities is to forget the word “quantum,” and follow Bell as he imagines a very generic kind of system–totally agnostic as to whether its underlying dynamics is quantum, classical, stringy, wiccan, vegan, whatever. We only assume that the system has a few generic properties, like that you can identify two sub-systems at distinct locations A and B, that each sub-system has some kind of device with a binary read-out you can see (e.g., a screen flashes or it doesn’t flash), and that each device has some kind of two-setting gadget you can set.
Now Bell makes one critical assumption–the most obvious, straightforward implication of locality in terms of conditional probability distributions for the flashes on the A side given (i) the settings on both sides and (ii) the flashes on the B side. From this assumption, you go straight to the Bell Inequality. Violating the inequality means that the setting on one side somehow influenced the flash on the other side–a manifestly non-local effect, if the experiment is set up right.
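In symbols (a standard textbook formulation — λ and ρ(λ) are the usual generic names, not notation taken from any specific paper of Bell’s):

```latex
% Bell's locality assumption (factorizability): conditioned on a complete
% specification \lambda of the shared past, the outcome probabilities on
% each side depend only on the local setting:
P(A, B \mid a, b, \lambda) \;=\; P(A \mid a, \lambda)\, P(B \mid b, \lambda)

% Averaging over any distribution \rho(\lambda), with correlations
E(a,b) \;=\; \int \! d\lambda \,\rho(\lambda) \sum_{A,B=\pm 1} A\,B\; P(A, B \mid a, b, \lambda),

% this assumption alone implies the CHSH form of the Bell inequality:
\bigl| E(a,b) + E(a,b') + E(a',b) - E(a',b') \bigr| \;\le\; 2 .
```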
One can imagine someone claiming that even though quantum systems violate this locality condition, they do not in fact exhibit non-local influences, because of some really subtle way quantum mechanics undermines the meaningfulness of Bell’s formulation of locality. Fine. Then do that; make that claim; spell out *how* that could be the case; don’t just insinuate it. No one ever spells it out, no matter how much background they have with QFT/GR. An august tradition started by Bohr, the patron saint of not spelling it out.
Peter,
With regards to reality, I’m with you. It makes my blood boil to see people continuously trying to reduce “realism” to determinism, as Marko is doing. This is a hijacked definition. What should we conclude then, that the world is not real? Bell’s theorem proves that solipsism is true? Come on.
As for locality, though, I disagree. I think Bell’s definition of locality is a compelling one (that the probability of an event can only depend on events in its past light cone), and his demonstration that it fails for quantum mechanics is both surprising and problematic.
Since quantum mechanics was created, people have noticed that there is a problem with locality, because of the wavefunction collapse. Most famously we have Einstein’s 1927 and 1935 arguments. Now if you buy Einstein 1935, that we either have hidden variables or nonlocality, then Bell 1964 is a proof of nonlocality, as it shows that hidden variables cannot do the job. The argument is still messy though, not least because of the three decades in between, and that’s why I like Bell 1975 so much: self-contained, clear, and concise.
Peter,
I wasn’t trying to give a rigorous definition of realism, or debate wikipedia, I just wanted to convey the idea, informally. Let me put it this way — how do you distinguish between the “real world” (which you observe while awake) and the “dream world” (which you observe while asleep and dreaming)? Assuming that everything you can know about either world is limited to what you can observe (and side-stepping 5000 years of philosophy on the topic), the only conclusion is that the real world cannot really be operationally distinguished from the dream world.
So if you want the real world to be “really real”, you have to subscribe to a metaphysical postulate that the real world, by definition, has at least some properties that are *independent* of the fact that you are observing them (otherwise you run into solipsism, as Mateus noted). And this is basically the same as the statement of realism I formulated previously.
Note that this definition does not require *all* observables to be simultaneously well-defined, so there is no problem with noncommutativity of observables. Also note that I am not really advocating for realism, I’m just explaining how I understand it.
Finally, claiming that “reality” is whatever your best theoretical description says it is — runs the risk of confusing the map with the territory. Our formal descriptions may change, while “reality” doesn’t. For example, a century ago you could say that QM defines reality, but today we know that QM is wrong (particle creation and stuff), and that instead it is QFT which defines reality. And it is likely that QFT will also be eventually substituted with QG of some sort or whatever else. Our descriptions of the real world are epistemological, while the real world itself is (arguably) an ontological notion.
Mateus,
I’m confused, why do you say that realism is the same as determinism? The way I see it, realism is a statement about objectivity of existence of stuff, while determinism is a statement about its evolution (or predictability, or computability, or however you want to frame it). I don’t see the two being equivalent in any way, since in principle one can think of a model which obeys realism but not determinism.
🙂
Marko
Mateus Araújo,
I guess what I’m not understanding is how to reconcile
1. the microcausality of relativistic QFT, so presumably of the SM, our best model of physical reality
2. claims that observed violations of Bell’s inequality imply influence outside the light cone.
This would seem to imply that while LHC physicists are desperately and fruitlessly looking for violations of the SM, atomic physics experimentalists are seeing huge violations all the time. This doesn’t make sense. The problem has to be that a notion of “measurement” is being invoked that is in violation of the idea that a measurement apparatus is just another physical system. I understand that one possibility is that “measurement” does involve new physics, but I’ve never seen a plausible conjecture for what this new physics is, and why it is mysteriously inaccessible to usual modeling + experimental study, given that it is supposedly happening in so many experimental situations.
Peter,
The key assumption you’re making is that (R)QFT/SM is locally causal. In Bell’s sense, the SM is *not* locally causal.
The commutation of space-like separated field operators in the SM is something akin to local causality, but it is a weaker condition. This commutation-locality condition shows that we cannot *controllably* send faster-than-light signals. But commutation-locality does not preclude the existence of certain faster-than-light influences, which are not controllable by us.
The situation here with space-like separated field operators is just like the spin operators for the two particles in a Bell experiment. What Bell has shown is that we can make certain kinds of repeated measurements of pairs of spin operators (one for the particle on each side of the experiment), such that the two operators commute with each other in each measurement (because they are operators corresponding to two different particles), but nonetheless the total record of repeated measurements will statistically imply a faster-than-light causal influence between the sides. The nature of this influence just does not enable an experimenter on one side to *control* the measurement results on the other side, hence to send a signal.
What this shows is that the standard notion of locality in QFT (vanishing commutators) is really just a necessary but not a sufficient condition for locality. The SM, just like non-relativistic QM, is fundamentally non-local–as Bell showed it must be to reproduce Bell experiment results described by non-relativistic QM.
Eric Dennis,
I’m surprised to hear that the last few decades when I thought I was studying local quantum field theory, I really was studying non-local quantum field theory…
OK, fine, you don’t like the term “realism” as it means something else in other contexts. Call it something else; “definitism” seems to be freely available. What I mean is the assumption that it makes sense to argue about the value of something that is inherently unmeasurable, like giving it a hypothetical value or talking about the probability of it having a value.
In the case of the original Bell paper, the original sin is having P(a, b) (where a and b are vectors denoting the orientations of the detectors) and P(a, c) in one equation, where b and c describe incompatible measurements.
Or put more abstractly: For a non-commutative algebra of observables, Bell’s inequality (or better: its violation) shows that the space of states (the Bloch ball in the case of a qubit) is not a space of probability distributions on some underlying set, as it is not a simplex (in a simplex, each state decomposes uniquely into pure states in which all events have probability either 0 or 1).
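Spelled out a little more (a standard paraphrase of the qubit picture, in my own notation):

```latex
% Qubit states form the Bloch ball:
\rho = \tfrac{1}{2}\bigl(\mathbb{1} + \vec{r}\cdot\vec{\sigma}\bigr),
\qquad |\vec{r}| \le 1 .
% A simplex admits a unique decomposition of each point into extreme points;
% in the ball, a mixed state such as \rho = \mathbb{1}/2 decomposes into pure
% states in infinitely many ways, so qubit states cannot be identified with
% probability distributions on a single underlying sample space.
```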
Speaking of definitions of terms: I think for these kinds of discussions it is important to use a definition of “locality” that does not render the classical theory “non-local”. Of course, even classically, there can be (probabilistic) states that have correlations, and thus even there states are global objects. Reading about the lottery numbers in one newspaper does not “uncontrollably influence” which numbers are printed in all the other newspapers. It’s just a correlation.
I strongly believe that everything is fine as long as one sticks to statements that actually have observational meaning (thus excluding equations containing both P(a, b) and P(a, c), which are not only counterfactual, since nobody actually did both observations, but have to be, since it is impossible to measure both), and as long as the only thing worth worrying about is whether doing stuff here can instantaneously influence outcomes there (no, it cannot, even in a quantum world; it could only have influenced a measurement that is impossible to do). The only thing that has to go is the prejudice that all states are described by probability distributions (over what I would call “classical” outcomes in the 0/1 sense above, of assertions that are either true or false).
Finally, with respect to reductionism: I think the strangeness of the quantum world is that the opposite is true. If you have full information about the whole (i.e. it is in a pure state described by a vector in Hilbert space rather than a density matrix — or, if you don’t like that language, you can equivalently say that you cannot gain more information by decomposing the state into a mixture of other states), entanglement means that you don’t necessarily have full information about its parts: even though the state of the whole is pure, the (marginalised) states of the parts can be mixed. Said more colloquially: knowing everything about the whole does not mean you know everything about its parts (whereas the usual criticism of reductionism is that knowing everything about the parts does not necessarily tell you everything about the whole).
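As a quick check of this point (a generic numpy sketch, standard conventions, not tied to any interpretation): take a Bell state, verify that the whole is pure, trace out the second qubit, and find that the part is maximally mixed.

```python
import numpy as np

# Bell state (|00> + |11>)/sqrt(2): the whole two-qubit system is in a pure state
psi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())            # 4x4 density matrix of the whole

# Partial trace over the second qubit gives the (marginalised) state of the first
rho_A = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)

print(np.allclose(rho @ rho, rho))   # True: the whole is pure (rho^2 = rho)
print(rho_A)                         # [[0.5, 0], [0, 0.5]]: the part is maximally mixed
```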
Marko,
What I’m saying is that your definition of “realism” reduces it to determinism. Since you agree that these are two different concepts that shouldn’t be mixed, you should think again about using this as your definition of realism: “Realism is the statement that a physical system has a well-defined value of an observable even before one measures it.”
Peter,
That’s precisely why we have a problem. Apparently both 1 and 2 are true and contradict each other. But if you are asking about the solution of the problem, there are hundreds of opinions.
I can offer my own: note that Lorentz-covariance of QFT only holds for the unitary part of the theory, as measurements (with wavefunction collapse) obviously break Lorentz-covariance. Note also that Bell’s observation that quantum mechanics is nonlocal (in the sense that the probability of an event depends on events outside its past light cone) depends crucially on making measurements and collapsing wavefunctions. This has two consequences: if you accept the standard description of measurement as valid, then there is no contradiction between 1 and 2, as QFT/SM breaks relativity anyway. On the other hand, if you accept that wavefunction collapse is not real, there is again no contradiction between 1 and 2, as Bell nonlocality does not exist.
Mateus Araújo ,
“if you accept that wavefunction collapse is not real, there is again no contradiction between 1 and 2, as Bell nonlocality does not exist.”
Since my point of view is that “wavefunction collapse” is just a crude approximation to what actually happens in a measurement process — that you really need to properly study the physics of the situation, treating the “measurement apparatus” + environment as a quantum mechanical (not classical) system — presumably the “nonlocality” disappears if you do this.
Peter,
“I’m surprised to hear that the last few decades when I thought I was studying local quantum field theory, I really was studying non-local quantum field theory…”
Would it be of any help to see an explicit calculation of the violation of Bell’s inequalities within the formalism of QFT? If yes, you may want to take a look at arXiv:1309.2059. They do not discuss what is local or nonlocal (their emphasis is on velocity-dependence), but an explicit QFT calculation may nevertheless shed some light on it?
Mateus,
“What I’m saying is that your definition of “realism” reduces it to determinism.”
Care to elaborate how this reduction happens? It isn’t obvious (to me, at least). While I believe that one could arguably bend over backwards and twist the meaning of the word “determinism” enough to fit your statement, I don’t think it is true for the determinism in the usual sense of, say, Laplace’s demon.
Best, 🙂
Marko
Forgive what might be some silly questions. I’m trying to get to the heart of the matter expediently using (I know, I know) words.
There are clearly a wide range of reactions to the fact that, in Bell-type experiments, nature violates Bell’s (or CHSH’s, or whoever’s) inequality in a particular way. At the extremes are “Yeah? So?” and “GiH! Spukhafte Fernwirkung!” The debate between the two often looks to me like an intervention, with people yelling “I don’t have a problem! YOU’RE the one with the problem!” back and forth.
So great, (literally) adjust your expectations for “reality”. What should leave us perplexed, then, is not the correlation between quantum coin flips, but classical ones.
Is one therefore sweeping the “mystery” under the “emergence” rug and blaming evolution for our obtuseness, per usual?
Peter/Mateus,
It’s not helpful to talk about QM (or QFT) here apart from some account of measurement, essentially bringing us back to 1925. The whole problem is that any theory that can account for actual measurement results (including Bell experiments) is, by Bell’s reasoning, necessarily non-local. Indeed any particular such theory ever conceived (wavefunction collapse, hidden variables, spontaneous collapse, etc.) is manifestly non-local, whether in the context of simple QM or QFT.
It simply won’t do to ignore Bell’s result and say “Well, I don’t believe in any of those theories, they’re at best approximations, so there’s no problem here.”
I realize it is jarring to call Relativistic QFT (inclusive of measurements) non-local. First, this is not my idiosyncratic view. It is Bell’s own view. See his paper “The Theory of Local Beables”:
https://cds.cern.ch/record/980036/files/197508125.pdf
And it’s a view that has been gaining popularity more recently among people in QM foundations. Second, imagine that the field of QM foundations, for most of the period from the 1920’s onward, had been in a somewhat broken state, similar to the current state of string theory (viewed as a TOE).
There was never anything wrong with physicists arriving at something like the collapse view and simply confessing that such a thing didn’t make much sense when taken literally, but we’ll just leave it here as a placeholder until people figure out what’s really going on. Unfortunately that’s not what happened in the 1920/30’s. What happened were megalomaniacal pronouncements about QM constituting a philosophical revolution in our understanding of the relationship between existence and consciousness. We all know the embarrassing quotes from Bohr, Heisenberg, Born, Jordan, etc.
When careful thinkers started poking around and asking questions (de Broglie, Einstein, Schrödinger, Bohm, Bell, Grete Hermann) they were dismissed as reactionaries, or simply misunderstood, not refuted. The early foundational mess crystallized inside the culture of physics and has been causing problems ever since. Given all of this, a pervasive misunderstanding about a somewhat subtle (and philosophically radioactive) distinction between the concepts of signalling locality and causal locality is not that shocking.
To my way of thinking, there are three basic levels of discourse: quantum Fact, quantum Theory and quantum Reality. The quantum Facts (experiments) are all local, both by actual experiment and by calculation. Quantum Theory seems to be non-local: when Alice makes a measurement, Bob’s distant wavefunction collapses instantaneously. What about quantum Reality?
Bell’s achievement is that not only did he focus on Reality, considered by most philosophers to be inaccessible to mind, but he actually proved something about the nature of Reality.
When Bob and Alice share a pair of entangled photons, each makes a measurement on a sequence of photons 1, 2, 3, …, N and gets certain results.
Consider one of these results, say result #36. Bell asks the question: what were the causes of Bob’s outcome for this single instance? What does nature need to know to produce this event? To answer this question one must consider, for event #36, not only the event that actually happened but all possible events for all possible Bob and Alice settings. All of these events except one are contrafactual, but nature must be ready to produce a definite result for any of these possible choices.
Bell then considers what might be happening at Bob’s detector to produce Bob’s event, making no assumptions about realism or determinism, but only that whatever causes nature uses to construct this outcome, the setting of Alice’s detector is not one of them. Bell then shows that any model of reality (of how single Bob events are produced) with this restriction cannot reproduce the quantum Facts.
The “realism” assumption to my way of thinking is the notion that it makes sense to consider not only the one result #36 that actually happened, but events# 36 that could have happened for different detector settings. How you feel about Bell’s Theorem then would depend on how you feel about this “contrafactual definiteness” assumption. Is it “classical”, “metaphysical” or simply one natural way to think about quantum causality?
To expand on Bee’s comment:
https://plato.stanford.edu/entries/bell-theorem/
“The principal condition used to derive Bell inequalities is a condition that may be called Bell locality, or factorizability. It is, roughly, the condition that any correlations between distant events be explicable in local terms, as due to states of affairs at the common source of the particles upon which the experiments are performed. See section 3.1 for a more careful statement.”
Section 3.1 is too long and dense to meaningfully excerpt here; as an appetizer: “The condition F of factorizability is the application, to the particular set-up of Bell-type experiments, of Bell’s Principle of Local Causality.”
https://plato.stanford.edu/entries/bell-theorem/#LocaCausAssu
vmarko,
I’m not convinced that the problem lies with the QFT vs. QM treatment of polarization states, I think it’s with the coupling to a “measurement apparatus”.
vmarko/Eric Dennis
What I keep trying to point out is that what is getting ignored here is the actual physics of what happens when you take a spin 1/2 particle and “measure spin up or down”, or “collapse the wavefunction”. You’re ignoring the actual physics and using an extremely crude, non-local approximation to this physics, and then saying that it implies fundamental physics is non-local.
Eric Dennis,
You accuse Bohr et al. of engaging in mystification based on the crude “collapse” model of measurement, and there was plenty of that back in the 20s and 30s. From my reading, there was also a lot of much more nuanced discussion from them; they were well aware that the problem was the difficult one of the emergence of classical behavior, but lacked the tools to analyze it, so had to set that problem aside. That was nearly a century ago. The problem I see now is that Bell and those following him are doing exactly what you accuse Bohr et al. of doing: making dramatic claims based on the problems with the obviously flawed “collapse” model.
Physics is essentially local. When people talk about non-locality in the context of the EPR experiment “it’s a matter of giving a dog a bad name …”, according to Gell-Mann.
https://www.youtube.com/watch?v=gNAw-xXCcM8
Peter,
Permit me to create a stylized version, for purposes of clarity, indicating the pattern of many, many exchanges between the Bell camp and its critics…
Bell: Here’s an extremely general theorem showing why *any* physical theory whatsoever that reproduces certain well-tested experimental results (specified in macro-level not micro-level terms) must be non-local.
Critic: Spacelike commutators vanish in QFT, so it’s local, so it must evade your theorem.
Bell: QFT is not local. Spacelike commutators vanish, but this only ensures signalling locality, not full causal locality. The exact equivalent of spacelike commutators also vanish in a Bell experiment. My very general theorem shows how, despite vanishing commutators, the choice of device setting at A must influence the same-time output at B to match the observed results.
Critic: Let’s not get too philosophical about defining locality.
Bell: OK, to be concrete, every specific, well-defined model ever conceived (collapse, hidden variables, etc.) that accounts for the measurement results is manifestly non-local. Exactly as expected from my extremely general theorem that doesn’t refer to or rely on any theory-specific detail (like collapse, hidden variables, etc.).
Critic: Aha! Your theorem assumes the old-fashioned idea of wavefunction collapse, which we all know isn’t rigorous or exact.
Eric Dennis,
This has degenerated into exactly what Sabine Hossenfelder warned would happen in the first comment, a useless argument over what “non-locality” means. I recommend the Gell-Mann video in the previous comment, where he makes essentially the same complaint I’m making about hijacking definitions in order to try and argue that there must be something wrong with QM.
Peter,
The collapse postulate is also present in QFT, only hidden inside the LSZ formula. But if you are against using the collapse postulate to describe measurements, then your main problem is neither non-locality nor Bell’s theorem, but rather the measurement problem of QM (of course, it remains a problem even if you do use the collapse postulate). In particular, you need some mechanism to explain why we observe a *single* outcome for every individual run of an experiment (say, a scattering process in LHC).
Nobody has a satisfactory answer to that, and I agree that the collapse postulate may look like an ugly phenomenological patch, rather than a fundamental part of a theory. But however you turn it, both QM and QFT are in this sense incomplete, and you need *some* additional postulate to define what a measurement is (or why we see single outcomes).
And in light of Bell’s theorem, I don’t believe that this additional postulate (however you choose to define it) is going to turn a non-local theory into a local one. You’ll be just “kicking the can” further down the formalism, so to say.
🙂
Marko
vmarko,
Yes, the measurement problem for me is the main problem. If you look at the Gell-Mann video mentioned above, his take on Bell’s “non-locality problem” is that it’s making the mistake of mixing things on “different branches”, so he seems to be saying that it’s the measurement problem.
As for the “why the single outcome” aspect of that problem, one thing that has always struck me is that we’re typically DEFINING a “measurement apparatus” as something that behaves in such a way as to produce a single classical outcome. But that problem I think is much more subtle (with discussion of it left to a separate occasion) than the “non-locality” problem here, which just seems to be a red herring.
Peter Woit wrote:
“I fear though that the kinds of “foundations” of interest to the organizers seem rather orthogonal to the “foundations” that most interest me.”
Why should that be something to fear? The world of physics is making very little progress at the moment (40 years). One thing is certain: if everyone got their way on what not to study, there would be nothing available to study.
Thomas Andersen,
That was a purely personal comment. I’m quite interested in the question of foundations of quantum theory so glad to see that a new group has formed to address the issue, just would be a lot happier if they seemed more interested in the questions that to me seem relevant. I don’t think there’s much danger at the moment of my views of what is interesting dominating the subject and driving out others…
Three comments.
First: quantum mechanics doesn’t just break classical probability theory (as Bell demonstrated); it breaks classical computational complexity theory and classical information theory as well. This is why there are a number of computer scientists who are convinced that quantum computers can’t possibly work.
Second: I think this quote by Feynman is very relevant to this discussion: “I am going to tell you what nature behaves like. If you will simply admit that maybe she does behave like this, you will find her a delightful, entrancing thing. Do not keep saying to yourself, if you can possibly avoid it, ‘But how can it be like that?’ because you will get ‘down the drain’, into a blind alley from which nobody has yet escaped. Nobody knows how it can be like that.” He said this in 1964; I think it’s still good advice today.
Third: Feynman didn’t actually follow his own advice. Around 1980, I saw him give a lecture at Caltech about negative probabilities, where he explained that his motivation was looking at all the hidden hypotheses of Bell’s Theorem to try to figure out whether any of them might be false. He later published a paper on negative probabilities, but it didn’t mention this motivation, presumably because he realized that negative probabilities couldn’t explain quantum weirdness.
Dear Peter,
I have made what I hope is a strong case for taking seriously what we might call the “John Bell point of view” in my recent book Einstein’s Unfinished Revolution, and find myself mostly in agreement with Eric Dennis and Marko. I would very briefly underline a few points:
-This is not a debate resting on confusions of words. Nor is there, to my knowledge, confusion among experts about the direct implications of the experimental results that test the Bell inequalities. The main non-trivial assumption leading to those inequalities is a statement that is usually labeled “Bell-locality”. Roughly this says (given the usual set up) that the “choice of device setting at A cannot influence the same-time output at B”.
Neither quantum mechanics nor classical mechanics is assumed. The experiments test “Bell-locality” and the experimental results are (after careful examination of loop-holes etc.) that the inequality is cleanly and convincingly violated in nature. Therefore “Bell-locality” is false in nature.
-The conclusion that “Bell-locality” is false in nature is an objective fact. It does not depend on what view you may hold on the ultimate correctness, completeness or incompleteness of QM. Bohmians, Copenhagenists, Everettians, etc. all come to the same conclusion.
-Nor does it matter if there are other senses of “local” in which nature or QFT is local, because these notions are independent of “Bell-locality”. Indeed, it is important to understand that “Bell-locality” is false in QM and QFT, as Bell showed directly. Therefore, the experimental result that “Bell-locality” is false in nature is an important confirmation of a prediction of QM and QFT.
-Now it is also true that Bell and a few others, for various and different reasons, held or hold a view that QM is incomplete, and that nature could be described more accurately by a different theory. Examples of such theories are dBB and collapse models. This has also nothing to do with confusion over the meaning of words. These are competing hypotheses about nature, to be settled ultimately by experiment.
For people in this camp, the falseness of “Bell-locality” is an important clue which constrains any such proposal for a completion of QM. But if you are not in this camp, and have no interest in the hypothesis that QM requires a completion to describe nature, you can just ignore the further implications.
Thanks,
Lee
Thanks for the clarifications Lee.
By the way, beside Lee’s new book, I can recommend the interview with him posted at Sabine Hossenfelder’s blog
http://backreaction.blogspot.com/2019/06/a-conversation-with-lee-smolin-about.html
Lee,
Well put! My view is that “Bell locality” is the one True locality, because what could possibly be a better definition than ‘no effects from outside the past lightcone’? But your line of attack here is probably more comprehensible to people with a conventional understanding of QFT.
Peter,
One natural impediment in coming at this stuff from the standpoint of conventional QFT is the scary question of what happens to QFT now. I think this fear has a really bad effect on people’s ability even to permit Bell’s analysis into the realm of Reasonable Views That I May Take Seriously.
Of course QFT isn’t just rendered invalid as a scientific theory. There’s just way too much ultra-precise and very diverse experimental confirmation. I can see only one natural explanation of this state of affairs: Lorentz invariance (more broadly, general covariance) is an emergent symmetry of nature at a particular energy scale, not a fundamental symmetry. This is controversial even within the Bell camp, however.
Eric Dennis,
I’m not worried about QFT. The Bell analysis clearly has nothing at all to do with QFT.
As for using this as a motivation for deciding that Lorentz symmetry is emergent, I don’t see that at all.
By the way, I’m developing a serious allergy to the endless claims now being heard that this or that about the SM or GR is not fundamental, but “emergent”, with no idea what it is “emergent” from. I do think classical behavior is properly described as “emergent”, but in that case, we know what it emerges from (QM). If you want to start claiming QM is “emergent” too, you need some compelling story about where/when it fails as a description of nature, and what is supposed to replace it.
My view is that “Bell locality” is the one True locality, because what could possibly be a better definition than ‘no effects from outside the past lightcone’?
Sorry, neither “Bell locality” nor its failure means “no effects from outside the past lightcone”. Whatever entangled wave-function a Bell-type experiment is measuring originated in the past lightcone of both measurement apparatuses.
Dear Lee Smolin,
I’m sorry, but what you have written is false.
To start with, you said that “Indeed, it is important to understand that “Bell-locality” is false in QM and QFT, as Bell showed directly.” It is important to notice that Bell’s direct demonstration that “Bell-locality” is false in QM and QFT depends crucially on the collapse of the wavefunction. As should be obvious, since unitary QFT is Lorentz-covariant.
Which brings us to the second point. You wrote that “The conclusion that “Bell-locality” is false in nature is an objective fact. It does not depend on what view you may hold on the ultimate correctness, completeness or incompleteness of QM. Bohmians, Copenhagenists, Everettians, etc. all come to the same conclusion.” Now how could Everettians come to this conclusion, since the failure of “Bell-locality” depends on the collapse of the wavefunction?
And indeed they don’t come to this conclusion, empirically speaking. If you check the literature you’ll see that Everettians insist that quantum mechanics is (Bell) local. The appearance of nonlocality is caused by the mistaken assumption that measurements have a single outcome. This is elegantly put by Deutsch and Hayden in section 7 of their 1999 paper. A similar argument, perhaps clearer, is made by Brown and Timpson in section 9 of their 2014 paper. I’ve also blogged about this point here.
Mateus,
“It is important to notice that Bell’s direct demonstration that “Bell-locality” is false in QM and QFT depends crucially on the collapse of the wavefunction. As should be obvious, since unitary QFT is Lorentz-covariant.”
You have to be careful not to mix the Lorentz-invariance of the theory with the invariance of its solutions. The preparation and measurement are technically understood as boundary conditions, which fix a particular solution of the equations of motion. There is nothing surprising in the fact that a particular solution is not Lorentz-invariant — this is common even in the classical theory, and not specific to the collapse postulate in QM. When we say that QFT is Lorentz-invariant, we mean that the *dynamics* (encoded in the action or EoMs) is invariant, and this is true regardless of the measurement. In QM, the analogy would be that the Heisenberg equations of motion are Galilei-invariant, regardless of the fact that preparation and measurement can fix a particular frame of reference.
“Everettians insist that quantum mechanics is (Bell) local. The appearance of nonlocality is caused by the mistaken assumption that measurements have a single outcome.”
Last time I checked with experimentalists, they did see a single outcome in every run of every experiment ever done. Do Everettians insist that these people are delusional? How do they explain any experimental results at all?
I understand that the measurement problem is hard, and that the collapse postulate may not be a perfect or complete solution, but ignoring that the problem exists is not a solution either, IMO.
Best, 🙂
Marko
Mateus,
Sure, QM/QFT by itself is a local theory; the problem is that QM/QFT by itself is also a theory without any predictions. As soon as you want to make any connection to experiments, you have to talk about collapse in some way – even if only by reducing your consideration to one of the Everettian branches.
This is IMHO the main point that Everettians are missing: While it is mathematically perfectly consistent as a theory – and it looks like physics because the unitary dynamics has the same description as in standard QM (with ill-defined projection postulate) – the “anything goes” approach of Many Worlds is not a scientific theory, it is just a mathematical construct.
In this regard, an Everettian standpoint on QM is in no way better than the String Theory Landscape. That is also what puzzles me a bit about Peter’s criticism of QM foundations, because sociologically I see a lot of parallels between string theorists and large parts of the quantum info community, and I also read (with great pleasure) the criticism of Many Worlds on this blog. But Many Worlds is an unavoidable consequence if you take QM/QFT by itself without any modifications that explain collapse.
[Edit: Marko was a bit faster with his reply.]
Peter, you wrote:
“If you are going to write about this sort of thing you really need to accept that words like that are useless unless their meanings are very carefully specified, because different people honestly use them to mean very different things. Otherwise, rational debate is impossible.”
Most physicists consider “locality” to be an essential feature of our theories (or of “reality”), and if you ask them why, they will probably point at Einstein (after all, Newtonian gravity was explicitly non-local). So it is relevant that what Lee Smolin calls “Bell-locality” is very clearly Einstein’s personal view, implicit in the EPR paper and stated explicitly in his “Reply to Critics” in the Schilpp volume. I find it odd that modern relativists like Hossenfelder choose to use “locality” in a much more restricted sense than the founder, just so that they can insist that QFT is “local”. One might even ask, who is doing the hijacking here, and why is it so important to keep the label, even though it no longer denotes anything more than a technical condition?