Some background on “high energy physics”

There was a workshop last week at the Harvard CMSA, focusing on new ideas about physics rooted in topology. Talks are available on the workshop webpage, and those interested in high energy physics might be most interested in the ones from the first session. There was an interesting introductory talk by Dan Harlow, in which he lays out his view (which I think is a very mainstream one) of the current situation of HEP theory.

He begins by noting the problem of building higher-energy accelerators (he claims that technological limits make the maximum collision energy go as the square root of the radius of the machine, but I think for proton-proton machines it really is linear in the radius, and for circular electron-positron machines it goes as the fourth root of the radius). Given the lack of new data, he describes one tactic for theorists as changing fields, e.g. to machine learning or biophysics.
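For readers wondering where these scalings come from, here’s the standard back-of-the-envelope version (my gloss, not anything from Harlow’s talk). For a circular proton-proton collider the beam energy is set by the bending magnets, roughly $E\,[\mathrm{GeV}] \approx 0.3\, B\,[\mathrm{T}]\, R\,[\mathrm{m}]$, so at fixed magnet technology the achievable energy grows linearly with the radius. For a circular electron-positron collider the limit is instead synchrotron radiation, with energy lost per turn scaling as $\Delta E \propto E^4/(m_e^4 R)$, so at a fixed power budget for replacing that loss the achievable energy only grows like $R^{1/4}$.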

If one does want to persist, he argues there still is a list of things incompatible with the Standard Model (gravity, dark matter, neutrino masses, baryogenesis, inflation) and these are not just “aesthetic” problems (here he refers to misunderstandings in the “popular media”, a clear reference to Sabine Hossenfelder and her book). From there he focuses on quantum gravity, essentially arguing that the other problems can be addressed by BSM models, but none of these seem particularly nice, so without new data progress is unlikely.

He describes quantum gravity as the ideal situation for theorists, since according to him there’s no self-consistent theory that fits the data we already have (I guess he’s saying string theory models are inconsistent…). He describes current work on this as based on two main strategies, with AdS/CFT providing a link between them:

  • “Study the non-realistic corners of string theory where mathematical control is possible”, i.e. pick some non-physical string theory background (e.g. AdS/CFT) where you think you can do self-consistent calculations and do those, hoping to get some more general insight.
  • “Set aside gravity for the moment, and focus on understanding the mathematical properties of QFT.” He gives a few examples of general questions being studied (which unfortunately have no obvious relevance to addressing the problem of quantum gravity, or to basic problems like that of non-perturbative QCD).

In the question section, there was an exchange between Harlow and Seiberg, prompted by Harlow’s reference to changing fields in the absence of new data, and by something he said during the talk (at 2:06):

Harlow: So then, what are we supposed to do in the meantime, right? You know we need to keep writing papers and posting them to hep-th and so on, so what do we do?

For some context to the following exchange, I suspect you should also look at the video of the panel discussion earlier this year at Strings 2019, where Harlow, sitting next to Seiberg, says (at 6:45) “We’re having fun, isn’t that the important thing?”.

Seiberg: I’d like to make one comment.

This was a beautiful summary, spectacular, except that one thing was fundamentally wrong and certainly should not be said. It’s not that we’re doing what we’re doing because we have to fill the time (audience laughter). We’re doing what we’re doing because it’s very important (audience laughter). I don’t think about “maybe we should write some books and this and that, until we have more information” I think this is wrong and this should not be [inaudible]

Harlow: I’m doing it, right, I don’t like wasting my time, so, I think it’s worth my time. I do think it’s important. We have this list of phenomena that we can see and can’t explain.

Seiberg: Comments like these have been used against us (audience laughter), in addition to the fact that they are wrong.

Harlow: OK, yeah, yeah, I’m not talking to the New York Times, right. (audience laughter).

Dam Son?: Is it recorded?

Harlow: I don’t know actually (audience laughter), I’ve said much worse things that were recorded, so.

HEP theory is at a very difficult point in its history, and it seems that the older generation struggling with this is not particularly amused to hear what sounds like flippant takes on the problem from the younger generation.

Update: I finally got around to listening to the Susskind interview mentioned in this comment. Susskind also has given up on particle physics:

I originally was officially an elementary particle physicist. Elementary particles is not going so well, there’s no new experimental input and nobody knows what to do. It’s sort of reaching a point of, should I call it diminishing returns? It could change, it could easily change. I don’t think it’s doing very well. It’s not the fault of the physicists, it’s just the fact that they’ve reached a barrier, with no possible access experimentally to things that we’re not doing very well figuring out theoretically. So that’s not doing exceptionally well. My guess is the same thing may happen to cosmology. That they will eventually run, and they’re very close to it now, running out of new data, so there may be a barrier there.

Update: This week in Chicago there’s a workshop on the CEPC (proposed large new electron-positron collider in China). The first talk Monday was from Nima Arkani-Hamed. At the end of it, the question period started, with an exchange that resonates with the Harlow-Seiberg one:

Mike Peskin: So, let me make a quick summary of this talk: “my prediction is that when we go to high precision with the Higgs we will see no deviation from the Standard Model, but that will be a good thing because theorists will be inspired to think about these fundamental questions.”

Nima Arkani-Hamed: Absolutely. I’ve said it many times. Many people don’t believe me, but I believe it 100 percent. If we see some deviation, fantastic, great, people will have a lot of fun figuring it out, if we don’t see a deviation that’s a much, much bigger gauntlet thrown down at the feet of theorists to try to figure out what is happening.

Mike Peskin: But on the other hand you’re not promising any concrete discovery, just we reconfirm the Standard Model at a much higher level of energy.

Nima Arkani-Hamed: Reconfirming the Standard Model would just crank up the screws that are put on our theoretical imaginations even more.

Mike Peskin: How many billions of dollars do you expect people to spend to reach this conclusion?

Nima Arkani-Hamed: … However many billions it takes.

At the same conference, today Matthew Reece gave a talk on The Hierarchy Problem and the Motivation for Future Colliders. He starts out with:

I’ll review some arguments that may be well-known to many of us—but which I find are not necessarily well-known to students, some of whom are being taught that there is no motivation to search for BSM physics.

and gives what I think is an accurate characterization of the problem:

The better way to frame the problem, and the role of fine-tuning, is that we are seeking a theory that explains the origin of the EW scale.

If, within that theory, the EW scale is extremely sensitive to input parameters, it’s not a very good explanation. The theory does not generically describe a universe like the one we live in.

If moving around in parameter space just produces modest changes in the low-energy physics, that’s a compelling theory that predicts a world like ours.

This characterization makes clear what the correct interpretation of the null LHC results should be: they provide significant evidence that the picture of a very high energy scale GUT/string theory with lots of parameters, generically producing the weak-scale physics that we see, is just wrong. There never has been any evidence for this anyway, so the failure of the hierarchy argument was to be expected. To the extent that you believe the hierarchy problem is the motivation for BSM physics, students who are being taught to give up on BSM physics by Harlow and others are not really being misled.
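To spell out the sensitivity Reece is referring to (this is just the textbook sketch, not something taken from his slides): in a theory with a very high fundamental scale $\Lambda$, the Higgs mass-squared parameter generically gets corrections of order that scale, schematically $m_h^2 \simeq m_{h,0}^2 + c\,\Lambda^2$ with $c$ a loop-level coefficient, so getting $m_h \approx 125$ GeV out of, say, a GUT-scale $\Lambda$ requires a cancellation between contributions many orders of magnitude larger than the answer. That is the sense in which, in such a theory, the EW scale is “extremely sensitive to input parameters”.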

My own take on all of this: what Harlow and Arkani-Hamed get wrong is their claim that thinking about fundamental issues of quantum gravity is some new, exciting question that has just come up post-LHC null results. These issues have been there for decades; they were obvious at the time I was a grad student in the early eighties. The problem is what to do facing several decades of failure by theorists, and I don’t think the answer is to make outrageous claims about how wonderful the current situation is. The motivation for a new collider is the one Reece points to, ignoring the business about the hierarchy problem: we don’t understand at all the origin of the EW scale. This is the best argument for studying the scales just above it that the LHC has started to enter. If we can get some new insight into the EW scale from a detailed study of the scales just above it, that will revolutionize physics (not just be “a lot of fun”). If we can’t, we’re facing a very, very tough time, especially if we insist on pursuing fundamental theory the way it has been pursued in the past.

Posted in Uncategorized | 14 Comments

Chasing Einstein

Last night I went to a showing of Chasing Einstein, a new documentary about the search for dark matter. It’s quite well done, and if you’re near New York, Berkeley or LA, you might want to take the opportunity to go see it in a theater.

The film starts out with a segment on LIGO, talking to Barry Barish and Rainer Weiss. Later on there are scenes from their Nobel celebration ceremony at Caltech and the award ceremony in Stockholm. There are no claims made that LIGO’s results are related to dark matter. Rather, this material functions as a counterpoint to the dark matter material, contrasting a great success story to the rather frustrating lack of success that physicists have had with dark matter.

Attention then turns to Elena Aprile and the Xenon1T experiment. Aprile is in the physics department at Columbia, and attended the screening I was at. I think she’s the great heroine of this film, although a bit of a tragic one. She and her collaborators have done a fantastic job of getting a series of highly sensitive detectors to work. If a WIMP particle responsible for dark matter had existed in the region advertised by many theories, they would have found it and followed the LIGO people to Stockholm. Instead they put a strong limit on the possible properties of such a conjectured particle. The film includes a heart-breaking scene when they unblind their data, quickly realizing that their years of effort haven’t been rewarded with the discovery that they had been hoping for. Aprile has a realistic take on the prospects for future experiments of this kind: they can be made somewhat more sensitive, but it’s hard to be optimistic that the remaining accessible parameter space contains a new particle.

Attention then turns to Erik Verlinde and his “Emergent Gravity” explanation for the dark matter phenomenon. I’ve never found the motivation for this compelling, so haven’t followed his work carefully. For someone who has, see Sabine Hossenfelder’s blog where she has written on the topic quite a few times (and has her own version of a model here). Grad student Margot Brouwer worked on this attempt to experimentally test Verlinde’s ideas, and she is also featured in the film. My understanding is that the positive results her group found are matched by other more negative results, see here.

Tech entrepreneur Cree Edwards appears at various points in the film, and I’m guessing that he’s the one who brought together the physicists and filmmakers to make the film (and probably financed it). He has an amateur’s interest in fundamental physics, and his questioning of the physicists reminds one of how people’s fascination with the subject is often deeply connected to their desire to make sense of the world, hoping to find explanations of the great puzzles of human existence. I fear he’s not likely to find much of what he’s looking for in physics, but I’m glad to see that his questioning led to an excellent film.

Finally, the film contains scenes of observing a solar eclipse, an added attraction.

Posted in Film Reviews | 15 Comments

An Apology

I’m afraid I made a serious mistake in this previous posting discussing Sean Carroll’s new book. Since the book was relatively reasonable, while the jacket and promotional material that came with it were nonsense, I assumed that Carroll was just being ill-served by his publisher. It’s now clear I was very wrong. He’s on a book tour, and the nonsense is exactly what he is putting front and center as a revelation to the public about how to understand quantum mechanics. For a couple of examples, here’s what was on the PBS News Hour:

The “many worlds” theory in quantum mechanics suggests that with every decision you make, a new universe springs into existence containing what amounts to a new version of you. Bestselling author and theoretical physicist Sean Carroll discusses the concept and his new book, “Something Deeply Hidden,” with NewsHour Weekend’s Tom Casciato.

and here’s something from his talk down the street from me.

Using your public platform to tell people that the way to understand quantum mechanics is that the world splits depending on what you decide to do is simply What the Bleep? level stupidity. Those in the physics and science communication communities who care about the public understanding of quantum mechanics should think hard about what they can do to deal with this situation. They may however come to the same conclusion I’ve just reached: best to ignore him, which I’ll try to do from now on.

Posted in Quantum Mechanics | Comments Off on An Apology

Regarding Papers about Fundamental Theories

Discussion in the comment section of the previous blog entry led me to do a little bit of historical research this morning, and I thought I’d write up the results here. First of all, for some interesting comments from people around back then about how attitudes in the physics community changed during the 1970s, see here, here and here.

What I looked into is one specific story, trying to figure out what was behind Sean Carroll’s claim in the New York Times that

For years, the leading journal in physics had an explicit policy that papers on the foundations of quantum mechanics were to be rejected out of hand.

Mark Hillery here notes that this is likely a reference to the Physical Review, and that it very much has not been true for the past 15 years, during which he has been an editor there.

Tracing back where Carroll got this from, I guessed that (since it’s the historical source he recommends in his book) it came from Adam Becker’s book, What is Real?. Looking at that book one finds on page 214:

Physical Review actually had an explicit editorial policy barring papers on quantum foundations unless they could be related to existing experimental data or made new predictions that could be tested in the laboratory.

This matches Carroll’s claim (with the part inconvenient for his case deleted…). Becker’s source notes for this text refer to an editorial in the July 15, 1973 issue of Physical Review D (Particles and Fields) written by Samuel Goudsmit, the editor-in-chief. The editorial is entitled “IMPORTANT ANNOUNCEMENT: Regarding Papers about Fundamental Theories”. Goudsmit does not specifically refer to quantum foundations papers, but writes:

The subject matter of these papers usually concerns a fundamental aspect of theoretical physics. Extreme verbosity and vagueness of expression makes these papers hard to read and understand. A paucity of mathematics as compared to wordage distinguishes them from the more conventional theoretical papers. The author proposes new theories, but their specific assumptions are usually hidden behind very lengthy arguments. Sometimes the paper contains a reinterpretation of existing theories which the author considers more satisfactory than the prevailing views, though no new experimental consequences are expected.

He sets forth the following as features expected of articles publishable by the Physical Review:

All implied assumptions must be stated clearly and concisely and as much as possible expressed in mathematical form.

The author must convincingly show

  • that these assumptions lead to the explanation of hitherto unexplained observations, or
  • that these assumptions expose new relations between known data or theories, or
  • that these assumptions are simpler and fewer than in existing theories.

Moreover, the author must show that the new assumptions do not contradict existing experimental facts.

He must also investigate possible new consequences of his assumptions and whether these could be tested by new experiments.

Looking some more into this, I realized that I had first seen this story in David Kaiser’s book How the Hippies Saved Physics (see review here), which clearly is Becker’s source (Becker’s next note refers to this). On page 121 Kaiser has:

The longtime editor of the Physical Review… actually banned articles on the interpretation of quantum mechanics. He went so far as to draw up a special instruction sheet to be mailed to referees of potentially offending submissions: referees were to reject all submissions on interpretive matters out of hand, unless the papers derived quantitative predictions for new experiments.

Kaiser goes on to quote John Clauser as pointing out that according to this policy, Bohr’s response to the 1935 EPR paper would not have been publishable. His source notes refer to the Goudsmit editorial and private emails from Clauser on July 8, 2009. The same note also refers to an article by Clauser, Early History of Bell’s Theorem, which has a lot of detailed information about the story of the reception of Bell’s theorem and early efforts to do experimental tests (but nothing about the Physical Review policy). By the way, back in 1964, Bell decided not to submit his important paper to Physical Review, not because of any policy they might have had, but because they had page charges.

So, as far as I can tell, the historical record shows that the documented Physical Review policy didn’t, as the descriptions by Kaiser, Becker and Carroll suggest, explicitly refer to papers on the interpretation of quantum mechanics or quantum foundations. Possibly it was such papers that were annoying Goudsmit and led to his editorial, but I’d be curious to know if anyone knows more about what was specifically bothering Goudsmit. What sort of papers were being submitted to Physical Review D around 1972-73 that would fit the rather uncharitable description quoted above?

Update: In the comments Blake Stacey suggests that the 1972 experiment of Freedman-Clauser may have been what led to papers being submitted to Physical Review that inspired Goudsmit’s July 1973 editorial. Looking more into this, it’s quite possible that the kind of thing Goudsmit was concerned about was the sort of paper Jack Sarfatti was writing around this time. According to Kaiser’s book (pages 62-63), it was just a few months before this that Sarfatti decided to change the sorts of papers he was writing:

By the early 1970s, having published a few articles in prestigious journals on quantum theory, elementary particles, and even some idiosyncratic ideas about miniature black holes, Sarfatti could list half a dozen distinguished physicists scattered across the United States, Britain and France as references to vouch for the quality of his work…
… Sarfatti began to lose enthusiasm for his position at San Diego State during the early 1970s, and indeed for the sterile direction in which he saw theoretical physics heading. He announced his new plans in a letter to renowned Princeton physicist John Wheeler in the spring of 1973… Sarfatti declared that he would leave his “uninspiring institution” and seek out “the best possible environment to create a great and historic piece of physics. I feel impelled by history – a certain sense of destiny,” he explained. (“I recognize that I may be suffering under some sort of ‘crackpot’ delusion, but I cannot accept that as likely. In any case, I must try,” he averred).

See the comment here for some of the sort of papers Sarfatti was writing at the time, quite possibly submitting them to Physical Review. The opening sentence of Goudsmit’s description of the problem (and the fact that he was publishing it in Phys. Rev. D, Particles and Fields)

The subject matter of these papers usually concerns a fundamental aspect of theoretical physics.

seems to me more likely to be referring to the sort of thing Sarfatti was writing than to papers on the interpretation of quantum mechanics.

Update: It turns out that Goudsmit’s papers are available online, here. A non-exhaustive search turned up no evidence pro or con for my conjecture about Sarfatti or similar papers. I only found one set of files (Box 50, Folder 45, “Leibowitz refereeing, 1973”) referring to his 1973 editorial. These have to do with this paper, which was published September 1973, after two years of refereeing. This publication led to another author writing a paper criticizing the first, leading to another refereeing problem. Goudsmit weighed in (January 5, 1976) by noting that it was exactly this kind of paper and the problems with refereeing such things that his editorial had been concerned with. Note that the paper in question is NOT an interpretation/foundations paper. Goudsmit writes:

The event shows again clearly the necessity of rapid rejections of questionable papers in vague borderline areas. There is a class of long theoretical papers which deal with problems of interpretation of quantum and relativistic phenomena. Most of them are terribly boring and belong to the category of which Pauli said, “It is not even wrong”. Many of them are wrong. A few of the wrong ones turn out to be valuable and interesting because they throw a brighter light on the correct understanding of the problem. I have earlier expressed my strong opinion that most of these papers don’t belong in the Physical Review but in journals specializing in the philosophy and fundamental concepts of physics.

He then refers to his earlier editorial.

Looking at the exchange in these letters, the referee of the second paper writes “I would suggest that [the author] understands neither relativity nor quantum theory.” This example suggests that the problem Goudsmit had in mind when writing his editorial was not a need for “an explicit policy that papers on the foundations of quantum mechanics were to be rejected out of hand”, but just a need to deal with the common problem that is still with us, for both journals and the arXiv. There are lots and lots of people writing low quality papers claiming to say something new about the foundations of physics, on a continuum from the crank to the not so bad. Refereeing such things is difficult and time-consuming, so a journal needs a policy to deal with them quickly and efficiently, otherwise they end up with the mess described in these letters. Goudsmit’s editorial I think was an attempt to come up with such a policy.

Update: Jorge Pullin wrote to remind me of an earlier Goudsmit story, which I described on the blog here, but had completely forgotten about. In 2008 an undated paper from Bryce DeWitt’s files (he passed away in 2004) was posted on the arXiv. It included a claim much like the ones discussed here that refer to the 1973 editorial, but about a much earlier incident:

Most of you can have no idea how hostile the physics community was, in those days, to persons who studied general relativity. It was worse than the hostility emanating from some quarters today toward the string-theory community. In the mid fifties Sam Goudsmidt, then Editor-in-Chief of the Physical Review, let it be known that an editorial would soon appear saying that the Physical Review and Physical Review Letters would no longer accept “papers on gravitation or other fundamental theory.” That this editorial did not appear was due to the behind-the-scenes efforts of John Wheeler.

I don’t know of any other evidence for this (took a quick look in the Goudsmit online archive, didn’t see anything). It seems highly likely that this claim about Goudsmit and the Physical Review is not accurate. One minor problem with a claimed “mid-fifties” planned editorial for Physical Review Letters is that PRL wasn’t even started until mid-1958. More seriously, the idea that the Physical Review in the mid-fifties would consider banning “papers on gravitation or other fundamental theory” is just completely implausible, and if that phrase is accurate, it surely is very much taken out of context. This story is very similar to the Carroll one about the 1973 editorial, and I’m guessing the true story about the mid-fifties incident is again just that Goudsmit was even then struggling with how to deal with bad “not even wrong” theory papers about fundamental physics.

Update: Steven Weinberg has a version of the “mid-fifties” Goudsmit story, in his biographical notice for DeWitt, in the context of a discussion of the January 1957 Chapel Hill conference on gravity:

Samuel Goudsmit had recently threatened to ban all papers on gravitation from Physical Review and Physical Review Letters because he and most American physicists felt that gravity research was a waste of time.

Again, there’s the problem that PRL wasn’t even started until a year and a half later, and he has Goudsmit specifying just GR research, not the wider “gravitation or other fundamental theory” which DeWitt gave in quotes.

Posted in Favorite Old Posts, Quantum Mechanics | 26 Comments

Quantum Supremacy II

About the only thing that has transcended the bitter partisan divisions between Democrats and Republicans in the US during recent years has been quantum mechanics, with the enactment late last year of the National Quantum Initiative Act (the NQI was first mentioned on the blog here). In March there was a National Quantum Coordination Office established at the White House, and last week there was an executive order establishing a National Quantum Initiative Advisory Committee.

The NQI directs the federal government to spend $1.2 billion over the next five years, with the NSF told to create two to five “Multidisciplinary Centers for Quantum Research and Education” and the DOE two to five “National Quantum Information Science Research Centers”. Besides the NQI, pretty much everywhere you look the past few years you see new well-funded “quantum” centers popping up; two randomly chosen examples would be the Chicago Quantum Exchange and the Yale Quantum Institute. In the private sector, a huge investment in quantum science is taking place, driven by hopes that quantum computing and other applications will lead to a technological revolution and associated vast riches.

Looking at new books on fundamental physics that I’ve seen over the past year and a half, the conventional enthusiastic treatment of string theory/SUSY/extra dimensions is now dead, with Sabine Hossenfelder’s Lost in Math the only popular book addressing these topics, and doing so in a quite negative way. The new trendy topic is the foundations of quantum mechanics, with the recent publication of Adam Becker’s What is Real?, Philip Ball’s Beyond Weird, Anil Ananthaswamy’s Through Two Doors at Once, Lee Smolin’s Einstein’s Unfinished Revolution, George Greenstein’s Quantum Strangeness, and Sean Carroll’s Something Deeply Hidden. Forthcoming from Oxford University Press are two quantum books by Jim Baggott, Quantum Reality and The Quantum Cookbook.

On the whole this change in hot topic is a positive development, although the fact that it’s driven by a lack of anything new to say about particle physics and unification is rather depressing. On the quantum front, while I think it’s great that public attention is being drawn to quantum mechanics, if you look at my reviews you’ll see that I have mixed feelings about the point of view taken by some of the recent books (the best of the lot I think is Philip Ball’s).

The latest example of the high public profile of quantum mechanics is the publication today in the New York Times of a piece by Sean Carroll arguing that Even Physicists Don’t Understand Quantum Mechanics: worse, they don’t seem to want to understand it. Unfortunately I don’t think that this article accurately describes the issues surrounding what we do and don’t understand about “quantum foundations”, nor the dramatically improving funding prospects for research in this area. In addition I don’t think that it’s accurate, fair (or good for public relations) to portray your colleagues as “not really interested in how nature really works”, somehow not curious or bright enough to realize (see here) that there is a crisis at the heart of their subject and that, thanks to Sean Carroll:

the crisis can now come to an end. We just have to accept that there is more than one of us in the universe. There are many, many Sean Carrolls. Many of every one of us.

Posted in Quantum Mechanics | 26 Comments

Some Math News

My Columbia colleague Patrick Gallagher passed away a few months ago at the age of 84. He had only recently retired, and for many years was the longest serving member of the department and an important part of its institutional memory. On October 10 there will be a memorial conference here at Columbia.

It’s too bad Pat didn’t live to see the latest from Terry Tao, who describes recent results related to old work of Gallagher’s, writing: “Our proof of this theorem proceeds more or less along the same lines as Gallagher’s calculation, but now with k allowed to grow slowly with x.”

Turning to other topics, Peter Scholze continues to come up with new ideas about the foundations of mathematics at a pace far too fast for me to fool myself into thinking I might be able to follow what he’s doing. In recent months he has run a course on “Condensed Mathematics”, which involves new ideas about topology developed with Dustin Clausen. For a video of a recent talk where he explains this, see here.

On the Langlands/representation theory front, some interesting things are:

Finally, the 2019 PCMI was devoted to the topic of Quantum Field Theory and Manifold Invariants, videos are here.

Update: Also on the Langlands/representation theory/quantum front, this week at Northeastern there’s a conference going on (rumored to be celebrating the fact that Etingof and Okounkov share the same birthday, and are now 50). Videos are starting to appear here, including one of Edward Frenkel discussing the paper with Etingof and Kazhdan mentioned above.

Posted in Langlands, Obituaries | 7 Comments

Something Deeply Hidden

Sean Carroll’s new (available in stores early September) book, Something Deeply Hidden, is a quite good introduction to issues in the understanding of quantum mechanics, unfortunately wrapped in a book cover and promotional campaign of utter nonsense. Most people won’t read much beyond the front flap, where they’ll be told:

Most physicists haven’t even recognized the uncomfortable truth: physics has been in crisis since 1927. Quantum mechanics has always had obvious gaps—which have come to be simply ignored. Science popularizers keep telling us how weird it is, how impossible it is to understand. Academics discourage students from working on the “dead end” of quantum foundations. Putting his professional reputation on the line with this audacious yet entirely reasonable book, Carroll says that the crisis can now come to an end. We just have to accept that there is more than one of us in the universe. There are many, many Sean Carrolls. Many of every one of us.

This kind of ridiculous multi-worlds woo is by now rather tired; you can find variants of it in a host of other popular books written over the past 25 years. The great thing about Carroll’s book though is that (at least if you buy the hardback) you can tear off the dust jacket, throw it away, and, unlike earlier such books, you’ll be left with something well-written, and if not “entirely reasonable”, at least mostly reasonable.

Carroll gives an unusually lucid explanation of what the standard quantum formalism says, making clear the ways in which it gives a coherent picture of the world, but one quite a bit different than that of classical mechanics. Instead of the usual long discussions of alternatives to QM such as Bohmian mechanics or dynamical collapse, he deals with these expeditiously in a short chapter that appropriately explains the problems with such alternatives. The usual multiverse mania that has overrun particle theory (the cosmological multiverse) is relegated to a short footnote (page 122) which just explains that that is a different topic. String theory gets about half a page (discussed with loop quantum gravity on pages 274-5). While the outrageously untrue statement is made that string theory “makes finite predictions for all physical quantities”, there’s also the unusually reasonable “While string theory has been somewhat successful in dealing with the technical problems of quantum gravity, it hasn’t shed much light on the conceptual problems.” AdS/CFT gets a page or so (pages 303-4), with half of it devoted to explaining that its features are specific to AdS space, about which “Alas, it’s not the real world.” He has this characterization of the situation:

There’s an old joke about the drunk who is looking under a lamppost for his lost keys. When someone asks if he’s sure he lost them there, he replies, “Oh no, I lost them somewhere else, but the light is much better over here.” In the quantum-gravity game, AdS/CFT is the world’s brightest lamppost.

I found Carroll’s clear explanations especially useful on topics where I disagree with him, since reading him clarified for me several different issues. I wrote recently here about one of them. I’ve always been confused about whether I fall in the “Copenhagen/standard textbook interpretation” camp or the “Everett” camp, and reading this book got me to a better understanding of the difference between the two, which I now think to a large degree comes down to what one thinks about the problem of emergence of classical from quantum. Is this a problem that is hopelessly hard or not? Since it seems very hard to me, but I do see that limited progress has been made, I’m sympathetic to both sides of that question. Carroll does at times stray too far into the unfortunate territory of, for instance, Adam Becker’s recent book, which tried to make a morality play out of this difference, with Everett and his followers fighting a revolutionary battle against the anti-progress conservatives Bohr and Heisenberg. But in general he’s much less tendentious than Becker, making his discussion much more useful.

The biggest problem I have with the book is the part referenced by the unfortunate material on the front flap. I’ve never understood why those favoring so-called “Multiple Worlds” start with what seems to me like a perfectly reasonable project, saying they’re trying to describe measurement and classical emergence from quantum purely using the bare quantum formalism (states + equation of motion), but then usually start talking about splitting of universes. Deciding that multiple worlds are “real” never seemed to me to be necessary (and I think I’m not the only one who feels this way, evidently Zurek also objects to this). Carroll in various places argues for a multiple world ontology, but never gives a convincing argument. He finally ends up with this explanation (page 234-5):

The truth is, nothing forces us to think of the wave function as describing multiple worlds, even after decoherence has occurred. We could just talk about the entire wave function as a whole. It’s just really helpful to split it up into worlds… characterizing the quantum state in terms of multiple worlds isn’t necessary – it just gives us an enormously useful handle on an incredibly complex situation… it is enormously convenient and helpful to do so, and we’re allowed to take advantage of this convenience because the individual worlds don’t interact with one another.

My problem here is that the whole splitting thing seems to me to lead to all sorts of trouble (how does the splitting occur? what counts as a separate world? what characterizes separate worlds?), so if I’m told I don’t need to invoke multiple worlds, why do so? According to Carroll, they’re “enormously convenient”, but for what (other than for papering over rather than solving a hard problem)?

In general I’d rather avoid discussions of what’s “real” and what isn’t (e.g. see here) but, if one is going to use the term, I am happy to agree with Carroll’s “physicalist” argument that our best description of physical reality is as “real” as it gets, so the quantum state is preeminently “real”. The problem with declaring “multiple worlds” to be “real” is that you’re now using the word to mean something completely different (one of these worlds is the emergent classical “reality” our brains are creating out of our sense experience). And since the problem here (classical emergence being just part of it) is that you don’t understand the relation of these two very different things, any argument about whether another “world” besides ours is “real” or not seems to me hopelessly muddled.

Finally, the last section of the book deals with attempts by Carroll to get “space from Hilbert space”, see here, which the cover flap refers to as “His [Carroll’s] reconciling of quantum mechanics with Einstein’s theory of relativity changes, well, everything.” The material in the book itself is much more reasonable, with the highly speculative nature of such ideas emphasized. Since Carroll is such a clear writer, reading these chapters helped me understand what he’s trying to do and what tools he is using. From everything I know about the deep structure of geometry and quantum theory, his project seems to me highly unlikely to give us the needed insight into the relation of these two subjects, but no reason he shouldn’t try. On the other hand, he should ask his publisher to pulp the dust jackets…

Update: Carroll today on Twitter has the following argument from his book for “Many Worlds”:

Once you admit that an electron can be in a superposition of different locations, it follows that a person can be in a superposition of having seen the electron in different locations, and indeed that reality as a whole can be in a superposition, and it becomes natural to treat every term in that superposition as a separate “world”.

“Becomes natural” isn’t much of an argument (faced with a problem, there are “natural” things to do which are just wrong and don’t solve the problem). Saying one is going to “treat every term in that superposition as a separate “world”” may be natural, but it doesn’t actually solve any problem; instead it creates a host of new ones.
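To make concrete what is being argued about, here’s the standard schematic of a measurement (nothing specific to Carroll’s book): unitary Schrödinger evolution of the electron-plus-observer system takes $(\alpha|x_1\rangle + \beta|x_2\rangle)\otimes|\mathrm{ready}\rangle$ to $\alpha|x_1\rangle\otimes|\mathrm{sees}\,x_1\rangle + \beta|x_2\rangle\otimes|\mathrm{sees}\,x_2\rangle$. The formalism by itself just hands you this entangled state; the Everettian move is to call the two terms on the right separate “worlds”, and whether that labeling explains anything is exactly what’s in dispute.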

Update: Some places to read more about these issues.

  • The book Many Worlds?: Everett, Quantum Theory and Reality gathers various essays, including
    Simon Saunders, Introduction
    David Wallace, Decoherence and Ontology
    Adrian Kent, One World Versus Many
  • David Wallace’s book, The Emergent Multiverse.
  • Blog postings from Jess Riedel here and here.
  • This from Wojciech Zurek, especially the last section, including parts quoted here.

Posted in Book Reviews, Multiverse Mania, Quantum Mechanics | 76 Comments

What’s the difference between Copenhagen and Everett?

I’ve just finished reading Sean Carroll’s forthcoming new book, and will write something about it in the next few weeks. Reading the book and thinking about it did clarify various issues for me, and I thought it might be a good idea to write about one of them here. Perhaps readers more versed in the controversy and literature surrounding this issue can point me to places where it is cogently discussed.

Carroll (like many others before him; for a recent example see here) sets up two sides of a controversy:

  • The traditional “Copenhagen” or “textbook” point of view on quantum mechanics: quantum systems are determined by a vector in the quantum state space, evolving unitarily according to the Schrödinger equation, until such time as we choose to do a measurement or observation. Measuring a classical observable of this physical system is a physical process which gives results that are eigenvalues of the quantum operator corresponding to the observable, with the probability of occurrence of an eigenvalue given in terms of the state vector by the Born rule.
  • The “Everettian” point of view on quantum mechanics: the description given here is “The formalism of quantum mechanics, in this view, consists of quantum states as described above and nothing more, which evolve according to the usual Schrödinger equation and nothing more.” In other words, the physical process of making a measurement is just a specific example of the usual unitary evolution of the state vector, there is no need for a separate fundamental physical rule for measurements.

I don’t want to discuss here the question of whether the Everettian point of view implies a “Many Worlds” ontology, that’s something separate which I’ll write about when I get around to writing about the new book.

What strikes me when thinking about these two supposedly very different points of view on quantum mechanics is that I’m having trouble seeing why they are actually any different at all. If you ask a follower of Copenhagen (let’s call her “Alice”) “is the behavior of that spectrometer in your lab governed in principle by the laws of quantum mechanics” I assume that she would say “yes”. She might though go on to point out that this is practically irrelevant to its use in measuring a spectrum, where the results it produces are probability distributions in energy, which can be matched to theory using Born’s rule.

The Everettian (let’s call him “Bob”) will insist on the point that the behavior of the spectrometer, coupled to the environment and system it is measuring, is described in principle by a quantum state and evolves according to the Schrödinger equation. Bob will acknowledge though that this point of principle is useless in practice, since we don’t know what the initial state is, couldn’t write it down if we did, and couldn’t solve the relevant Schrödinger equation even if we could write down the initial state. Bob will explain that for this system, he expects “emergent” classical behavior, producing probability distributions in energy, which can be matched to theory using Born’s rule.
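For what it’s worth, here’s a schematic version of the “emergent classical behavior” Bob is appealing to (the standard decoherence story, not anything specific to Carroll’s book). If the apparatus-plus-environment starts in a state $|E_0\rangle$, unitary evolution takes $\sum_i c_i |s_i\rangle \otimes |E_0\rangle$ to $\sum_i c_i |s_i\rangle \otimes |E_i\rangle$, and since the environment states $|E_i\rangle$ corresponding to macroscopically distinct outcomes are essentially orthogonal, the reduced density matrix $\rho = \mathrm{Tr}_E\, |\Psi\rangle\langle\Psi| \approx \sum_i |c_i|^2\, |s_i\rangle\langle s_i|$ is effectively diagonal, with weights $|c_i|^2$ that are the same numbers the Born rule assigns (whether this counts as a derivation of the Born rule is of course itself contested).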

So, what’s the difference between the points of view of Alice and Bob here? It only seems to involve the question of how classical behavior emerges from quantum, with Alice saying she doesn’t know how this works, and Bob saying he doesn’t know either but conjectures that it can be done in principle without introducing new physics beyond the usual quantum state/Schrödinger equation story. Alice likely will acknowledge that she has never seen or heard of any evidence of such new physics, so has no reason to believe it is there. They both can agree that understanding how classical emerges from quantum is a difficult problem, well worth studying, one that we are in a much better position now to work on than we were way back when Bohr, Everett and others were struggling with this.

Posted in Quantum Mechanics | 27 Comments

Where We Are Now

For much of the last 25 years, a huge question hanging over the field of fundamental physics has been what judgement the results from the LHC would provide about supersymmetry, which underpins the most popular speculative ideas in the subject. These results are now in, and they are conclusively negative. In principle one could still hope for the HL-LHC (operating in 2026-35) to find superpartners, but there is no serious reason to expect this. Going farther out in the future, there are proposals for an extremely expensive 100 km larger version of the LHC, but this is at best decades away, and there again is no serious reason to believe that superpartners exist at the masses such a machine could probe.

The reaction of some parts of the field to this falsification of hopes for supersymmetry has not been the abandonment of the idea that one would expect. For example, today brings the bizarre news that failure has been rewarded with a $3 million Special Breakthrough Prize in Fundamental Physics for supergravity. For uncritical media coverage, see for instance here, here, and here.

Some media outlets do better. I first heard about this from Ryan Mandelbaum, who writes here. Ian Sample at the Guardian does note that negative LHC results are “leading many physicists to go off the theory” and quotes one of the awardees as saying:

We’re going through a very tough time… I’m not optimistic. I no longer encourage students to go into theoretical particle physics.

At Nature, the sub-headline is “Three physicists honoured for theory that has been hugely influential — but might not be a good description of reality” and Sabine Hossenfelder is quoted. At her blog, she ends with the following excellent commentary:

Awarding a scientific prize, especially one accompanied by so much publicity, for an idea that has no evidence speaking for it, sends the message that in the foundations of physics contact to observation is no longer relevant. If you want to be successful in my research area, it seems, what matters is that a large number of people follow your footsteps, not that your work is useful to explain natural phenomena. This Special Prize doesn’t only signal to the public that the foundations of physics are no longer part of science, it also discourages people in the field from taking on the hard questions. Congratulations.

In related news, yesterday I watched this video of a recent discussion between Brian Greene and others which, together with a lot of promotional material about string theory, included significant discussion of the implications of the negative LHC results. A summary of what they had to say would be:

  • Marcelo Gleiser has for many years been writing about the limits of scientific knowledge, and sees this as one more example.
  • Michael Dine has since 2003 been promoting the string theory landscape/multiverse, with the idea that one could make statistical predictions using it. Back then we were told that “it is likely that this leads to a prediction of low energy supersymmetry breaking” (although Dine soon realized this wasn’t working out, see here). In 2007 Physics Today published his String theory in the era of the Large Hadron Collider (discussed here), which complained that “weblogs” had it wrong in claiming that string theory had no relation to experiment. That piece claimed that

    A few years ago, there seemed little hope that string theory could make definitive statements about the physics of the LHC. The development of the landscape has radically altered that situation.

    and that

    The Large Hadron Collider will either make a spectacular discovery or rule out supersymmetry entirely.

    Confronted by Brian with the issue of LHC results, Dine looks rather uncomfortable, but claims that there still is hope for string theory and the landscape, that now big data and machine learning can be applied to the problem (for commentary on this, see here). He doesn’t though expect to see success in his lifetime.

  • Andy Strominger doesn’t discuss supersymmetry in particular, but, regarding the larger superstring theory unification idea, he tries to make the case that it hasn’t been a failure at all, but rather a success way beyond what was expected. The argument is basically that the search for a unified string theory was like Columbus’s search for a new sea route to China. He didn’t find it, but found something much more exciting, the New World. In this analogy, instead of finding some tedious reductionist new layer of reality as hoped, string theorists have found some revolutionary new insight about the emergent nature of gravity:

    I think that the idea that people were excited about back in 1985 was really a small thing, you know, to kind of complete that table that you put down in the beginning of the spectrum of particles…

    We didn’t do that, we didn’t predict new things that were going to be measured at the Large Hadron Collider, but what has happened is so much more exciting than our original vision… we’re getting little hints of a radical new view of the nature of space and time, in which it really just is an approximate concept, emergent from something deeper. That is really, really more exciting, I mean it’s as exciting as quantum mechanics or general relativity, probably even more so.

    The lesson Strominger seems to have learned from the failure of the 1985 hopes is that when you’ve lost your bet on one piece of hype, the thing to do is double down, go for twice the hype…

Update: The Breakthrough Prize campaign to explain why supergravity is important despite having no known relation to reality has led to various nonsense making its way to the public, as reporters desperately try to make sense of the misleading information they have been fed. For instance, you can read (maybe after first reading this comment) here that

Witten showed in 1981 that the theory could be used to simplify the proof for general relativity, initiating the integration of the theory into string theory.

You could learn here that

When the theory of supersymmetry was developed in 1973, it solved some key problems in particle physics, such as unifying three forces of nature (electromagnetism, the weak nuclear force, and the strong nuclear force)

Update: On the idea that machine learning will solve the problems of string theory, see this yesterday from the Northeastern press office, which explains that the goal is to “unify string theory with experimental findings”:

Using data science to learn more about the large set of possibilities in string theory could ultimately help scientists better understand how theoretical physics fits into findings from experimental physics. Halverson says one of the ongoing questions in the field is how to unify string theory with experimental findings from particle physics and cosmology…

Update: Physics World has a story about this that emphasizes the sort of criticism I’ve been making here.

As mentioned in the comments, I took a closer look at the citation for the prize. The section on supersymmetry is really outrageous, using “supersymmetry stabilizes the weak scale” as an argument for SUSY, despite the fact that this has been falsified by LHC results.

Update: Jim Baggott writes about this story and post-empirical science here.

Noah Smith here gets the most remarkable aspect of this right. String theory has always had the feature that the strings were not supposed to be visible at accessible energies, so not directly testable. Supersymmetry is quite different: it has always been advertised as a directly testable idea, with superpartners supposed to appear at the electroweak scale and be seen at the latest at the LHC. Giving a huge prize to a theoretical idea that has just been conclusively shown to not work is something both new and outrageous.

Update: Tommaso Dorigo’s take is here, which I’d characterize as basically “any publicity is good publicity, but it’s pretty annoying the cash is going to theorists for failed theories instead of experimentalists” (he does say he wanted to entitle the piece “Billionaire Awards Prizes To Failed Theories”):

[Rant mode on] An exception to the above is, of course, the effect that this not insignificant influx of cash and 23rd-hour recognition has on theoretical physicists. For they seem to be the preferred recipients of the breakthrough prize as of late, not unsurprisingly. Apparently, building detectors and developing new methods to study subnuclear reactions, which are our only way to directly fathom the unknown properties of elementary particles, is not considered enough of a breakthrough by Milner’s jury as it is to concoct elegant, albeit wrong, theories of nature. [Rant mode off]

Going back to the effect on laypersons: this is of course positive. Already the sheer idea that you may earn enough cash to buy a Ferrari and a villa in Malibu beach in one shot by writing smart formulas on a sheet of paper is suggestive, in a world dominated by the equation “is paid very well, so it is important”. But even more important is the echo that the prize – somewhere by now dubbed “the Oscar of Physics” – is having on the media. Whatever works to bring science to the fore is welcome in my book.

Posted in Uncategorized | 48 Comments

Quick Links

A few quick links:

  • Philip Ball at Quanta has a nice article on “Quantum Darwinism” and experiments designed to exhibit actual toy examples of the idea in action (I don’t think “testing” the idea is quite the right language in this context). What’s at issue is the difficult problem of how to understand the way in which classical behavior emerges from an underlying quantum system. For a recent survey article discussing the ideas surrounding Quantum Darwinism, see this from Wojciech Zurek.

    Jess Riedel at his blog has a new FAQ About Experimental Quantum Darwinism which gives more detail about what is actually going on here.

  • This year’s TASI summer school made the excellent choice of concentrating on issues in quantum field theory. Videos, mostly well worth watching, are available here.
  • This month’s Notices of the AMS has a fascinating article about Grothendieck, by Paulo Ribenboim. It comes with a mysterious “Excerpt from” title and editor’s note:

    Ribenboim’s original piece contains some additional facts that are not included in this excerpt. Readers interested in the full text should contact the author.

  • I’ve finally located a valuable Twitter account, this one.
Posted in Uncategorized | 13 Comments