Rochester Colloquium Talk

I’m heading up to Rochester this evening and will give a colloquium talk in the physics department there on Wednesday at 3:45. I’ll put up a link to the slides after the talk; for now, here’s the abstract:

Particle theory: a view from a neighboring field

High energy particle physics faces a challenging future, largely because of the overwhelming success of the Standard Model. The LHC discovery of a Higgs particle with exactly the predicted properties, coupled with the lack of evidence there for “Beyond the Standard Model” physics, has led some to characterize this as a “crisis”.

In this talk I’ll consider the current situation from a somewhat unusual point of view, that of someone who began his career in physics departments doing particle theory, but then moved to mathematics departments. The field of mathematics has complex and close ties to fundamental physical theory, and the cross-cultural perspective it provides may be of interest.


Update: Apologies for the earlier mistake (I had “Thursday” instead of “Wednesday” above). The slides for the talk are available here.

Posted in Uncategorized | 22 Comments

Applied Group Theory

I just noticed that Greg Moore has been teaching a wonderful course in recent years with the misleadingly bland title of Applied Group Theory. His list of the topics he wants to cover, given here, is an excellent one and a good outline for anyone trying to get a serious education in the modern overlap of math and physics.

The problem with this outline is that it’s far too ambitious to cover in a one-semester course, starting just from basics. Moore notes that in the 2008 and 2009 versions of the course he only got through roughly half the topics, with students still complaining about the fast pace. In 2013 he only made it through two out of 21 topics, but in doing so generated two book-length documents of notes.

These each contain a wealth of valuable material. I do hope he someday writes up the other 19 topics, but if he does it the way he has been going, the length might turn out to be around 4000 pages, so that might take a while. In the meantime, an account of some of them is available here.

In addition, there’s also a list of suggested topics for term papers, nearly a hundred of them, each with a description of an interesting issue that has been a topic of significant research, with references for where to start learning about the topic.

Posted in Uncategorized | 6 Comments

Various and Sundry

  • I recently spent some time looking at old postings on this blog, partly because of writing this blog entry, partly because Gil Kalai got me a copy of his book Gina Says. For a moment I thought this would be a good time to write something about the “String Wars”, but then decided that project should wait for another time. I did go quickly through old postings (there are 1660 of them…) and pick out a small subset that might be more worth reading for anyone with time on their hands. A list is available by selecting the category Favorite Old Posts.
  • Another category of blog posts, one that includes many I spent more time than usual writing, is Book Reviews, of which there are 93 here (about ten of these were written for publication elsewhere). Among the forthcoming books I’m hoping to write about are Sabine Hossenfelder’s Lost in Math, and the fifth volume of Raoul Bott’s Collected Works (listed at Target under “test prep and study guides”). Some other forthcoming books are Sean Carroll’s Something Deeply Hidden, and a new book by Brian Greene that I know nothing about other than this.
  • A debate in various places on Twitter about science journalism and accuracy included this from neuroscientist Chris Chambers, who explains that when he looked into this he discovered what I’ve often seen in physics reporting: the source of hype is more often scientists and their press releases than journalists.
    https://twitter.com/chrisdc77/status/960304692449435648
  • The nLab project has been joined by the even more exciting mLab project (some discussion here).
  • This semester MSRI is running a program on enumerative geometry, with two workshops so far, materials here and here. A lot of this subject has been influenced by ideas from physics, in particular from topological quantum field theories. While my Columbia colleague Andrei Okounkov has been on leave this year, he’s written two excellent surveys of some recent work, see here (for the ICM) and here. For older surveys from him, see here and here.

Update: I hear of yet another book in progress: Lee Smolin on realist approaches to quantum foundations, tentative title “Beyond the Quantum”.

Update: Today (February 12) the Harvard Physics department is hosting a celebration of the centennial of Julian Schwinger.

Update: One of the prize possessions of my youth was a copy of Abramowitz and Stegun, a huge Dover paperback version of their reference tome Handbook of Mathematical Functions. Physics Today has a long new article about the twenty-first century version of this, the Digital Library of Mathematical Functions, now a project run by the NIST.
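
As a small illustration of the sort of material these handbooks tabulate (my own example, not taken from the Physics Today article or the DLMF itself), here’s a quick numerical check of the handbook identity Ai(0) = 1/(3^{2/3}Γ(2/3)), assuming scipy is available:

```python
# Hypothetical illustration: verify a handbook special-function identity,
# Ai(0) = 1 / (3**(2/3) * Gamma(2/3)), numerically with scipy.
from scipy.special import airy, gamma

ai0 = airy(0.0)[0]                      # airy() returns the tuple (Ai, Ai', Bi, Bi')
handbook_value = 3 ** (-2 / 3) / gamma(2 / 3)

print(ai0, handbook_value)              # both should print ~0.355028053888
```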

Update: This week there’s a workshop on “Naturalness” going on in Aachen. Results from last year’s LHC run should start to appear soon, see here.

Posted in Uncategorized | 11 Comments

Muon g-2 Anomaly Gone?

I just learned some interesting news from Tommaso Dorigo’s blog. Go there for more details, but the news is the claim in these three papers that, accounting for GR effects on the precision measurement of the muon anomalous magnetic moment, the three sigma difference between experiment and theory goes away.

This sort of calculation needs to be checked by other experts in the field, and provides an excellent example of where you want good peer review. Presumably we’ll hear fairly soon whether the result holds up (the papers are not long or complicated). If this is right, it’s a fantastic example of our understanding of fundamental physics at work, with the muon g-2 experiments measuring something they weren’t even looking for, a subtle effect of general relativity.

Also interesting will be the implications for the ongoing experiment at Fermilab trying to achieve an even more precise g-2 measurement. I’m wondering whether there is any way for them to isolate the GR effect on their measurement.

The significance of this is that (setting aside questions about the neutrino sector) the muon g-2 measurement is the most prominent one I’m aware of where there has been a serious (three sigma) difference between experiment and Standard Model theory. This has often been interpreted as evidence for SUSY extensions of the SM. Projects producing “fits” that “predict” SUSY particles with masses somewhat too high to have been seen yet at the LHC use the g-2 anomaly as input. Tommaso ends by asking what happens to these fits if the g-2 anomaly goes away.
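
For readers who want to see how such a “three sigma” discrepancy is quantified, here is a minimal sketch (using round, illustrative placeholder numbers, not the official experimental or theoretical values):

```python
# Minimal sketch of how the significance of the g-2 anomaly is quantified.
# The values below are illustrative placeholders (in units of 1e-11 for
# a_mu = (g-2)/2), NOT the official experimental or Standard Model numbers.

a_mu_exp, sigma_exp = 116_592_080, 63   # hypothetical measured value and error
a_mu_th,  sigma_th  = 116_591_850, 43   # hypothetical SM prediction and error

delta = a_mu_exp - a_mu_th
sigma = (sigma_exp**2 + sigma_th**2) ** 0.5   # combine errors in quadrature

print(f"discrepancy: {delta} +/- {sigma:.0f}  ({delta / sigma:.1f} sigma)")
```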

Update: For some recent things to read about the g-2 anomaly, before this latest news, see here and here.

Update: Rumor about a problem with this calculation here.

Update: According to this comment and this one, the g-2 collaboration has identified a problem with the calculation, making the predicted GR effect unobservably small. If the authors agree, presumably we’ll soon see a corrected version of the paper(s).

Looking for more information about this, I ran across two blogs I hadn’t seen before. String theorist Joe Conlon of Oxford has a blog called Fickle and Freckled. Not much there besides a posting supporting Brexit and denouncing the fact that nearly all of his colleagues disagree with him, which he fears will cost the university money. A second one is string/SUSY phenomenologist Mark Goodsell’s Real Self-Energy. Goodsell seems to be a Lubos fan, dealing with the current g-2 story by linking to Lubos’s discussion and denouncing the authors as “effectively crackpots”. He also seems to be unhappy with a certain blogger:

I regularly read two or three physics blogs, since they report on the latest news (and rumours). Now, one of these blogs is very popular whose ostensible purpose is to persuade people that string theory is a misguided research topic. Obviously, this is something I disagree with. However, it also talks a lot about high-energy physics generally, and being rather well-connected it can be quite informative and useful. However, it pretty much uniformly takes a very pessimistic line about all concrete ideas for new physics. It is difficult to overstate how damaging this has been, in making physicists and scientists in neighbouring fields depressed about the future of high-energy physics, and opposing this trend is one of the reasons I would like to blog …

Goodsell has devoted his career to string and SUSY phenomenology, and seems to feel that this is the “future of high-energy physics”, and that I’m responsible for making people discouraged about it. Perhaps he should stop advertising Lubos and denouncing crackpots, and instead pay attention to the negative results from the LHC and the evidence they provide that his research program is a failure. Blaming his problems on me for pointing them out isn’t going to make them go away.

Update: As mentioned in the comments, Matt Visser now has a preprint criticizing the calculation in these articles.

Posted in Uncategorized | 54 Comments

15 Years of Multiverse Mania

Today is the 15th anniversary of the event that kicked off the Multiverse Mania that continues to this day, recently taking the form of a concerted attack on conventional notions of science. 2018 has seen an acceleration of this attack, with the latest example appearing this weekend.

On January 29, 2003, Kachru, Kallosh, Linde and Trivedi submitted a paper to the arXiv that outlined a construction of a supposed model of a metastable string theory state that had all moduli fixed. Ever since the first explosion of interest in string theory unification in 1984-5, it had been clear that a big problem with using string theory to get anything that looks like known physics was the so-called “moduli problem”. If you try to use 10d superstring theory to describe our universe, you need to somehow hide six of the dimensions, and the best way to do that seemed to be to argue that superstring theory allowed compactification on an unobservably small, approximately Calabi-Yau manifold. Such manifolds however come in families labeled by “moduli” parameters, which can be thought of as describing the size and shape of the Calabi-Yau. These moduli will show up as zero mass fields generating new long-range forces unless some dynamical mechanism can be found to fix their values. It was this that KKLT claimed to have found. I won’t even try to describe the complex KKLT proposal, which was aptly described by Lenny Susskind as a “Rube Goldberg mechanism”.

What string theorists had been hoping for was a moduli stabilization mechanism that would pick out specific moduli field values, getting rid of the unwanted dozens of new long-range forces and providing a way to make physical predictions. While the KKLT mechanism got rid of the unwanted forces, it had been observed three years earlier by Bousso and Polchinski, working with just parts of the Rube Goldberg mechanism, that this sort of thing led to not one specific value of the moduli fields, but an exponentially large number of possibilities. They had noted that this could allow an anthropic solution to the cosmological constant problem, and the KKLT fixing of all the moduli provided a model that accomplished this (without the long range forces).

KKLT did not mention anthropics and the multiverse, but less than a month later Lenny Susskind published The Anthropic Landscape of String Theory, a call to arms for anthropics and a founding document of Multiverse Mania. He immediately went to work on writing a book-length version of string theory multiverse propaganda aimed at the public, The Cosmic Landscape, which was published in 2005. Less than a month after Susskind’s manifesto, Michael Douglas published a statistical analysis of supposed string/M-theory vacua, and at some point the estimated number $10^{500}$ of vacua started appearing based on this sort of calculation.
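
To get a feel for where numbers like $10^{500}$ come from, here’s a minimal back-of-the-envelope sketch of the Bousso-Polchinski-style counting (the specific numbers of flux quanta and allowed values are illustrative assumptions, not derived from any particular compactification):

```python
# Back-of-the-envelope counting behind estimates like 10^500 vacua:
# with K independent flux quanta, each taking roughly n allowed values,
# the number of candidate vacua scales like n**K.
# K and n below are illustrative assumptions, not real Calabi-Yau data.
from math import log10

K = 500    # hypothetical number of independent fluxes/cycles
n = 10     # hypothetical number of allowed values per flux

print(f"roughly 10^{K * log10(n):.0f} candidate vacua")   # roughly 10^500
```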

I didn’t notice KKLT when it appeared, but did notice the Susskind arXiv article. I had just finished writing the first version of my book, and remember that my reaction to the Susskind article was roughly “Wow, if people like Susskind are arguing in effect that you can’t predict anything with string theory, that’s going to pull the plug on the subject.” The book took a while to find a publisher, and by the time it was published I had tacked on a chapter about the multiverse problem. I started this blog in March 2004, and recently looked back at some of the earliest postings, noticing that a huge amount of time was spent arguing with people about KKLT and its implications for the predictivity of string theory. It seemed clear to me from looking at the calculations people were doing that this kind of thing could not ever lead to a prediction of anything. I won’t go over those arguments, but claim that my point of view has held up well (no prediction of anything has ever emerged from such calculations, for reasons that are obvious if you start looking at them).

Back in 2003-4 I never would have believed that the subject would end up in the state it finds itself in now. With the LHC results removing the last remaining hope for observational evidence relevant to string theory unification, what we’ve been seeing the last few years has been a concerted campaign to avoid admitting failure by the destructive tactic of trying to change the usual conception of testable science. Two examples of this from last week were discussed here, and today there’s a third effort along the same lines, Quantum Multiverses, by Hartle. Unlike the others, this one includes material on the interpretation of quantum mechanics that one may or may not agree with, but that has no relevance to the fundamental problem of not having a predictive theory that can be tested.

I’m wasting far too much time discussing the obvious problems with articles like this, to no perceptible effect. Hartle, like the others, completely ignores the actual arguments against his position (he lists some references, describing them as “known to the author (but not necessarily read carefully by the author)”). In a section on “A FAQ for discussion” we find arguments that include

  • The cosmological multiverse is falsifiable, because maybe you’ll falsify quantum mechanics.
  • The cosmological multiverse is testable: “by experiment if a very large spacetime volume could be prepared with an initial quantum state from which galaxies, stars, life etc would emerge over billions of years.” Not surprisingly, no indication is given of how we will produce such a state or any theory that would describe what would happen if we did.
  • The theory of evolution is just like the theory of the cosmological multiverse.

Both the absurdity and the danger of this last argument are all too clear.

By the way, for a while earlier this year the arXiv started allowing trackbacks again to this blog, but then this stopped again. The origin of the ban seems to have been in the story described here and my early criticism of the string theory multiverse. I have no idea what their current justification for the ban is.

Update: A good place to look for information about the current state of string landscape calculations is at the website for this workshop. The idea that the problems of this subject can be solved by “modern techniques in data science” seems to me absurd, but for a different point of view, look at the slides of Michael Douglas. For something more sensible, try the talk by Frederik Denef, which describes some of the fundamental intractable problems:

  • You don’t have a complete theory: only some non-perturbative corrections are known, with no systematic understanding of these.
  • Dine-Seiberg Problem: When corrections can be computed, they are not important, and when they are important, they cannot be computed.
  • Measure Problem: Whenever a landscape measure is strongly predictive, it is wrong, and when it’s not, we don’t know if it’s right.
  • Tractability Problem: Whenever a low energy property is selective enough to single out a few vacua, finding these vacua is intractable.

Denef does make some very interesting comments about where modern techniques in data science might actually be useful: dealing not with the landscape of string vacua, but with the huge landscape of string theory papers (e.g. the 15,000 papers that refer to the Maldacena paper). He argues:

For obvious reasons, besides time constraints, incentives to write papers are much stronger for research scientists than to read them. So printed stacks pile up unread, PDFs remain ambitiously open until we reboot our laptops, recursive reference-backtracking gets sidetracked by the deluge of micro-distractions puncturing our days. This, plus the sheer volume of disorganized pages of important results, leads to loss of access to crucial knowledge, to repeated duplication of efforts, and to many other inefficiencies. Worst of all, it becomes increasingly harder for young brilliant minds to stand on the shoulders of giants, and thus to make revolutionary new discoveries. It seems inevitable that we will have to outsource the tedious task of parsing the literature, in search for relevant results, insights, questions and inspiration, to the Machines.

Update: There’s an interesting article at Quanta magazine about “Big Bounce” models of the Big Bang, competitors to inflationary models. Paul Steinhardt gives his take on the multiverse: “hogwash.”

Posted in Multiverse Mania | 29 Comments

Quick Links

Various things that may be of interest, ordered from abstract math to concrete physics:

  • Jacob Lurie is teaching a course this semester on Categorical Logic. Way back when I was a student at Harvard this is the kind of thing I would have found very exciting; I’m much less convinced of that now.
  • Talks from a workshop earlier this month on representation theory are available here.
  • The Harvard Gazette has an article about a project to develop a “pictorial mathematical language” first proposed by Arthur Jaffe. The project has a website here. It is being funded by an offshoot of the Templeton Foundation I didn’t know about, the Templeton Religion Trust, with one of their grants, TRT0080: “Concerning the Mathematical Nature of the Universe”, described as “exploring whether or not the universe admits of a consistent description, or more generally, whether our universe [can] be described by mathematics?”. They’re advertising a postdoc position here.
  • Adam Marsh has a wonderful book on Mathematics for Physics, especially from the geometrical point of view, with lots of detailed illustrations. It’s available from World Scientific here, or as a website here (there are also articles on the arXiv, here and here).
  • There’s a new Leinweber Center for Theoretical Physics at the University of Michigan, funded by an $8 million grant from the Leinweber Foundation. The inaugural talk was by Arkani-Hamed, on The Future of Fundamental Physics.
  • Each year recently there has been a Physics of the Universe Summit, described by some as involving “one of those glitterati Hollywood banquets”. Some years ago the glitterati evidently were interested in particle physics; recently it is instead quantum computing and AI (see last year and this year). At this year’s glitterati banquet, a kidnapping of Kip Thorne occurred.
  • Alex Dragt has a book about Lie methods, with applications to Accelerator physics. If you’re looking for detailed, very explicit information about symplectic transformations, there is a wealth of such material in this book (a minimal numerical check of the symplectic condition is sketched after this list).
  • I’m hearing a rumor (via an anonymous comment here) that HyperKamiokande has been denied funding. Can anyone confirm or deny?
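
As promised above, here is a minimal sketch of the symplectic condition that Dragt’s subject revolves around (my own toy example, not taken from the book): a linear transfer map M is symplectic when M^T J M = J.

```python
# Toy check (not from Dragt's book) that a transfer matrix M is symplectic,
# i.e. M^T J M = J, the condition for a linear canonical transformation
# in one (q, p) degree of freedom, as used in accelerator beam optics.
import numpy as np

J = np.array([[0.0, 1.0],
              [-1.0, 0.0]])            # the symplectic form

L_drift = 2.0                          # hypothetical drift length
M = np.array([[1.0, L_drift],
              [0.0, 1.0]])             # transfer matrix of a field-free drift

print(np.allclose(M.T @ J @ M, J))     # True: the drift map is symplectic
```
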
Posted in Uncategorized | 10 Comments

Beyond Falsifiability

Sean Carroll has a new paper out defending the Multiverse and attacking the naive Popperazi, entitled Beyond Falsifiability: Normal Science in a Multiverse. He also has a Beyond Falsifiability blog post here.

Much of the problem with the paper and blog post is that Carroll is arguing against a straw man, while ignoring the serious arguments about the problems with multiverse research. The only explanation of the views he is arguing against is the following passage:

a number of highly respected scientists have objected strongly to the idea, in large part due to a conviction that what happens outside the universe we can possibly observe simply shouldn’t matter [4, 5, 6, 7]. The job of science, in this view, is to account for what we observe, not to speculate about what we don’t. There is a real worry that the multiverse represents imagination allowed to roam unfettered from empirical observation, unable to be tested by conventional means. In its strongest form, the objection argues that the very idea of an unobservable multiverse shouldn’t count as science at all, often appealing to Karl Popper’s dictum that a theory should be falsifiable to be considered scientific.

The problem here is that none of those references contain anything like the naive argument that if we can’t observe something, it “simply shouldn’t matter”, or one should not speculate about it, or it “shouldn’t count as science at all.” His reference 7 is to this piece by George Ellis at Inference, which has nothing like such arguments, and no invocation of falsifiability or Popper. Carroll goes on to refer approvingly to a response to Ellis by Daniel Harlow published as a letter to Inference, but ignores Ellis’s response, which includes:

The process of science—exploring cosmology options, including the possible existence or not of a multiverse—is indeed what should happen. The scientific result is that there is no unique observable output predicted in multiverse proposals. This is because, as is often stated by proponents, anything that can happen does happen in most multiverses. Having reached this point, one has to step back and consider the scientific status of claims for their existence. The process of science must include this evaluation as well.

Ellis here is making the central argument that Carroll refuses to acknowledge: the problem with the multiverse is that it’s an empty idea, predicting nothing. It is functioning not as what we would like from science, a testable explanation, but as an untestable excuse for not being able to predict anything. In defense of empty multiverse theorizing, Carroll wants to downplay the role of any conventional testability criterion in our understanding of what is science and what isn’t. He writes:

The best reason for classifying the multiverse as a straightforwardly scientific theory is that we don’t have any choice. This is the case for any hypothesis that satisfies two criteria:

  • It might be true.
  • Whether or not it is true affects how we understand what we observe.

This seems to me an even more problematic and unworkable way of defining science than naive falsifiability. This whole formulation is extremely unclear, but it sounds to me as if various hypotheses about supreme beings and how they operate would by this criterion qualify as science.

Carroll also ignores the arguments made in my letter in the same issue (discussed here), which were specifically aimed at the claims for multiverse science that he is trying to make. According to him, multiverse theory is perfectly conventional science which just happens to be hard to evaluate:

That, in a nutshell, is the biggest challenge posed by the prospect of the multiverse. It is not that the theory is unscientific, or that it is impossible to evaluate it. It’s that evaluating it is hard.

The main point I was trying to make in the piece Carroll ignores is that the evaluation problem is not just “hard”, but actually impossible, and if one looks into the reason for this, one finds that it’s because his term “the theory” has no fixed reference. What “theory” is he talking about? One sort of “theory” he discusses is eternal inflation models of a multiverse in which you will have bubble collisions. Some such models predict observable effects in the CMB. Those are perfectly scientific and easy to evaluate, just wrong (since we see no such thing). Other such models predict no observable effect; those are untestable. “Hardness” has nothing to do with it; the fact that there is some narrow range of models where tests are in principle possible but hard to do is true but irrelevant.

The other actual theory Carroll refers to is the string theory landscape, and there the problem is not that evaluating the theory is “hard”, but that you have no theory. As with bubble collisions, you have plenty of conjectural models (i.e. “string vacua”) which are perfectly well-defined and scientific, but disagree with experiment so are easily evaluated as wrong. While many other conjectural models are very complex and thus technically “hard” to study, that’s not the real problem, and acquiring infinitely powerful computational technique would not help. The real problem is that you don’t have a theory: “M-theory” is a word but not an actual theory. The problem is not that it’s “hard” to figure out what the measure on the space of string vacua is, but that you don’t even know what the space is on which you’re looking for a measure. This is not a “hard” question; it’s simply a question for which you don’t have a theory that gives an answer.

I do hope Carroll and other multiverse fans will some day get around to addressing the real arguments being made; perhaps then this subject could move forward from the sorry state it seems to be stuck in.

Update: Philosopher of science Massimo Pigliucci has a very good discussion of this debate at his blog. Sabine Hossenfelder has a piece at 13.7 on Scientific Theory and the Multiverse Madness.

Update: Coel Hellier has a blog posting here taking Carroll’s side of the debate.

Update: Yet another new argument on the arXiv for the multiverse as science as usual, this time from Mario Livio and Martin Rees. The same problems as with the Carroll article recur here, including the usual refusal to acknowledge that serious counter-arguments exist. They give no references at all to anyone disagreeing with them, instead just knocking down the usual straw man, those unknown scientists who think that theorists should only discuss directly observable quantities:

We have already discussed the first main objection — the sentiment that envisaging causally disconnected, unobservable universes is in conflict with the traditional “scientific method.” We have emphasized that modern physics already contains many unobservable domains (e.g., free quarks, interiors of black holes, galaxies beyond the particle horizon). If we had a theory that applied to the ultra-early universe, but gained credibility because it explained, for instance, some features of the microphysical world (the strength of the fundamental forces, the masses of neutrinos, and so forth) we should take seriously its predictions about ‘our’ Big Bang and the possibility of others.

We are far from having such a theory, but the huge advances already made should give us optimism about new insights in the next few decades.

Livio and Rees do here get to the main point: we don’t have a viable scientific theory of a multiverse that would provide an anthropic explanation of the laws of physics. The causes for optimism that they list are the usual ones involving inflationary models that give essentially the same physics in other universes, not the different physics they need for anthropics. There is one exception, a mention of how:

accelerator experiments can (in principle) generate conditions in which a number of metastable vacuum solutions are possible, thereby testing the premises of the landscape scenario.

They give no reference for this claim and I think it can accurately be described as utter nonsense. It’s also (in principle) possible that accelerator experiments will generate conditions in which an angel will pop out of the interaction region bearing the laws of physics written on gold tablets. But utterly implausible speculation with no evidence at all backing it is not science.

The authors note that:

an anthropic explanation can be refuted, if the actual parameter values are far more ‘special’ than anthropic constraints require.

The problem with this is that you don’t have a theory that gives you a measure on parameter values, so you don’t know what is ‘special’ and what isn’t. As I keep pointing out, the fundamental problem here is even more basic than not having a probability measure on possible universes: we have no viable theory of what the space of possible universes is, much less any idea of how to calculate a measure on it. And no, we are not seeing any progress towards finding such a theory, quite the opposite over the past decades.

Truly depressing is that even the best of our journalists see this kind of article, written by two multiverse enthusiasts and giving no references or serious arguments for the other side, as “even-handed”.

Update: Two new excellent pieces explaining the problems with the multiverse. Ethan Siegel in particular explains the usually ignored problem that the kind of inflation we have any evidence for doesn’t give you different laws of physics, and ends with

The Multiverse is real, but provides the answer to absolutely nothing.

Sabine Hossenfelder explains four of the arguments generally given for why the Multiverse is science, answering them each in turn, with conclusions:

1. It’s falsifiable!

So don’t get fooled by this argument, it’s just wrong.

2. Ok, so it’s not falsifiable, but it’s sound logic!

So don’t buy it. Just because they can calculate something doesn’t mean they describe nature.

3. Ok, then. So it’s neither falsifiable nor sound logic, but it’s still business as usual.

So to the extent that it’s science as usual you don’t need the multiverse.

4. So what? We’ll do it anyway.

so you are allowed to believe in it. And that’s all fine by me. Believe whatever you want, but don’t confuse it with science.

Posted in Favorite Old Posts, Multiverse Mania | 40 Comments

The Quantum Spy

I don’t often read spy thrillers, but just finished one, The Quantum Spy, by David Ignatius. Ignatius is a well-known journalist at the Washington Post, specializing in international affairs and the intelligence community (and known to some as The Mainstream Media’s Chief Apologist for CIA Crimes). While the book is fiction, it’s also clearly closely based on reality. Sometimes writing this sort of “fiction” allows an author to provide their take on aspects of current events that confidentiality prevents them from writing about as “non-fiction”. Another example of this kind of writing is that of the now deceased French writer Gérard de Villiers, who wrote a large number of spy novels informed by his connections in the intelligence community. Unlike the often pornographic de Villiers, Ignatius treats the love-making of CIA spies with beautiful Mata-Haris discreetly.

The topic of The Quantum Spy is Chinese spying on American research in quantum computing. This is very much in the news these days: after finishing the book I picked up today’s paper to read about the arrest of a Chinese-American ex-CIA agent on charges of being a mole spying for the Chinese (a central theme of the Ignatius novel is the divided loyalties of a Chinese-American CIA agent). In the same issue of the paper is a Tom Friedman opinion piece about quantum computing breakthroughs and how “China, the N.S.A., IBM, Intel and Google are now all racing — full of sweat — to build usable quantum systems” that will revolutionize our lives.

I am no expert on quantum computing, but I do have quite a bit of experience with recognizing hype, and the Friedman piece appears to be well-loaded with it. In contrast, at least in describing the state of technology, the novel does a pretty good job of sticking to reality. Ignatius clearly spent quite a bit of time talking to those very knowledgeable about this. One part of his story is about a company closely based on D-Wave, and he explains that the technology they have is different from the true quantum computer concept that is being pursued by others. Majorana fermions and topologically protected states make an appearance in another part of the story. One character’s reading material to orient himself is Scott Aaronson’s Quantum Computing Since Democritus.

The novel portrays the US and Chinese governments as highly concerned and competitive about quantum computing technology and its security implications. I’d always naively assumed that classified research on quantum computing was carried on just by groups within the NSA or other security agencies, but Ignatius tells a different story. According to him, what happens is that groups performing unclassified government-funded quantum computing research in the open can find themselves forced to “go dark”, with their work going forward classified and no longer publicly accessible. His plot revolves around Chinese efforts to get information about such research. I have no idea whether this is complete fantasy or based in the reality of the situation.

In the novel and in real life, there are some analogies between the quantum computing story and the role nuclear physics played in the cold war between the US and Russia. Nuclear and particle physics arguably benefited a great deal for many years from governments worried about trying to get an edge in weapons technology, and to some extent the physics of quantum computing is starting to take on that same role, with (at least in the novel) the Chinese now playing the Russian role.

Just as particle physics likely got a lot of funding and public attention because of nuclear weapons, some parts of physics are now well-funded and high profile because of their connection to quantum computers. In fundamental theoretical physics, the hot topic is the idea that the old dream of replacing space and time with something more “quantum” is going to be realized as “it from qubit”, somehow using ideas from quantum computation to get an emergent quantum theory of gravitation. All my attempts to try and understand how this is supposed to work have left me rather mystified. Last week there was a Simons Foundation-funded school in Bariloche about this, with both ‘t Hooft and Maldacena lecturing on “Black holes and Quantum Information”. Perhaps these lectures will be informative and made available. This summer the IAS will host a school on From Qubits to Spacetime, maybe I’ll try again then to figure out what is going on by looking at its materials.

Posted in Book Reviews | 10 Comments

Adventures in Fine Hall

Every so often I get a copy of Princeton’s alumni publication in the mail, which I mostly ignore. The latest one however had an entertaining article about the Princeton mathematics department during the 1930s, entitled Adventures in Fine Hall. Various physicists (often misidentified as mathematicians) also make an appearance.

The article is based on an oral history project from the 1980s (Princeton Library website here, archive.org site here). It includes many stories I’d never heard before, including one about Hermann Weyl:

When attendance at his lectures shrank to three, Weyl threatened to end the course if it shrank further. One day when the third student got sick, the other two students “went out and got one of the janitorial staff to come and sit in the room, so there would be three people in the room and Weyl would give his lecture.”

Not quite the same, but this reminds me a bit of a story a Columbia colleague likes to tell about one of Claude Chevalley’s calculus classes here at Columbia during the 1950s. Supposedly (accuracy of story not guaranteed) students got together to complain to the chair that they couldn’t follow Chevalley’s lectures. After someone was dispatched to attend a lecture, and reported back that it was not surprising the students weren’t following, a deal was made with the students. Someone else would be found to give them lectures in parallel with Chevalley’s, at a different time, as long as they agreed to keep going to Chevalley’s lectures. Things are different now, hard to get students to go to one set of calculus lectures, much less two…

For more Princeton math and physics history, the Institute for Advanced Study has its own oral history project (started by Frank Wilczek’s wife, Betsy Devine), website here. I don’t know if any of those materials are available without going down to Princeton. The IAS has an extensive archive, with a lot of material available online (see for instance here). Poking around I noticed for instance Hermann Weyl’s Faculty file (here, here, here and here) and a memo Weyl prepared in 1945 evaluating various physicists and mathematicians as possible hires.

For those interested in IAS history, the archive describes a history of the years 1930-50 there which was commissioned, but not published since Oppenheimer felt it “portrayed the Institute in a less than flattering light.” A copy of this document is however now available from the IAS here and here.

Posted in Uncategorized | 10 Comments

Various News

Now back from vacation in a much warmer location than New York. Some things I noticed while away:

  • I see that Paris has a bid to host the 2022 ICM. Everyone should strongly support this; one can’t have too many excuses for a trip to that city.
  • I’m pleased to see that Sabine Hossenfelder will now be a columnist for Quanta magazine. She’s starting off with a piece on asymptotic safety. Also recommended is her new arXiv preprint on fine-tuning and naturalness.
  • There’s an interview with Nati Seiberg in an Indian publication, much of which consists of defending string theory research against the accusation of being a science with no scientific evidence. It’s more or less the usual defense that, despite its failure as a theory of unification, string theory research has led to progress in other fields such as condensed matter, astrophysics, cosmology and pure mathematics. One problem with this is that it’s very unclear now what “string theory research” really is these days, other than a sociological term. About the failure to find SUSY as predicted, he says:

    One idea is that we will find SUSY particles at the Large Hadron Collider (LHC) but this hasn’t happened yet. So given that it hasn’t, I think the odds that it would happen in the near future are very small. It’s not a likely scenario. If you had asked me 10 or 20 years ago, I would have thought it was quite reasonable that they would find it. But they haven’t and that idea did not prove to be right. But people who worked on it should not be penalised for it because they just laid out possibilities and things that experimentalists would like to have.

    I don’t think the issue is whether to “penalize” people for working on SUSY, but rather just to acknowledge that the failure to find SUSY at the LHC has important scientific implications, providing significant evidence against heavily promoted speculative ideas about supersymmetric extensions of the SM and superstring unification.

  • To keep up on the latest hot trends in particle theory, and sometimes find an excuse for a trip down to Princeton, I periodically take a look at the IAS High Energy Theory Seminar listings (available here). I was surprised to see that this week they’ve invited Steve Hsu to give a talk about something having nothing to do with HEP theory. His topic is “Genomic Prediction of Complex Traits”, a topic motivated by his long-standing interest in finding genetic determinants of intelligence.

    I heard from Hsu back in 2011, when he wrote to ask me if I would publicize this study, which he wrote about here. Hsu was looking for volunteers of “high cognitive ability” (he thought most theoretical physicists would qualify), who would get “free genotyping and tools to explore their genomes”. You can read some more debate about this here. A few years back he wrote an essay for Nautilus about how genetic engineering will one day create the smartest humans who have ever lived.

    Until a year or so ago I used to follow Hsu’s blog, finally stopped after getting sick of reading his defences of Trump and Steve Bannon. Besides the interest in race determining intelligence (see for instance here), Hsu has what seems to me a disturbing interest in the politics of racial resentment, from the white/East Asian side, which shows up in his fondness for Trump/Bannon. This also shows up in his involvement in a campaign to get Harvard to stop its current affirmative action policies and admit more students of East Asian descent.

    No, I’m not going to allow any arguments over this topic in the comments section. Such arguments immediately descend to an extreme level of stupidity, whatever the IQ of those involved.

Update: I just noticed that Steve Hsu has a blog post about this here, which surely is a better place to discuss what he’s up to.

Update: The Economist has an excellent piece about the problems of HEP physics and status of some non-collider experiments looking for BSM physics.

Update: See here for part two of Jerry Alper’s report from the event discussed here.

Update: Perhaps the audience at Friday’s IAS HEP theory seminar can ask the speaker whether his methods for analyzing people’s genetics can be used to tell whether they come from a “shithole country” or not.

Posted in Uncategorized | 15 Comments