Latest on the Wormholes

I had thought that the wormhole story had reached peak absurdity back in December, but last night some commenters pointed to a new development: the technical calculation used in the publicity stunt was nonsense, not giving what was claimed. The paper explaining this is Comment on “Traversable wormhole dynamics on a quantum processor”, from a group led by Norman Yao. Yao is a leading expert on this kind of thing, recently hired by Harvard as a full professor. There’s no mention in the paper of any conversations he might have had with the main theorist responsible for the publicity stunt, his Harvard colleague Daniel Jafferis.

Tonight Fermilab is still planning a big public event to promote the wormhole, with no news yet on whether it’s going to get cancelled. Also, no news from Quanta magazine, which up until now has shown no sign of understanding the extent to which it was taken in by this. Finally, no news from Nature about whether the paper will be retracted, and whether the retraction will be a cover story with a cool computer graphic of a non-wormhole.

Update: Dan Garisto goes through the Jafferis et al. paper, noting “Turns out it looked good only because they used an average (a fact not specified in the article).” and ending with

The unreported averages for the thermalization and teleportation signal make a stronger case for misconduct on the part of the authors.

I don’t understand why Fermilab was planning a public lecture promoting this, and with what has now come out, it should clearly be cancelled.

Update: I like the suggestion from Andreas Karch

Quanta magazine could make a video where the wormhole authors share in vivid detail the excitement they felt when they realized that their paper isn’t just overhyped but actually wrong.

Update: Garisto has a correction, explaining that the averaging is not the problem with Jafferis et al., but rather that the teleportation signal is only there for the pair of operators involved in the machine-learning training, not for other pairs of operators that should demonstrate the effect. In any case, best to consult the paper itself. If Jafferis et al. disagree with its conclusions, surely we’ll see an explanation from them soon.

Update: The Harvard Gazette promotes the wormhole publicity stunt, with “Daniel Jafferis’ team has for the first time conducted an experiment based in current quantum computing to understand wormhole dynamics.” As far as I can tell, that’s utter nonsense, with the result of the quantum computer calculation adding zero to our understanding of “wormhole dynamics”.

Update: Video of the Lykken talk now available, advertised by FNAL as Wormholes in the Laboratory.

Posted in Wormhole Publicity Stunts | 10 Comments

The Trouble With Path Integrals, Part II

This posting is about the problems with the idea that you can simply formulate quantum mechanical systems by picking a configuration space, an action functional S on paths in this space, and evaluating path integrals of the form
$$\int_{\text{paths}}e^{iS[\text{path}]}$$

Necessity of imaginary time

This section has been changed to fix the original mistaken version.
If one tries to do this path integral for even the simplest possible case (a non-relativistic free particle in one space dimension), the answer for the propagator in energy-momentum space is
$$G(E,p)=\frac {1}{E-\frac{p^2}{2m}}$$
Fourier transforming to real time is ill-defined (the integration contour passes through the pole at $E=\frac{p^2}{2m}$). Taking $t$ complex and in the upper half plane, for imaginary $t$ the Fourier transform is a well-defined integral. One then gets the real-time propagator by analytic continuation, as a boundary value. For a relativistic theory one has
$$G(E,p)=\frac{1}{E^2-(p^2+m^2)}$$
and two poles (at $E=\pm \sqrt{p^2+m^2}$) to deal with. Again Fourier transforming to real time is ill-defined, but one can Fourier transform to imaginary time, then use this to get a sensible real-time propagator by analytic continuation.
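To spell out why the imaginary-time side is better behaved (a standard observation, filled in here for orientation): rotating $E\to iE$ turns the relativistic propagator into, up to an overall sign,
$$G_E(E,p)=\frac{1}{E^2+p^2+m^2}$$
which has no poles for real $E$ and $p$, so the Fourier transform to imaginary time is an absolutely convergent integral. In the non-relativistic case the same procedure lands on the heat kernel $G(\tau,p)\propto\theta(\tau)\,e^{-\frac{p^2}{2m}\tau}$, exponentially damped rather than oscillatory.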

Trying to do the same thing for Yang-Mills theory, again one gets something ill-defined for real time, with the added disadvantage of no way to actually calculate it. Going to imaginary time and discretizing gives a version of lattice gauge theory, with well-defined integrals for fixed lattice spacing. This is conjectured to have a well-defined limit as the lattice spacing is taken to zero.
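To see the imaginary-time strategy in action, here is a minimal sketch (my own toy example, not anything from lattice gauge theory itself) of Metropolis Monte-Carlo sampling of the discretized Euclidean path integral for a one-dimensional harmonic oscillator. With the positive weight $e^{-S_E}$, every integral at fixed lattice spacing is manifestly finite and importance sampling works:

```python
import math
import random

# Euclidean action on a periodic time lattice with spacing a:
#   S_E = sum_j [ m (x_{j+1} - x_j)^2 / (2a) + a m w^2 x_j^2 / 2 ]

def local_action(x, j, a=0.5, m=1.0, w=1.0):
    """Terms of S_E involving site j (all that a single-site update changes)."""
    N = len(x)
    left, right = x[(j - 1) % N], x[(j + 1) % N]
    kinetic = m * ((x[j] - left) ** 2 + (right - x[j]) ** 2) / (2 * a)
    return kinetic + a * m * w * w * x[j] ** 2 / 2

def sample_x2(N=20, sweeps=6000, thermalize=1000, step=1.0, seed=7):
    """Metropolis estimate of <x^2>; the continuum answer is 1/(2 m w) = 0.5 here."""
    rng = random.Random(seed)
    x = [0.0] * N
    measurements = []
    for sweep in range(sweeps):
        for j in range(N):
            old, S_old = x[j], local_action(x, j)
            x[j] = old + step * (2 * rng.random() - 1)  # symmetric proposal
            dS = local_action(x, j) - S_old
            if dS > 0 and rng.random() >= math.exp(-dS):
                x[j] = old  # reject: restore the old value
        if sweep >= thermalize:
            measurements.append(sum(xi * xi for xi in x) / N)
    return sum(measurements) / len(measurements)
```

Running `sample_x2()` gives something near the continuum value 0.5, up to lattice-spacing and statistical corrections. Exactly this kind of computation has no direct real-time analog, since the weight there would be a complex phase.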

Not an integral and not needed for fermions

Actual fundamental matter particles are fermions, with an action functional that is quadratic in the fermion fields. For these there’s a “path integral”, but it’s in no sense an actual integral, rather an interesting algebraic gadget. Since the action functional is quadratic, you can explicitly evaluate it and just work with the answer the algebraic gadget gives you. You can formulate this story as an analog of an actual path integral, but it’s unclear what this analogy gets you.

Phase space path integrals don’t make sense in general

Another aspect of the fermion action is that it has only one time derivative. For actions of this kind, bosonic or fermionic, the variables are not configuration space variables but phase space variables. For a linear phase space and quadratic action you can figure out what to do, but for non-linear phase spaces or non-quadratic actions, in general it is not clear how to make any sense of the path integral, even in imaginary time.

In general this is a rather complicated story (see some background in the part I post). For an interesting recent take on the phase-space path integral, see Witten’s A New Look At The Path Integral Of Quantum Mechanics.

Update: A commenter pointed me to this very interesting talk by Neil Turok. The main motivation that Turok explains at the beginning of the talk (and also in the Q and A afterwards) is exactly one that I share. He argues that the lesson of the last 40 years is that one should not try to solve problems by making the Standard Model more complicated. All one needs to do is look more closely at the Standard Model itself and its foundations. If you do that, one thing you find is that there’s a “trouble with path integrals”. In Turok’s words, the problems with the path integral indicate that “the field is without foundations” and “nobody knows what they are doing”.

I do though very much part company with him over the direction he takes to try to get better foundations. He argues that you shouldn’t Wick rotate (analytically continue in time), but should complexify paths, analytically continuing in path space. For some problems doing the latter may be a better idea than doing the former, and in his talk he works out a toy QM calculation of this kind. But the model he studies (the anharmonic oscillator) doesn’t at all show that going to the imaginary time theory is a bad idea; for some calculations that works very well. He’s motivated by the problem of defining the path integral for gravity, where Euclidean quantum gravity is a problematic subject, but I think the gravitational version of the toy model will also be problematic. I think the ideas I’ve been pursuing, involving the way the symmetries of spinors behave in Euclidean signature, give a promising new way to think about this, and you won’t get that from just trying to complexify the conventional variables used to describe geometries.

Posted in Uncategorized | 30 Comments

The Trouble With Path Integrals, Part I

Two things recently made me think I should write something about path integrals: Quanta magazine has a new article out entitled How Our Reality May Be a Sum of All Possible Realities and Tony Zee has a new book out, Quantum Field Theory, as Simply as Possible (you may be affiliated with an institution that can get access here). Zee’s book is a worthy attempt to explain QFT intuitively without equations, but here I want to write about what it shares with the Quanta article (see chapter II.3): the idea that QM or QFT can best be defined and understood in terms of the integral
$$\int_{\text{paths}}e^{iS[\text{path}]}$$
where S is the action functional. This is simple and intuitively appealing. It also seems to fit well with the idea that QM is a “many-worlds” theory involving considering all possible histories. Both the Quanta article and the Zee book do clarify that this fit is illusory, since the sum is over complex amplitudes, not a probability density for paths.

This posting will be split into two parts. The first will be an explanation of the context of what I’ve learned about path integrals over the years. If you’re not interested in that, you can skip to part II, which will list and give a technical explanation of some of the problems with path integrals.

I started out my career deeply in thrall to the idea that the path integral was the correct way to formulate quantum mechanics and quantum field theory. The first quantum field theory course I took was taught by Roy Glauber, and involved baffling calculations using annihilation and creation operators. At the same time I was trying to learn about gauge theory and finding that sources like the 1975 Les Houches Summer School volume or Coleman’s 1973 Erice lectures gave a conceptually much simpler formulation of QFT using path integrals. The next year I sat in on Coleman’s version of the QFT course, which did bring in the path integral formalism, although only part-way through the course. This left me with the conclusion that path integrals were the modern, powerful way of thinking, Glauber was just hopelessly out of touch, and Coleman didn’t start with them from the beginning because he was still partially attached to the out-of-date ways of thinking of his youth.

Over the next few years, my favorite QFT book was Pierre Ramond’s Field Theory: A Modern Primer. It was (and remains) a wonderfully concise and clear treatment of modern quantum field theory, starting with the path integral from the beginning. In graduate school, my thesis research was based on computer calculations of path integrals for Yang-Mills theory, with the integrals done by Monte-Carlo methods. Spending a lot of time with such numerical computations further entrenched my conviction that the path integral formulation of QM or QFT was completely essential. This stayed with me through my days as a postdoc in physics, as well as when I started spending more time in the math community.

My first indication that there could be some trouble with path integrals came, I believe, around 1988, when I learned of Witten’s revolutionary work on Chern-Simons theory. This theory was defined by a very simple path integral: a path integral over connections, with action the Chern-Simons functional. What Witten was saying was that you could get revolutionary results in three-dimensional topology simply by calculating the path integral
$$\int_{\mathcal A} e^{iCS[A]}$$
where the integration is over the space of connections A on a principal bundle over some 3-manifold. During my graduate student days and as a postdoc I had spent a lot of time thinking about the Chern-Simons functional (see unpublished paper here). If I could find a usable lattice gauge theory version of CS[A] (I never did…), that would give a way of defining the local topological charge density in the four-dimensional Yang-Mills theory I was working with. Witten’s new quantum field theory immediately brought back to mind this problem. If you could solve it, you would have a well-defined discretized version of the theory, expressed as a finite-dimensional version of the path integral, and then all you had to do was evaluate the integral and take the continuum limit.

Of course this would actually be impractical. Even if you solved the problem of discretizing the CS functional, you’d have a high dimensional integral over phases to do, with the dimension going to infinity in the limit. Monte-Carlo methods depend on the integrand being positive, so won’t work for complex phases. It is easy though to come up with some much simpler toy-model analogs of the problem. Consider for example the following quantum mechanical path integral
$$\int_{\text {closed paths on}\ S^2} e^{i\frac{1}{2}\oint A}$$
Here $S^2$ is a sphere of radius 1, and A is locally a 1-form such that dA is the area 2-form on the sphere. You could think of A as the vector potential for a monopole field, where the monopole was inside the sphere.

If you think about this toy model, which looks like a nice simple version of a path integral, you realize that it’s very unclear how to make any sense of it. If you discretize, there’s nothing at all damping out contributions from paths for which position at time $t$ is nowhere near position at time $t+\delta t$. It turns out that since the “action” only has one time derivative, the paths are moving in phase space not configuration space. The sphere is a sort of phase space, and “phase space path integrals” have well-known pathologies. The Chern-Simons path integral is of a similar nature and should have similar problems.
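The obstruction to Monte-Carlo here can be made numerical in a crude way (my own schematic illustration, not tied to the sphere model): sampling an average of $e^{iS}$ works when the phase is nearly stationary, but once $S$ ranges over many multiples of $2\pi$ the signal is buried under the $1/\sqrt{N}$ statistical noise — the well-known sign problem:

```python
import cmath
import random

def phase_average(spread, n=100_000, seed=0):
    """Monte-Carlo estimate of <e^{iS}> with S drawn uniformly from [-spread, spread].

    The exact answer, sin(spread)/spread, decays like 1/spread, while the
    sampling noise stays at ~1/sqrt(n): the signal drowns once the phase
    wraps around many times.
    """
    rng = random.Random(seed)
    total = sum(cmath.exp(1j * rng.uniform(-spread, spread)) for _ in range(n))
    return total / n
```

For a nearly stationary phase, `phase_average(0.1)` has magnitude close to 1; for a rapidly varying one, `phase_average(100.0)` is statistically indistinguishable from zero even with $10^5$ samples.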

I spent a lot of time thinking about this; one thing I wrote early on (1989) is available here. You get an interesting analog of the sphere toy model for any co-adjoint orbit of a Lie group G, with a path integral that should correspond to a quantum theory with state space the representation of G that the orbit philosophy associates to that orbit. One such path integral that does look like it should make sense is the path integral for a supersymmetric quantum mechanics system that computes the index of a Dirac operator. Lots of people were studying such things during the 1980s-early 90s, not so much more recently. I’d guess that a sensible Chern-Simons path integral will need some fermionic variables and something like the Dirac operator story (in the closest analog of the toy model, you’re looking at paths moving in a moduli space of flat connections).

Over the years my attention has moved on to other things, with the point of view that representation theory is central to quantum mechanics. To truly play a role as a fundamental formulation of quantum mechanics, the path integral needs to find its place in this context. There’s a lot more going on than just picking an action functional and writing down
$$\int_{\text{paths}}e^{iS[\text{path}]}$$

Posted in Uncategorized | 8 Comments

What’s Going Right in Particle Physics

Since I had a little free time today, I was thinking of writing something motivated by two things I saw today, Sabine Hossenfelder’s What’s Going Wrong in Particle Physics, and this summer’s upcoming SUSY 2023 conference and pre-SUSY 2023 school. While there are a lot of ways in which I disagree with Hossenfelder’s critique, there are some ways in which it is perfectly accurate. For what possible reason is part of the physics community organizing summer schools to train students in topics like “Supersymmetry Phenomenology” or “Supersymmetry and Higgs Physics”? “Machine Learning for SUSY Model Building” encapsulates nicely what’s going wrong in one part of theoretical particle physics.

To begin my anti-SUSY rant, I looked back at the many pages I wrote 20 years ago about what was wrong with SUSY extensions of the SM in chapter 12 of Not Even Wrong. There I started out by noting that there were 37,000 or so SUSY papers at the time (SPIRES database). Wondering what the current number is, I did the same search on the current INSPIRE database, which showed 68,469 results. The necessary rant was clear: things have not gotten better, the zombie subject lives on, fed by summer schools like pre-SUSY 2023, and we’re all doomed.

But then I decided to do one last search, to check the number of articles by year (e.g. search “supersymmetry and de=2020”). The results were surprising, and I spent some time compiling numbers for the following table:

These numbers show a distinct and continuing fall-off starting after 2015, the reason for which is clear. The LHC results were in, and a bad idea (SUSY extensions of the SM) had been torpedoed by experiment, taking on water and sinking after 20 years of dominance of the subject. To get an idea of the effect of the LHC results, you can read this 2014 piece by Joe Lykken and Maria Spiropulu (discussed here), by authors always ahead of the curve. No number of summer schools on SUSY phenomenology is going to bring this field back to health.

Going back to INSPIRE, I realized that I hadn’t needed to do the work of creating the above bar graph, the system does it for you in the upper left corner. I then started trying out other search terms. “string theory” shows no signs of ill-health, and “quantum gravity”, “holography”, “wormhole”, etc. are picking up steam, drawing in those fleeing the sinking SUSY ship. With no experimentalists to help us by killing off bad ideas in these areas, some zombies are going to be with us for a very long time…
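For anyone who wants to reproduce these per-year counts programmatically, here is a sketch against the public INSPIRE REST API. The endpoint and response layout are my assumptions about that API, not something from this post; the SPIRES-style `de YYYY` date-of-earliest syntax is the one used in the searches above:

```python
from urllib.parse import urlencode

# Assumed public INSPIRE-HEP REST endpoint.
INSPIRE_API = "https://inspirehep.net/api/literature"

def yearly_count_url(term, year):
    """URL for a search counting papers matching `term` with earliest date in `year`.

    size=1 since only the hit count is needed, not the records themselves."""
    query = f"{term} and de {year}"
    return f"{INSPIRE_API}?{urlencode({'q': query, 'size': 1})}"

def total_hits(response_json):
    """Extract the total hit count from an (assumed Invenio-style) JSON response."""
    return response_json["hits"]["total"]
```

Fetching `yearly_count_url("supersymmetry", 2015)` with any HTTP client and feeding the decoded JSON to `total_hits` would give one bar of the by-year graph.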

Update: There are a couple of things from Michael Peskin relevant to this that might be of interest: an extensive 2021 interview, and a very recent “vision statement” about particle physics. Peskin’s point of view I think is exactly what Hossenfelder is arguing against. He continues to push strongly for a very expensive near-term collider project, and doesn’t seem to have learned much of anything from a long career of working on failed ideas. I remember attending a 2007 colloquium talk here at Columbia where he put up a slide showing all the SUSY particles with masses of a few hundred GeV that the LHC was about to find, and assured us that over the next 5-10 years we’d be seeing evidence of a WIMP from several different sources. According to the interview, in 2021 he was working on Randall-Sundrum models (see here) and wondering about the “little hierarchy problem” of why all the new physics was just above the LHC-accessible scale rather than at the electro-weak breaking scale. I very much agree with some of his vision statement (the Higgs sector is the part of the SM we don’t fully understand, and the argument for colliders is that they are the only way to study this), but his devotion to failed ideas (not just Randall-Sundrum, but string theory also) as the way forward is discouraging. In the interview he admits that it’s looking like post-LHC abandonment of particle physics is the most likely future:

I think if you were a betting man, you would bet that LHC will be the last great particle accelerator, and the whole field will dissipate after the LHC is over.

Most of his prominent colleagues have the same attitude that the subject is over, and have already moved on, unfortunately to things like creating wormholes in the lab.

Posted in Uncategorized | 19 Comments

Various and Sundry

A few things that may be of interest:

  • Fermilab is continuing to push the wormhole publicity stunt, with Joe Lykken, the lab’s deputy director for research, giving a public lecture on the 17th on Wormholes in the Laboratory. The promotional text goes way out of its way to mislead about the science:

    A wormhole, also known as an Einstein-Rosen bridge, is a hypothetical tunnel connecting remote points in spacetime. While wormholes are allowed by Albert Einstein’s theory of relativity, wormholes have never been found in the universe. In late 2022, the journal Nature featured a paper co-written by Joe Lykken, leader of the Fermilab Quantum Institute, that describes observable phenomenon produced by a quantum processor that “are consistent with the dynamics of a transversable wormhole.” Working with a Sycamore quantum computer at Google, a team of physicists was able to transfer information from one area of the computer to another through a quantum system utilizing artificial intelligence hardware.

    The “utilizing artificial intelligence hardware” seems to be an incoherent attempt to add more buzzwords to the bullshit. If you know anyone with any influence at the lab, you might want to consider contacting them and asking them to try and get this embarrassment canceled.

  • On another embarrassment-to-science front, Fumiharu Kato is announcing on Twitter the publication of the paperback edition of his book promoting the IUT proof of the abc conjecture. In his talk about this at the Simons Center he dealt with the problems with the proof by pretending they don’t exist, but (from what I can make out using Google Translate), he says he’ll deal with this in the paperback edition. His point of view seems to be that once PRIMS (chief editor S. Mochizuki) decided to accept the IUT papers, no one should be writing things like this. Perhaps he’s just trying to point out that this is potentially a huge embarrassment for PRIMS and RIMS in general, which is undeniable. But he appears to be going down the truly unfortunate path of making this not about mathematics but about Japanese national honor, with one tweet getting translated as:

    Some non-Japanese mathematicians questioned, “This is an insult to the Japanese mathematics world! Why don’t Japanese mathematicians say anything after being so insulted?” I also think the question is valid.

  • Continuing with the difficult and depressing, there’s the ongoing Russian war of aggression in Ukraine. The New York Times reports on the efforts of the Simons Foundation to help sustain Ukrainian science. The Guardian has an excellent article on the problems the LHC experiments are having due to the fact that Russian physicists make up a significant part of the collaborations. I had heard this story back in September from John Ellis, whom I met for the first time in London (at an event which now has a video of a discussion I was involved in). Tommaso Dorigo has an article about this on his blog, where he takes a point of view that is appealing (no borders or nationalism in science), but I don’t think it’s so simple.

    I’ve been wondering if there is a historical parallel to look to, with one possibility the situation in 1938-39 when Hitler invaded Czechoslovakia. By this point (as now) a lot of scientists had fled to the West, and the issue must have arisen of how scientists in the West should deal with their German colleagues who were staying in Germany.

  • Turning to something much more pleasant, Michael Harris points me to a video of a talk by Manjul Bhargava that has finally appeared, one of a series of talks at a 2018 conference in honor of Barry Mazur.
  • This week in my graduate class I’m talking a bit about Howe duality, and just discovered that his original unpublished articles on the subject are now available online, see here and here.
  • Finally, I only just learned about the recently published volume of Sidney Coleman’s correspondence, which appeared under the title Theoretical Physics in Your Face. Especially fun to read for those like me who remember the era at Harvard when Coleman was at the center of activity. One quote, his opinion in 1985 evaluating the grant to the Princeton theory group:

    If I have any serious criticism of this group at all, it is that their recent concentration on superstrings seems to me a tactical error, too much devotion of effort to a line of development that (at least to an outsider’s eye) is not that promising. However, I could well be wrong in this, and, even if I am right, they’ll soon discover they’ve drilled a dry hole and be off exploring other fields next year.

    Unfortunately the last part of this was very wrong…

Update: Nature has an article on the resolution of the LHC Russian authorship issue.

Posted in Uncategorized, Wormhole Publicity Stunts | 9 Comments

This Week’s Hype

The New York Times today has Where is Physics Headed (and How Soon Do We Get There?). It’s an interview by Dennis Overbye of Maria Spiropulu and Michael Turner, the chairs of the NAS Committee on Elementary Particle Physics – Progress and Promise. This committee is tasked with advising the DOE and NSF so they can “make informed decisions about funding, workforce, and research directions.”

The transcript of the interview is rather bizarre, for several reasons. Spiropulu, probably the main person responsible for the recent wormhole publicity stunt, is here the voice of sober reason:

Overbye: String theory — the vaunted “theory of everything” — describes the basic particles and forces in nature as vibrating strings of energy. Is there hope on our horizon for better understanding it? This alleged stringiness only shows up at energies millions of times higher than what could be achieved by any particle accelerator ever imagined. Some scientists criticize string theory as being outside science.

Spiropulu: It’s not testable.

whereas Turner (an astrophysicist with no particular background in mathematics) is a big fan of string theory as mathematics:

Turner: But it is a powerful mathematical tool. And if you look at the progress of science over the past 2,500 years, from the Milesians, who began without mathematics, to the present, mathematics has been the pacing item. Geometry, algebra, Newton and calculus, and Einstein and non-Riemannian geometry.

We will have to wait and see what comes from string theory, but I think it will be big.

On the topic of particle physics and unification, there’s

Overbye: You’re referring to Grand Unified Theories, or GUTs, which were considered a way to achieve Einstein’s dream of a single equation that encompassed all the forces of nature. Where are we on unification?

Spiropulu: The curveball is that we don’t understand the mass of the Higgs, which is about 125 times the mass of a hydrogen atom.

When we discovered the Higgs, the first thing we expected was to find these other new supersymmetric particles, because the mass we measured was unstable without their presence, but we haven’t found them yet. (If the Higgs field collapsed, we could bubble out into a different universe — and of course that hasn’t happened yet.)

That has been a little bit crushing; for 20 years I’ve been chasing the supersymmetrical particles. So we’re like deer in the headlights: We didn’t find supersymmetry, we didn’t find dark matter as a particle.

Turner makes the case one often hears these days from string theorists: the field may have given up on unification, but it has moved on to something much less boring:

Turner: I feel like things have never been more exciting in particle physics, in terms of the opportunities to understand space and time, matter and energy, and the fundamental particles — if they are even particles. If you asked a particle physicist where the field is going, you’d get a lot of different answers.

Overbye: But what’s the grand vision? What is so exciting about this field? I was so excited in 1980 about the idea of grand unification, and that now looks small compared to the possibilities ahead.

Turner: The unification of the forces is just part of what’s going on. But it is boring in comparison to the larger questions about space and time. Discussing what space and time are and where they came from is now within the realm of particle physics.

From the perspective of cosmology, the Big Bang is the origin of space and time, at least from the point of view of Einstein’s general relativity. So the origin of the universe, space and time are all connected. And does the universe have an end? Is there a multiverse? How many spaces and times are there? Does that question even make sense?

Spiropulu: To me, by the way, unification is not boring. Just saying.

The problem with the idea that we’ve moved on to a new, far more exciting time in physics, devoted to replacing conventional space-time and exploring the multiverse, is that there’s no actual way to do experiments about any of this (other than the wormholes…). If this is the vision of the coming NAS report, a possible response from the DOE and NSF may be “That’s nice, we can now shut down all those expensive labs and experiments doing the boring stuff and focus on investigating the wormholes that Google’s quantum computer is producing for us.”

Update: Ars Technica has some refreshing anti-hype: Requiem for a string: Charting the rise and fall of a theory of everything, with subtitle “String theory was supposed to explain all of physics. What went wrong?”

It’s a good explanation of what went wrong with string theory, although one might point out that pretty much the same story was first explained by others in detail 20 years ago.

There definitely seems to be a recent trend in popular science articles to finally admit that string theory unification may simply be a failed and now dead idea. This article gets brutal at times:

The dearth of evidence has slaughtered so many members of the supersymmetric family that the whole idea is on very shaky ground, with physicists beginning to have conferences with titles like “Beyond Supersymmetry” and “Oh My God, I Think I Wasted My Career.”

Posted in This Week's Hype | 12 Comments

What is the AdS/CFT Conjecture?

In recent years I’ve found there’s no point to trying to have an intelligible argument about “string theory”, simply because the term no longer has any well-defined meaning. At the KITP next spring, there will be a program devoted to What is String Theory?, with a website that tells us that “the precise nature of its organizational principle remains obscure.” As far as I can tell though, the problem is not one of insufficient precision, but not knowing even the general nature of such an organizational principle.

What one hears when one asks about this these days is that the field has moved on to focusing on the one part of this that is understood: the “AdS/CFT conjecture.” I’ve gotten the same answer when asking about the meaning of the “ER=EPR conjecture”, and recently the claim seems to be that the black hole information paradox is resolved, again, somehow using the “AdS/CFT conjecture.” Today I noticed this twitter thread from Jonathan Oppenheim raising questions about the “AdS/CFT conjecture” and the discussion there reminded me that I don’t understand what the people involved mean by those words. What exactly (physicist meaning of “exactly”, not mathematician meaning) is the “AdS/CFT conjecture”?

To be clear, I have tried to follow this subject since its beginnings, and at one point was pretty well aware of the exact known statements relating type IIB superstring theory on five-dim AdS space times a five-sphere with M units of flux to N=4 U(M) SYM. While this provided an impressive realization of the old dream of relating a large M QFT to a weakly coupled string theory, it bothered me that there was no meaning to the duality in the sense that no one knew how to define the strongly coupled string theory. This problem seemed to get dealt with by turning the conjecture into a definition of string theory in this background, but it was always unclear how that was supposed to work.

So, my question isn’t about that, but about the much more general use of the term to refer to all sorts of gravity/CFT relationships in various dimensions. There are hundreds if not thousands of theorists actively working on this these days, and my question is aimed at them: what exactly do you mean when you say “the AdS/CFT conjecture”?

Update: The ongoing discussion between Jonathan Oppenheim, Geoff Pennington and Andreas Karch about this on Twitter is very interesting, and indicates that it isn’t so clear exactly what “the AdS/CFT conjecture” is. For me, and presumably many others, it would be great to have a source for an authoritative discussion of what is known about this topic. The Twitter format is very much not optimal for discussions like this.

Posted in Uncategorized | 31 Comments

Spring Course

This semester I’m teaching the second half of our graduate course on Lie groups and representations, and have started making plans for the course, which will begin next week. There’s a web-page here which I’ll be adding to as time goes on. The plan is to try and write up lecture notes for most of the course, some of which may be rewrites of older notes written when I taught earlier versions of this course. I’ll post these typically after the corresponding lectures. Any corrections or comments will be welcome.

This year I’m hoping to integrate ideas about “quantization” into the course more than in the past, starting off with the mathematics behind what physicists often call “canonical quantization”. This topic is worked out very explicitly and in great detail in this book, but in this course I’ll be giving a more streamlined presentation from a more advanced point of view. This subject has a somewhat different flavor than is usual for math graduate courses: instead of proving things about classes of representations, the topic is one very specific representation.

This topic is also the simplest example of the general philosophy of trying to understand Lie group representations in terms of the geometry of a “co-adjoint orbit”, and I’ll try and say a bit about this “orbit philosophy” and “geometric quantization”.

The next topic of the course will likely be more standard: the classification of finite-dimensional representations of semi-simple complex Lie algebras (or, equivalently, compact Lie groups), and their construction using Verma modules. For this topic it’s hard to justify spending a lot of time writing notes, since there already are several places this has been done very well (e.g. Kirillov’s book). After doing this algebraically, I’ll go back to the geometric and orbit point of view and explain the Borel-Weil-Bott theorem, which gives a geometric construction of these representations.

For the last part of the course, I hope to discuss the representations of SL(2,R) and the general classification of real semi-simple Lie algebras and groups. If I ever manage to understand what’s going on with the real Weil group and the Langlands classification of representations of real Lie groups, maybe I’ll say something about that, but that is probably too much to hope for.

Throughout the course, in addition to the relation to quantization, I also hope to explain some things about relations to number theory. These would include the theory of theta functions early on, and modular forms at the end.

Posted in Uncategorized | 5 Comments

Pierre Schapira on Récoltes et Semailles

Earlier this year I bought a copy of the recently published version of Grothendieck’s Récoltes et Semailles, and spent quite a lot of time reading it. I wrote a bit about it here, and intended to write something much longer when I finished reading, but I’ve given up on that idea. At some point this past fall I stopped reading, having made it through all but 100 pages or so of the roughly 1900 total. I planned to pick it up again and finish, but haven’t managed to bring myself to do that, largely because getting to the end would mean I should write something, and the task of doing justice to this text looks far too difficult.

Récoltes et Semailles is a unique and amazing document; some of the things in it are fantastic and wonderful. Quoting myself from earlier this year:

there are many beautifully written sections, capturing Grothendieck’s feeling for the beauty of the deepest ideas in mathematics. One gets to see what it looked like from the inside to a genius as he worked, often together with others, on a project that revolutionized how we think about mathematics.

A huge problem with the book is the way it was written, which provides a convincing advertisement for word processors. Grothendieck seems not to have significantly edited the manuscript. When he thought of something relevant to what he had written previously, instead of editing that, he would just type away and add more material. It’s unclear how this could ever happen, but it would be a great service to humanity to have a competent editor put to work doing a huge rewrite of the text.

The other problem though is even more serious. The text provides deep personal insight into Grothendieck’s thinking, which is simultaneously fascinating and discouraging. His isolation and decision to concentrate on “meditation” about himself left him semi-paranoid and without anyone to engage with and help channel his remarkable intellect. It’s frustrating to read hundreds of pages about motives which consist of some tantalizing explanations of these deep mathematical ideas, embedded in endless complaints that Deligne and others didn’t properly understand and develop these ideas (or properly credit him). One keeps thinking: instead of going on like this, why didn’t he just do what he said he had planned earlier, write out an explanation of these ideas?

As an excuse for giving up on writing more myself about this, I can instead recommend Pierre Schapira’s new article at Inference, entitled A Truncated Manuscript. Schapira provides an excellent review of the book, and also explains a major problem with it. Grothendieck devotes endless pages to complaints that Zoghman Mebkhout did not get sufficient recognition for his work on the so-called Riemann-Hilbert correspondence for perverse sheaves. Mebkhout was Schapira’s student, and he explains that a correct version of the story has the ideas involved originating with Kashiwara, who was the one who should have gotten more recognition, not Mebkhout. According to Schapira, he explained what had really happened to Grothendieck, who wrote an extra twenty pages or so correcting mistaken claims in Récoltes et Semailles, but these didn’t make it into the recently published version. If someone ever gets to the project of editing Récoltes et Semailles, a good starting point would be to simply delete all of the material that Grothendieck included on this topic.

The extra pages described are available now here, as part of an extensive website called the Grothendieck Circle, now being updated by Leila Schneps. For a wealth of material concerning Grothendieck’s writings, see this site run by Mateo Carmona. It includes a transcription of Récoltes et Semailles that provides an alternative to the recently published version.

The Schapira article is a good example of some of the excellent pieces that the people at Inference have published since they started nearly ten years ago (another example relevant to Grothendieck would be Pierre Cartier’s A Country Known Only by Name from their first issue). I’ve heard news that they have lost a major part of their funding, which was reportedly from Peter Thiel and was one source of controversy about the magazine. I wrote about this here in early 2019 (also note discussion in the comments). My position then and now is that the concerns people had about the editors and funding of Inference needed to be evaluated in the context of the result, which was an unusual publication putting out some high quality articles about math and physics that would likely not have otherwise gotten written and published. I hope they manage to find alternate sources of funding that allow them to keep putting out the publication.

Posted in Uncategorized | 8 Comments

The New Yorker and the Publicity Stunt

The wormhole publicity stunt story just keeps going. Today an article about the Google Santa Barbara lab and quantum computer used in the publicity stunt appeared in the New Yorker. One of the main people profiled is Hartmut Neven, the lab founder and a publicity stunt co-author. He is described as follows:

Neven, originally from Germany, is a bald fifty-seven-year-old who belongs to the modern cast of hybridized executive-mystics. He talked of our quantum future with a blend of scientific precision and psychedelic glee. He wore a leather jacket, a loose-fitting linen shirt festooned with buttons, a pair of jeans with zippered pockets on the legs, and Velcro sneakers that looked like moon boots. “As my team knows, I never miss a single Burning Man,” he told me.

The article explains what has been going on at the Google lab under Neven’s direction:

in the past few years, in research papers published in the world’s leading scientific journals, he and his team have also unveiled a series of small, peculiar wonders: photons that bunch together in clumps; identical particles whose properties change depending on the order in which they are arranged; an exotic state of perpetually mutating matter known as a “time crystal.” “There’s literally a list of a dozen things like this, and each one is about as science fictiony as the next,” Neven said. He told me that a team led by the physicist Maria Spiropulu had used Google’s quantum computer to simulate a “holographic wormhole,” a conceptual shortcut through space-time—an achievement that recently made the cover of Nature.

There are some indications given that the wormholes aren’t everything you’d like a wormhole to be:

Google’s published scientific results in quantum computing have at times drawn scrutiny from other researchers. (One of the Nature paper’s authors called their wormhole the “smallest, crummiest wormhole you can imagine.” Spiropulu, who owns a dog named Qubit, concurred. “It’s really very crummy, for real,” she told me.) “With all these experiments, there’s still a huge debate as to what extent are we actually doing what we claim,” Scott Aaronson, a professor at the University of Texas at Austin who specializes in quantum computing, said. “You kind of have to squint.”

I took another look at the Nature article and realized that at the end it has a section explaining the contributions of each author (I’ll reproduce the whole thing as an appendix here). For Neven it has

Google’s VP Engineering, Quantum AI, H.N. coordinated project resources on behalf of the Google Quantum AI team.

Two physicists profiled are John Preskill and Alexei Kitaev. Quantum computing jobs and funding are having a big impact on academia in this field. According to the article:

Preskill and Kitaev teach Caltech’s introductory quantum-computing course together, and their classroom is overflowing with students. But, in 2021, Amazon announced that it was opening a large quantum-computing laboratory on Caltech’s campus. Preskill is now an Amazon Scholar; Kitaev remained with Google. The two physicists, who used to have adjacent offices, today work in separate buildings. They remain collegial, but I sensed that there were certain research topics on which they could no longer confer.

Someone told me that the Amazon lab where Preskill works has postdoc-type positions for theoretical physicists, with salaries of about $250K.

Theoretical physics hype and quantum computing hype come together prominently in the article. Besides Shor’s algorithm and its implications for cryptography, here’s the rest of what quantum computers promise:

A quantum computer could open new frontiers in mathematics, revolutionizing our idea of what it means to “compute.” Its processing power could spur the development of new industrial chemicals, addressing the problems of climate change and food scarcity. And it could reconcile the elegant theories of Albert Einstein with the unruly microverse of particle physics, enabling discoveries about space and time.

How long until quantum computers unify GR and the Standard Model? We just need better, fault-tolerant qubits, and then:

A thousand fault-tolerant qubits should be enough to run accurate simulations of molecular chemistry. Ten thousand fault-tolerant qubits could begin to unlock new findings in particle physics.

The hype here is far hypier than any of the string theory hype I’ve been covering over the years, and it looks like it’s got a lot more money and influence behind it, so it will be a major force driving the field in coming years and decades.

Appendix:

The Nature contributions section is:

J.D.L. and D.J. are senior co-principal investigators of the QCCFP Consortium. J.D.L. worked on the conception of the research program, theoretical calculations, computation aspects, simulations and validations. D.J. is one of the inventors of the SYK traversable wormhole protocol. He worked on all theoretical aspects of the research and the validation of the wormhole dynamics. Graduate student D.K.K.47 worked on theoretical aspects and calculations of the chord diagrams. Graduate student S.I.D. worked on computation and simulation aspects. Graduate student A.Z.48 worked on all theory and computation aspects, the learning methods that solved the sparsification challenge, the coding of the protocol on the Sycamore and the coordination with the Google Quantum AI team. Postdoctoral scholar N.L. worked on the working group coordination aspects, meetings and workshops, and follow-up on all outstanding challenges. Google’s VP Engineering, Quantum AI, H.N. coordinated project resources on behalf of the Google Quantum AI team. M.S. is the lead principal investigator of the QCCFP Consortium Project. She conceived and proposed the on-chip traversable wormhole research program in 2018, assembled the group with the appropriate areas of expertise and worked on all aspects of the research and the manuscript together with all authors.

Update: John Horgan’s take on the stunt is Physicists Teleport Bullshit Through a Wormhole.

Update: Quanta is still actively promoting their story that

Physicists have purportedly created the first-ever wormhole, a kind of tunnel theorized in 1935 by Albert Einstein and Nathan Rosen that leads from one place to another by passing into an extra dimension of space.

with nothing on their site indicating that this story has gotten an almost universally negative reaction from knowledgeable scientists. The one exception I’ve seen is Lubos Motl, who comments on the Quanta story. Lubos also engaged in a long exchange here with Matt Strassler. I had been worried about how Lubos was doing, so it’s reassuring to see that he’s still out there, still himself, and still reliably defending the most indefensible products of string theory hype in his characteristic style.

Update: A surprising number of theorists seem willing to help hype the publicity stunt. See for instance this from Penn. It seems to me full of misinformation, for instance

Heckman: In fact, the entire experiment could have been done on a classical machine; it just would have taken a lot more time.

In fact, the “experiment” was actually done on a classical computer, in a very short amount of time. For studying this kind of model, classical computers are hugely faster and more capable than any current quantum computer. Here they were used to search for a simplified version of the real problem that was easy enough that it could be done by the quantum computer.

Update: Dan Garisto on his blog has an article about the wormhole fiasco, originally intended for publication in SciAm. Garisto explains that SciAm was originally not taken in by the publicity stunt:

I should tell you, when my editor at Scientific American sent me the embargoed press release on Thanksgiving with the subject line “Hmmm,” I responded dubiously. “They used 9 qubits! What the hell could that possibly tell you?” We chose not to cover it—initially.

He also gives a correct description of what this really was:

…the supposed bombshell dropped by the Nature paper is a calculation of an approximation of a model conjectured to be equivalent to a lower-dimensional gravity for a universe that isn’t ours. 

and challenges the misleading messaging from the scientists and from Quanta:

Quantum message discipline is sorely needed. Clarifying their headline change, Quanta noted that “The researchers behind the new work — some of the best-respected physicists in the world — frequently and consistently described their work to us as ‘creating a wormhole.’” 

When reached for comment, the study’s co-leader Maria Spiropulu pointed to a Caltech FAQ in her team’s defense. “Did we claim to have produced or observed a wormhole deformation of 3+1-dimensional spacetime?” the FAQ asks. The answer it then offers is a firm “No.” The FAQ further elaborates that what the researchers saw was only “consistent with the dynamics of a traversable wormhole.” The Nature paper is less absolute. There, Spiropulu and her co-authors describe their research as “the experimental realization of traversable wormhole dynamics on nine qubits” and discuss how they used a quantum teleportation protocol to “insert another qubit across the wormhole from right to left.” 

And in a video produced ahead of publication by Quanta, several researchers spoke gushingly, saying, “This is a wormhole” and “I think we have a wormhole, guys.” This is, at best, deeply misleading.

He also has a quote from Natalie Wolchover which tries to explain the point of view which she (and others like Susskind) take that a quantum computer calculation of a toy mathematical model of a physical system is somehow really creating and doing a laboratory experiment on the physical system being modeled:

Natalie Wolchover, a Pulitzer Prize-winning science writer at Quanta, argues that when a quantum computer simulates a toy model of quantum matter, such as the SYK model, it is really “creating” the quantum system it asks about. “It’s profound but somehow I can’t put my finger on what it means about the difference between ‘real’ and ‘simulated,’” she wrote in an email.

This is complete nonsense, as Scott Aaronson tried to explain in the quote he gave for the NYT article about this:

If this experiment has brought a wormhole into actual physical existence, then a strong case could be made that you, too, bring a wormhole into actual physical existence every time you sketch one with pen and paper.

Posted in Wormhole Publicity Stunts | 29 Comments