Some math-research items:
- Mura Yakerson has been doing a really wonderful series of interviews with mathematicians, available at her math-life balance web-page or YouTube channel. I’ve just started listening to some of them, including ones with Peter Scholze and Dustin Clausen (Clausen is John Tate’s grandson; the latest AMS Notices has a memorial article).
- There’s a remarkable report out from Peter Scholze about the progress of the Liquid Tensor Experiment. Back when I first heard about this, I figured it was a clever plot by Scholze to get other people to help with a very complicated part of a proof, by getting them to work out the details, with the excuse being that they would be doing a computer check of the proof. It seemed to me very unlikely that you could check such a proof with a computer, but that by forcing humans to disambiguate things carefully enough in preparation for a computer proof, he’d get a human-checked proof. Looks like I was wrong.
- For yet more Scholze news, the Fields Medal symposium this year will be devoted to his work.
- Trying to find something of interest in math, that wasn’t Scholze-related, I noticed this site devoted to the case of Azat Miftakhov, where there will be an online Azat Miftakhov Day program. Foiled though on the Scholze front, since he’s a speaker there, talking about Condensed Mathematics.
- The list of those giving plenary lectures at next year’s ICM is here.
Update: Kevin Hartnett at Quanta has a good new article up about quantum field theory and mathematics (an inexhaustible topic…)
Update: Also from the Simons Foundation, there’s a wonderful profile of my Columbia colleague Andrei Okounkov, who has been very active in bringing together mathematics and ideas from quantum field theory.
Update: Nature has a story about the Liquid Tensor Experiment.
I like Scholze’s riff on the metal band Liquid Tension Experiment, which was formed back in 1997.
Let me note that the big news in the list of sections for the ICM is that there are two new Applied Math sections for invited lectures:
17. Statistics and Data Analysis, with 8–11 talks (split from Probability and Statistics, and with a lot more talks than statistics used to get)
and
18. Stochastic and Differential Modelling, with 4–6 talks (a completely new section).
A lot of the other sections had their targets reduced by one talk to make room, although I think they’re also expanding the number of invited lectures a little. I knew something like this might be coming (but didn’t think to check it until I saw your blog post).
Here is a question from someone whose knowledge of quantum field theory is rudimentary. Does anyone know how the new work on making quantum field theory rigorous discussed in the Quanta article fits in with previous work in this area, such as axiomatic field theory a la Wightman, algebraic quantum field theory a la Haag (this is mentioned briefly), or constructive field theory?
I second Mark Hillery’s question.
Also — do Dijkgraaf’s comments in the Quanta article seem somewhat in tension with his recent talk and earlier article? If successfully mathematizing QFT will lead us to new fundamental physics, that seems like quite a different project from just using the physics we have to analyze more and more complicated things, I should think.
Talagrand’s book, What Is a Quantum Field Theory? A First Introduction for Mathematicians, is planned for March 2022:
https://www.cambridge.org/in/academic/subjects/mathematics/mathematical-physics/what-quantum-field-theory-first-introduction-mathematicians?format=HB&isbn=9781316510278
Mark Hillery,
Much of the new work discussed is based on perturbation theory, in principle providing a new mathematical framework for renormalized perturbation theory in QFT, one more adapted to things like working on arbitrary manifolds, not just flat space-time. From the point of view of mathematicians, what has always been a goal is to have a mathematically well-defined framework for describing the QFTs physicists have used to get new results in topology, i.e. topological quantum field theories. This new work to some extent gives that, but so far just for perturbative theories, while the most dramatic TQFT results use non-perturbative information.
The kinds of approaches you mention are motivated by a desire to have a rigorous non-perturbative quantum field theory. The Wightman axioms crucially use Poincare symmetry and Minkowski signature, while the Costello stuff is set up to work on arbitrary manifolds with no Poincare symmetry, and with Euclidean signature. So, here there’s not a lot of overlap. For the relation to AQFT, see
https://arxiv.org/abs/1711.06674
The results there are pretty limited; that paper is just about free field theory.
For constructive QFT, again the whole point is to try to construct something outside of perturbation theory. The state of the art there has always been lattice-regularized QFT, with the problem being how to take the continuum limit. Unfortunately, I don’t know of much recent work or progress on this.
S,
To a large extent, theorists studying QFT have given up on trying to understand more about the QFT that describes fundamental physics (because the problem is too hard), and instead work on QFTs for which new ideas may give some results, but the physical relevance of these QFTs is typically in condensed matter physics.
For quantum gravity, this takes the form of people mainly studying low-dimensional toy models. Whether these will tell us about 4d quantum gravity is quite speculative, but often they have direct relevance to very different subjects, e.g. quantum information theory, or condensed matter problems (where the low dimension is the relevant one).
This may provide some context for reconciling Dijkgraaf’s different-sounding comments.
Public lecture notes of a course by Sourav Chatterjee, based on Michel Talagrand’s QFT book are at
https://statweb.stanford.edu/~souravc/qft-lectures-combined.pdf
On the mathematical proof of the “DOZZ” formula (3-point function/OPE coefficient) for 2D Liouville field theory
https://www.quantamagazine.org/mathematicians-prove-2d-version-of-quantum-gravity-really-works-20210617/
I made a MathOverflow post back in April 2020, where among other things I said (regarding Mochizuki’s abc proof), “I do think that it is reasonable for skeptics of the proof to request that those who claim to understand the proof, and who want to cultivate a whole new generation of younger mathematicians to pursue IUT, to formalize the proof in a proof assistant.”
I tried to post a similar comment here on Not Even Wrong, but it didn’t get published, perhaps because Peter thought the comment was silly. In light of how the Liquid Tensor Experiment has played out (which doesn’t surprise me at all except that I expected it would take about a year, rather than six months, to get to this point), perhaps the suggestion doesn’t sound as silly any more.
@Timothy Chow
at the very least, such a move would necessitate trimming all the cruft from the definitions and reducing to the actual key lemma of interest, which is some statement about monoid actions (as opposed to a much more complicated statement about Frobenioids). And I don’t mean Corollary 3.12, but the key sticking point inside the claimed proof of it.
This is what Scholze did for the LTE: he isolated the key technical proposition, which doesn’t really involve condensed abelian groups at all, and that is how far the formalisation has actually gotten.
The only problem is that one needs a community of people large and invested enough (and experienced in formalisation) to divide up the labour and do it, as well as a person who understands the whole proof and is willing to give precise technical advice, as Scholze did for the formalisation team. The forthcoming clarifications for IUT have so far not been, to my eyes, of that nature, but more like mathematical koans.
@DavidRoberts
I agree with what you say. I should perhaps amend the word “skeptics” to “funding agencies” (if you click through to MathOverflow, I explain this point in more detail). From the standpoint of a funding agency that believes Mochizuki is acting in good faith, Mochizuki should be able to play the role that Scholze did. And the funding agency can help with assembling the formalization team, which I imagine would comprise grad students and postdocs who are on good terms with Mochizuki.
Timothy Chow,
I don’t remember, but I likely didn’t post your comment about abc because the issue of formalizing Mochizuki’s proof seemed irrelevant, for two reasons:
1. It seemed to me highly unlikely that this was within the realm of possibility, even if one thought of the situation as involving a conventional but very complex proof, with the author of the proof behaving conventionally.
2. Going on about proof assistants just obscures the actual problem: Mochizuki refuses to engage with Stix and Scholze in the standard way a mathematician is supposed to respond to those challenging a claimed proof. Instead of providing more details and clarifying the challenged arguments, he has refused to add anything significant to his original manuscript, issuing instead documents which convincingly destroy his own credibility. In addition, it looks like no one around him is able to provide such details and clarifications.
Given the Liquid Tensor Experiment success, I agree my judgment about 1. was flawed. But I still think 2. is the relevant point, and discussing proof assistants in the context of the problem with Mochizuki’s proof is a red herring. The problem here isn’t the kind of thing Scholze was struggling with: an argument so complex that the best human mathematicians might miss a subtle problem. Yes, there a proof assistant might be the answer. But the problem here is a human one of a very different kind.
Peter, I absolutely agree that 2 is the relevant point; I absolutely agree with your characterization of the sociological situation; and I absolutely agree that the problem is a human one of a very different kind from the Liquid Tensor Experiment. But I disagree that “going on about proof assistants obscures the actual problem.” Quite the opposite! The point I am making about proof assistants is that, under the assumption that they are sufficiently easy to use (point 1), they are ideally suited for addressing the actual (human, sociological) problem.
Abstractly, we have a small Community A that insists on the correctness of a certain proof, and a much larger Community B that is unconvinced. For sociological reasons, the usual mechanisms for resolving mathematical disagreements have broken down. Community A insists that it has provided full details and that Community B is behaving in a disrespectful manner and refusing to acknowledge the plain truth. Community B, of course, says the same about Community A. The point is that the existence of proof assistants means that Community B can fairly say to Community A, “You need to explain your proof to a proof assistant before we’ll give you more money.” If Community A’s claims are true, then there should be no reason why Community A can’t formalize the proof. The key point is that a proof assistant, being a dumb machine, cannot be accused of being disrespectful or failing to conform to some human standards of decorum.
If Community A is right, and produces a formalization, then the formalized proof should convince Community B. (I’m assuming here that the two communities can at least agree on a proof assistant that they both trust, and that they can agree on a statement that is simple enough for there to be no debate about whether the formalized statement correctly captures the disputed claim. These assumptions are reasonable in this case, I think.) On the other hand, if Community A fails to produce a formalized proof given a reasonable amount of time and money, then it’s going to become increasingly difficult for them to explain what the holdup is.
The proof assistant allows a funding agency to withhold funding without having to engage in a potentially unpleasant confrontation with Community A. If it so desires, the agency can insist that it is totally on the side of Community A, and that Community B is a bunch of evil morons; its pen is poised to sign the check as soon as Community A satisfies this small bureaucratic requirement of producing a formalization. Crucially, the funding agency requires very little technical expertise to play its role in this drama.
Timothy Chow,
I guess I’m just inherently skeptical about claims that the solution to human problems is “computers”. A colleague of mine likes to claim that mathematics is the only academic subject where, when two people disagree about something, they go into a room to discuss it, and emerge with one of them saying “I was wrong”. This is an exaggeration, but it’s an important ideal to try to maintain. The problem here is a violation of the community norms designed to uphold that ideal, and I don’t think computers can substitute for those norms.
It does look like proof assistants can be very helpful when an argument is so complicated that no one is sure whether it is right or not. This latest is a success of this kind; another example would be the Flyspeck project in the case of the Kepler conjecture. Cases where experts disagree about whether an argument is right should be resolved by more discussion, until one side gets the other to see where they went wrong. If people can’t do this, they’re not in any position to start on the much harder project of getting a computer to resolve their differences.
There’s nothing magical about the proof assistant being a computer; any competent neutral third party would do. I claim that from time immemorial, many human disputes of all kinds have been settled by the intervention of an impartial mediator that both sides trust. Surely you’re not inherently skeptical of that claim? The only special thing here is that the neutral third party happens to be a machine, and is therefore indisputably impartial.
I have discussed this dispute resolution procedure with a variety of people with different amounts of technical training, and just about everyone immediately understands it and is puzzled why the mathematical community doesn’t follow it (I have to explain that proof assistants are much more cumbersome to use than they might think). The few people who have balked have been mathematicians. It initially puzzled me why the principle that a neutral mediator can aid in dispute resolution—a principle which is obvious to almost everyone on the planet—would be so difficult for mathematicians to grasp, but I think I understand why. It relates to the point you just made, that mathematicians have been spoiled by the scarcity of serious disputes in our field (when it comes to mathematical correctness, at least). In every other field, disputes are a dime a dozen, and so everyone intuitively understands what methods are available to address them. Mathematicians, on the other hand, know of only one method: talk it out. Since that almost always works, we never bother to think about a Plan B for cases where it doesn’t work. If talking it out doesn’t work, we have no idea what to do.
Another contributing factor, I think, is that mathematicians tend to be surprisingly slow to catch on to what computers can do for them. When the Appel–Haken–Koch proof of the four-color theorem came out, did mathematicians react by saying, “This is awesome! What other computer-assisted proofs can we come up with?” Some may have, but by and large, the initial reaction of most mathematicians was to either ignore the result or complain about the role of the computer. A similar dynamic is happening with proof assistants. For example, in Scholze’s blog post, he wrote, “Initially, I imagined that the first step would be that a group of people study the whole proof in detail and write up a heavily digested version, broken up into many many small lemmas, and only afterwards start the formalization of each individual lemma.” Experts in formalization have long known that proof assistants have progressed far beyond that stage, but the news has been slow to seep into the general consciousness, and I think it’s partly because most mathematicians have incorrect preconceived notions about proof assistants that cause them to resist them rather than milk them for everything they’re worth.
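To make concrete what “explaining a proof to a proof assistant” involves, here is a minimal sketch in Lean, the assistant used for the Liquid Tensor Experiment. The lemma is a deliberately trivial toy example of mine, not anything from the LTE codebase: the point is only that the statement must be written out with complete precision, and the checker then accepts nothing short of a full proof of exactly that statement.

```lean
-- Toy illustration (not from the LTE code): commutativity of addition
-- on the natural numbers, proved by induction. Each rewrite step cites
-- a previously verified lemma; if any step fails, Lean rejects the proof.
theorem toy_add_comm (m n : Nat) : m + n = n + m := by
  induction n with
  | zero =>
    -- base case: m + 0 = 0 + m
    rw [Nat.add_zero, Nat.zero_add]
  | succ k ih =>
    -- inductive step: m + (k+1) = (k+1) + m, using hypothesis ih
    rw [Nat.add_succ, Nat.succ_add, ih]
```

A machine checking every such step is precisely why neither side of a dispute can accuse it of bias: the proof either compiles or it does not.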
You say that invoking a proof assistant to help resolve differences is a “much harder project” than resolving them by discussion. That is true under normal circumstances, but going on about this fact obscures the actual problem, which is that the conventional method of human discussion has broken down in this case. I’m totally with you that calm discussion is the way disagreements should be resolved, but what if that method fails? Then the inequality flips. Human discussion becomes infinitely hard, and appealing to a proof assistant becomes potentially easier.
Note that I’m just as skeptical as you are about Mochizuki’s proof. I don’t expect a formalized proof of abc to emerge, for the simple reason that I doubt that “Community A” really has a proof. But that doesn’t matter. What matters is the conditional statement that if there is a proof then it should be formalizable in a proof assistant. Now you might counter that if there is a proof then it should be explainable to Community B, and that is true under normal circumstances, but what Community A will say is that Community B is being stubborn and disrespectful. Again, the value of a proof assistant is that it cannot be accused of being disrespectful.
One final point. I don’t claim that a proof assistant will completely solve the sociological breakdown by getting Community A and Community B onto the same page. If Community A does not really have a proof, then unifying the two communities probably won’t happen. But what the proof assistant can do in that case is to make it clear to the whole world that Community B is right and Community A is being unreasonable. At the moment, it’s unclear to many outsiders which community is right. You and I know the absurdity of Mochizuki’s claim that Scholze is an idiot, but it’s harder for the rest of the world (read: Japanese funding agencies) to be sure. That could change with a dispute resolution procedure that lay people could readily understand.
Mark Hillery,
Kasia Rejzner’s book (cited in the Quanta article) belongs to the Haag-Kastler school of algebraic quantum field theory and is part of the work of Klaus Fredenhagen and coworkers on QFT in curved spacetimes. It seems to me that it is now possible to prove rigorously much of the folklore “theorems” of perturbative QFT in this semiclassical scheme.