For physicists:
- For the latest news on US HEP funding, see presentations at this recent HEPAP meeting. It is rarely publicly acknowledged by scientists, but during the Trump years funding for a lot of scientific research has increased, often dramatically. This has been due not to Trump administration policy initiatives, but instead to the Republican party’s embrace of fiscal irresponsibility whenever there’s a Republican in the White House. After bitterly complaining about the size of the budget deficit and demanding reductions in domestic spending during the Obama years, congressional Republicans turned on a dime once Trump was elected, voting every year for huge across-the-board spending increases, tax decreases, and corresponding deficit increases. Each year the Trump administration produces a budget document calling for unrealistically large budget decreases; it is completely ignored, with Congress passing large increases and Trump signing them into law.
For specific numbers, see for instance page 20 of this presentation, which shows figures for the DOE HEP budget in recent years. The pattern for FY2020 looks the same: a huge proposed decrease, and a huge likely increase (see the number for the House Mark).
The result of all this is that far greater funds are available than expected during the last P5 planning exercise, so instead of having to make the difficult decisions P5 expected, a wider list of projects can be funded.
For mathematicians:
- Michael Harris has a new article in Quanta magazine, mentioning suggestions by two logicians that the Wiles proof of Fermat’s Last Theorem should be formalized and checked by a computer. He explains why most number theorists think this sort of project is beside the point:
Wiles and the number theorists who refined and extended his ideas undoubtedly didn’t anticipate the recent suggestions from the two logicians. But — unlike many who follow number theory at a distance — they were certainly aware that a proof like the one Wiles published is not meant to be treated as a self-contained artifact. On the contrary, Wiles’ proof is the point of departure for an open-ended dialogue that is too elusive and alive to be limited by foundational constraints that are alien to the subject matter.
I don’t know who the “two logicians” Harris is referring to are, or what the nature of their concerns about the Wiles proof might be. I had thought this might have something to do with number theorist Kevin Buzzard’s Xena Project, but in a comment Buzzard describes such a formalization as currently impractical, with no clear motivation.
Taking a look at the page describing the motivation for the Xena Project, I confess to finding it unconvincing. The idea of revamping the undergraduate math curriculum to base it on computer-checkable proofs seems misguided, since I don’t see at all why this is a good way to teach mathematical concepts or motivate undergraduate students. The complaints about holes in the math literature (e.g. details of the classification of finite simple groups) don’t seem to me to be something that can be remedied by a computer.
- For some cutting-edge number theory, with no computers in sight, see the lecture notes from a recent workshop on geometrization of local Langlands.
- Finally, congratulations to this year’s Shaw Prize winner, Michel Talagrand. Talagrand in recent years has been working on writing up a book on quantum field theory for mathematicians, and I see that Sourav Chatterjee last fall taught a course based on it, producing lecture notes available here.
For a wonderful recent interview with Talagrand, see here.
I first got to know Michel when he started sending me very helpful comments and corrections on my QM book when it was a work in progress. He’s single-handedly responsible for a lot of significant improvements in the quality of the book. I’ve recently received significant help from someone else, Lasse Schmieding, who has sent me a very helpful list of mistakes and typos in the published version of the book. I’ve now fixed just about all of them. Note that the version of the book available on my website has all typos/mistakes fixed; for the published version, there’s a list of errata.
Update: For more about the Michael Harris vs. Kevin Buzzard argument, see here, or plan on attending their face-off in Paris next week.
It’s interesting how my formalisation project divides opinion. Some mathematicians I’ve talked to have been super-enthusiastic; others can’t see the point at all. The same is true with the undergraduates that I’ve been teaching; some lap it up, others find it bewildering and/or pointless. One reason I persevere is simply that it is clearly something different. I spent 25 years as a number theory researcher and lecturer doing, in some sense, the same thing as everyone else. Maybe I just got bored.
Another reason I persevere is that I have bought into the vision of Tom Hales’ Formal Abstracts project https://formalabstracts.github.io/ , which is also different. Hales is going to need lots and lots of definitions of mathematical objects for his project to succeed. So why not train undergraduates to at least be aware that mathematics can be done this way? Commelin, Massot and I have formalised the definition of a perfectoid space in Lean, which convinces me that these languages are now ready for what Hales has planned. However your criticisms are perfectly valid and your feeling about Hales’ project might be similar.
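For readers who have never seen Lean, here is a toy sketch of what “formalising a definition” means in practice. This is emphatically not the perfectoid space definition (that one runs to thousands of lines of Lean); it is just a bare-bones group, to show how Lean packages the data and the axioms of a definition into a single structure:

```lean
-- A toy illustration only, not mathlib's actual group definition:
-- a structure bundling the data (mul, one, inv) with the axioms.
structure bare_group (G : Type) :=
(mul : G → G → G)
(one : G)
(inv : G → G)
(mul_assoc : ∀ a b c : G, mul (mul a b) c = mul a (mul b c))
(one_mul : ∀ a : G, mul one a = a)
(mul_left_inv : ∀ a : G, mul (inv a) a = one)
```

Scaling this style of definition up to something like a perfectoid space is exactly the kind of work Hales’ project will need in bulk.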
In 15 years’ time this stuff could have completely taken over mathematics. Alternatively it could just be sitting there being not very helpful and of interest only to specialists. I really do not know which way it will go — but one thing is for sure, over the last couple of years I have met smart people with very different opinions about this.
I’ve been quite involved in Lean over the last two years, and as a mathematician my motivations are quite different from the “traditional” explanations for why we ought to formalise mathematics.
Sure, it might be nice if we could worry less about gaps in the literature, but it’s not like I’ve been losing sleep.
Instead, I want interactive theorem provers for the same reason I enjoy computer algebra systems — they might make us more powerful mathematicians. At the moment the state of the art is very far from this point: doing mathematics in any of the existing theorem provers (including my favourite) is *excruciating*, and even more worryingly requires some skills that most mathematicians just don’t have.
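To give a concrete taste of what I mean by excruciating, here is a minimal sketch in Lean (with the definition of convergence spelled out by hand rather than taken from mathlib) of a fact that is a one-word proof on paper:

```lean
import data.real.basic
import tactic

-- Hand-rolled definition of "s tends to l", for illustration only.
def tends_to (s : ℕ → ℝ) (l : ℝ) : Prop :=
∀ ε > 0, ∃ N : ℕ, ∀ n ≥ N, abs (s n - l) < ε

-- On paper: "a constant sequence converges to its value, obviously".
-- In Lean, every quantifier has to be handled explicitly.
example (c : ℝ) : tends_to (λ n, c) c :=
begin
  intros ε hε,  -- fix ε > 0
  use 0,        -- any N works
  intros n hn,  -- fix n ≥ N
  simp,         -- abs (c - c) rewrites to 0, leaving the goal 0 < ε
  linarith,     -- which follows from the hypothesis hε
end
```

Multiply that overhead across a research paper and the problem is obvious.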
Nevertheless, this doesn’t feel inevitable, and in particular there is so much “low hanging fruit” in terms of prospects for making interactive theorem provers easy to use, and more powerful, that I think it is worth investing in.
I personally doubt that computers will ever do the highest-level conceptual mathematics that we’re used to doing. On the other hand, I think computers will be able to help us with a large part of the work we already do, and I can envisage a future where, when I have a rough idea of how to proceed on a problem, I can hand that idea off to a computer and ask it: “Can you sort out the details? If not, can you explain to me where you run into trouble?”
If any mathematicians are curious, please come over to https://leanprover.zulipchat.com/ and say hi! There’s a very friendly and enthusiastic community over there, and we’re very happy to answer questions, provide help, and give suggestions for interesting first projects for exploring interactive theorem provers.
Peter, the link to the presentations at the HEPAP meeting is not working. Could you post the correct link?
Thanks
Shantanu,
Fixed.
“…during the Trump years funding for a lot of scientific research has increased, often dramatically” Let’s acknowledge that along with the bad there is some good. I am glad you mentioned this, because a lot of people in the liberal/progressive camp suffer from Trump derangement syndrome, which makes them unobjective and therefore unreliable (and they would not have reported such a thing as you did).
Supernaut,
I don’t actually think the fiscally irresponsible approach to the US budget of Trump and the Republicans is a good thing at all, even if as a side effect some more HEP research is getting funded. There are plenty of ways the world has become a much worse place over the years that have worked out fine for me and others like me. This doesn’t make them a good thing. I hear Stalin was great on the math and physics research funding front.
All: no, I don’t want to host here a discussion of such larger issues, not specifically about the HEP budget news.
Peter – I completely agree with you.
Hopefully the relevance to larger issues isn’t too much of a distraction: my understanding is that big HEP experiments can take decades to plan and assemble. Further progress takes serious and very long-term commitment. The way the US government presently operates, one cannot reliably project beyond a few months whether or not any part of it will be funded. That a certain capacity benefits in any particular budget cycle is merely a fact, effectively disconnected by caprice from any future eventuality. There’s nothing to celebrate, and the current modus operandi surely can’t be good for anyone trying to do basic research.
Scott,
I also want interactive theorem provers to help me save boring work, and hence make us more powerful mathematicians. Currently, using interactive theorem provers doesn’t save any work but multiplies the work needed to write a page of LaTeX math by a factor of 40 or so. Some years ago I tried to get funding for a large project called FMathL, see https://www.mat.univie.ac.at/~neum/FMathL.html – aimed at doing these things in a way suitable for mathematicians (rather than computer scientists). But I didn’t succeed in convincing funding agencies – the referees dismissed the project as far too ambitious.
Actually, I think that to some extent a formalisation programme can help with gaps in the literature. If we know about the gap, then we should fill it – and here a computer won’t help. But often we do not know about the gap, and often that occurs in precisely a proof like CFSG where there are many cases to check, and it is easy either to make a mistake in one of these or simply not to have an exhaustive list of cases. Here a formalisation programme will highlight the gap, then we can fill it.
Of course, right now formalisation is so time-intensive that it is only going to be done for hugely important results which had better be true (like CFSG). But part of this is because we keep having to add more basics to whichever system we happen to favour; I think the idea is that when CFSG is finally formalised, it will be much easier to do other group / representation theory formally, because most of the stuff one needs will already be in the system.
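As a small illustration of that last point (a hypothetical example, relying only on lemmas already in mathlib): once the basic theory is in the library, a standard identity costs one line rather than a fresh derivation.

```lean
import algebra.group.basic

-- Because mathlib already knows basic group theory, this standard
-- identity is a one-line appeal to an existing lemma.
example (G : Type*) [group G] (a b : G) : (a * b)⁻¹ = b⁻¹ * a⁻¹ :=
mul_inv_rev a b
```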
I’ve done this a bit, and I think it’s a wonderful idea. The fundamental pedagogical problem that computer-checked proofs solve is that they make the feedback cycle much shorter.
As it stands, in the usual approach to teaching undergraduates how to prove things, we assign them a problem set, they go off and do the work and turn it in a week or two later, and then they get marked-up work a week or two after that. This is more than enough time for them to forget all the details of how they were thinking about the problem, and so they struggle with interpreting the feedback we struggled to write.
With a proof assistant, they get told their proof is wrong the second they make a mistake, at the exact proof step where they messed up. As a result, the mechanics of proof — quantifiers, case splits, hypotheses and so on — can be taught and learned vastly more easily than in the traditional way. (You can — and I did! — even make the input format to the prover a structured format which is a subset of basic mathematical English.)
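A minimal sketch of this feedback loop, in Lean (my own course used a different input format, but the mechanism is the same):

```lean
import tactic

-- Each step either advances the goal or is flagged on the spot:
-- delete the `apply hPQ` line and Lean reports an error right there.
example (P Q : Prop) (hP : P) (hPQ : P → Q) : Q :=
begin
  apply hPQ,  -- the goal Q reduces to proving P
  exact hP,   -- which the hypothesis hP supplies
end
```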
The machine can’t learn the concepts for the students, of course, but obviously it’s much easier to understand mathematical arguments once you can read and write proofs fluently — and this is where proof assistants shine.
For an example of an interesting and recent use of a proof assistant in pure mathematics, see the preprint arXiv:1810.04579 by S. Gouëzel and V. Shchur.
I would like to hazard the claim that, if we compare students who have learnt to construct mathematical proofs in the “old-fashioned” way to students who submit their proofs to a proof assistant, then the two groups are not actually learning the same thing.
And I think this should actually be more obvious than it appears to be to most people: since the notion of a proof assistant (together with the underlying (meta-)mathematical theories) “formalizes” the notion of a mathematical proof, the two notions are in fact different, the one being formal, the other not. By seeking to “formalize” a concept, one changes the concept itself.
The traditional mathematical notion of proof is an “intuitive” notion, in that one develops a “feeling” for what is and what is not a proof. One could reduce it to a sociological phenomenon (“a proof is what mathematicians accept as such”). However, even such a move sidesteps the question of what a proof is, by instead answering the question of what a certain class of people think it is. By contrast, the notion of proof in the formalized setting is well-defined and formalistic (“a proof is what the following Turing machine will accept as such”).
I think that to equate the two concepts is to make a category mistake. To formalize a certain notion, whatever it is, is to implicitly admit that this notion refers to a definite concept or entity. Otherwise one would not even have a criterion for a good vs. a bad formalization. If the formalization is all there really is, any formalization is as good as any other. To put it more concretely, the relationship in which “formalized proof” stands to “proof” is the same relationship in which “painting” stands to “model”.
It is a belief of mine that there is a certain tendency in human beings (mathematicians especially) to wish to deny as much as possible the existence of the intuitive and the non-formal, and to replace intuitive concepts by formalized notions, so that they can afterwards say that the formalized notion is what we were trying to say all along. But in fact there is a very big difference, and it comes out especially in teaching. Students often want (or need) to know the “point” of a certain formalization, and rightly get bored by purely formalistic work without any underlying motivation.
Which is not to say that I do not approve of the idea of using proof assistants in teaching mathematics. But I do see the danger of creating a generation of students who learn to prove theorems as just another kind of game, without really understanding what they are doing or why they are doing it. I think there is more than enough of that as it is.
@Past-life mathematician: in my course, the students have the *option* to formalise their solutions. I think there is definitely something in what you say, but I do not think (and indeed, I do not think you’re saying) that showing them that this *option* exists is somehow cheating them, or telling them that mathematics is something which it is not. It’s analogous to showing them that they have the option to write their problem sheet solutions in LaTeX, rather than pen and paper. I believe that showing them this other way of thinking about the ideas I’m teaching them is broadening their minds. I’d like to think that as a teacher, this is part of my job.
Dear Kevin: I wasn’t criticizing you or your program. I have read the statement that Peter linked to, where you explain your reasons for involving yourself in this type of research, and I agree with a lot of what you write, especially your worries about a future loss of knowledge and the fact that there is so much “folklore knowledge” that hasn’t been written down anywhere, or, if it has, is contained in letters from so-and-so to someone else. I’ve encountered my fair share of this while in academia, and I’ve always found it indefensible. I mean the practice of using private letters as a bibliographical reference in journal articles, especially when the reference is invoked in a proof. It runs completely counter to what I think should be the academic spirit: self-critical, transparent, maximally thorough. I think your program can only contribute towards that ideal, and I love that you are sticking up for it.
I was mainly responding to Neel Krishnaswami’s seemingly blanket endorsement of basing the curriculum on proof assistants. I think if you do that, you really lose something essential, as I’ve tried to argue.
Two A_R’s on the same line on page 527 of your corrected pdf, Peter. (Para right above section 48.3.)
Past-life math (and Kevin Buzzard),
I think the issue is more what the human activity of mathematics “is”, than what-is-proof (which has been clear in principle since Euclid). When serious mathematical communication starts to involve executable components by default, there will be an irresistible advantage to having definitions and proofs be machine readable, like engineers exchanging parametric CAD specification instead of diagrams or blueprints.
For now, we are still in the age of mathematical communication as a form of storytelling, and in that setting (particularly teaching) the discipline of extra formality for its own sake has benefits that are mostly orthogonal to the usual purposes of the communication. Unless the pedagogical added value of that discipline turns out to be very large for some reason, I think it will remain unpopular until mathematical communication itself is permeated by add-ons (e.g., functions, graphs, diagrams, computer programs, tablebase searches, etc. accompanying almost every paragraph), and then the ability to generate such “animations” automatically from formalized definitions and proofs will create the demand for baked-in formalization. It’s hard to see it being necessary for teaching today’s style of mathematics.
chethan krishnan,
Thanks! Fixed.