Fourier Analysis Notes

This semester I’ve been teaching a course on Fourier Analysis, which has, like just about everything, been seriously disrupted by the COVID-19 situation. Several class sessions have been canceled, and future ones are supposed to resume online next week. To improve matters a bit, I’ve been writing up lecture notes for the material since in-person lectures were canceled, and we’ll see how long I have the energy to keep this up.

The website for the course is here, giving detailed information about what it covers. In terms of level of mathematical rigor, the idea is to use the course as an opportunity to give students some motivation for a conventional real analysis course. The only prerequisite is our usual Calculus sequence, which is not proof-based. Students are expected to try to follow the proofs given in the book and in class, but are not expected to be very good at coming up with proofs of their own; the assignments are mostly computational. The textbook (Stein-Shakarchi) is based on a Princeton course with a somewhat similar philosophy of providing an introduction to analysis, but it is very challenging for the students to follow. I’ve looked around, but not found a better alternative. Other books on the subject tend to be either books for mathematics students that are even more abstract and challenging, or books for engineers that focus on either signal analysis or PDEs. Since the math department already has a PDE class, I want to emphasize other things you can do with the subject.

The first set of lecture notes I wrote up was only loosely connected to Fourier analysis, through the Poisson summation formula. It dealt with theta functions and the zeta function, giving the standard proof of the functional equation for the zeta function that uses Poisson summation. I confess that one reason for covering this material is that I’ve always been fascinated by the connection between theta functions, quantization, representation theory (through the Heisenberg and metaplectic groups), and number theory. This subject contains a wealth of ideas that bring together fundamental physics and deep mathematics. On the mathematics side, this story was generalized by Tate in his thesis, where he developed what is essentially the GL(1) case of the modern theory of automorphic forms that underpins the Langlands program. On the physics side, one can think of what is going on as the standard canonical quantization of a finite-dimensional phase space, but with a lattice in the phase space giving a discrete subgroup of the usual Heisenberg group, and lots of new structure. For the details of this, one place to look is volume III of Mumford’s books on theta functions.
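For readers who haven’t seen the argument, the chain of identities behind the notes is quick to state (this is just the standard textbook sketch, not anything specific to my write-up). Poisson summation for a Schwartz function $f$ says
$$\sum_{n\in\mathbb{Z}} f(n) \;=\; \sum_{n\in\mathbb{Z}} \hat{f}(n), \qquad \hat{f}(\xi)=\int_{-\infty}^{\infty} f(x)\,e^{-2\pi i x\xi}\,dx.$$
Applying this to the Gaussian $f(x)=e^{-\pi t x^2}$, whose Fourier transform is $t^{-1/2}e^{-\pi\xi^2/t}$, gives the transformation law for the theta function,
$$\theta(t):=\sum_{n\in\mathbb{Z}} e^{-\pi n^2 t}=\frac{1}{\sqrt{t}}\,\theta(1/t), \qquad t>0,$$
and writing $\pi^{-s/2}\Gamma(s/2)\zeta(s)$ as a Mellin transform of $(\theta(t)-1)/2$ then yields Riemann’s functional equation $\xi(s)=\xi(1-s)$.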

The second set of lecture notes, which I’ve just started on, are intended as an introduction to the theory of distributions, a topic that isn’t in Stein-Shakarchi. I highly recommend the book by Strichartz referred to in the notes for more details, with the notes maybe best used just as an introduction to that book.

I don’t want to turn this blog into yet one more place for discussion of the COVID-19 situation that just about all of us are obsessed with at the moment. If you’re interested in my personal experience, I’m doing fine. Almost all of us in New York are now pretty much confined to our apartments (it helps that the weather outside today is terrible), other than for short ventures out to get food or some exercise. I’d like to optimistically think that New York started taking action to stop the virus spread early enough to avoid disaster at the local hospitals. The best place I’ve seen to try and follow what is happening there is this web page. We’ll see in the next couple days if the problem has started to peak, or has much further to go.

I’m in much better shape than most people, having left town early for a spring break vacation. I had been planning a trip to Paris, but at the very last minute instead rented a car and started out on a road trip in the general direction of New Orleans, consulting coronavirus report maps to decide where to avoid. I ended up in Memphis and then the Mississippi Delta (it had become clear New Orleans was a bad idea) before finally deciding that the situation was getting serious everywhere and it was time to head home. So I got back here to New York in relatively good mental shape for the confinement to come. Good luck to all of us in dealing with the coming challenges…

Update: The semester is now over, and I’ve put together all the notes I wrote up in one document, available here. This fixes mistakes/typos/etc. in earlier versions of the notes, so I’m changing the links to point to the final version.

Posted in Uncategorized | 9 Comments

This Week’s Hype

In this disturbing time of pandemic, it’s reassuring to see that some activities continue as usual. On the string theory hype front, yesterday NASA put out a press release announcing that Chandra Data Tests ‘Theory of Everything’, which starts by explaining that:

Despite having many different versions of string theory circulating throughout the physics community for decades, there have been very few experimental tests. Astronomers using NASA’s Chandra X-ray Observatory, however, have now made a significant step forward in this area.

This is based on a paper announcing limits on axions based on data from the Chandra X-ray telescope, which starts off with the dubious claim that axions “are generic within String Theory”. It seems to be very hard to get some people to understand that the number of “tests of string theory” is not “very few” but zero, for the simple reason that there are no predictions of string theory, generic or otherwise.

As usual, this kind of thing gets picked up by other news sources. In a sign of the times, the spin given to the bogus “test” is now often negative for string theory: This Galaxy Cluster May Have Just Dealt a Major Blow to String Theory.

Update: This is getting attention at The Daily Galaxy, under the headline "‘Mind of God?’ – The Detection of ‘String-Theory’ Particles Would Change Physics Forever".

For more on religion and string theory, there’s a new (actually from 2017) podcast featuring IAS theorist Tom Rudelius, entitled The Multiverse, the Polygraph, and the Resurrection. In an older podcast at Purpose Nation, Rudelius tells us this about the views of Nima Arkani-Hamed:

To quote preeminent theorist Nima Arkani-Hamed, who is certainly no theist: “The multiverse isn’t a theory. It’s a cartoon, right, it’s like this cartoon picture of something that we might think might be going on but we really don’t have any solid theory of how it would work.”

It seems that Arkani-Hamed shares my views on this.

Posted in This Week's Hype | 3 Comments

Penrose at The Portal

Since last summer Eric Weinstein has been running a podcast entitled The Portal, featuring a wide range of unusual and provocative discussions. A couple have had a physics theme, including one with Garrett Lisi back in December.

One that I found completely fascinating was a recent interview with Roger Penrose. Penrose of course is one of the great figures of theoretical physics, and someone whose work has not followed fashion but exhibited a striking degree of originality. He and his work have often been a topic of interest on this blog: for one example, see a review of his book Fashion, Faith and Fantasy.

Over the years I’ve spent a lot of time thinking about Penrose’s twistors, becoming more and more convinced that they provide just the radical new perspective on space-time geometry and quantization that is needed for further progress on fundamental theory. For a long time now, string theorists have been claiming that “space-time is doomed”, and the recent “it from qubit” bandwagon is also based on the idea that space-time needs to be replaced by something else, something deeply quantum mechanical. Twistors have played an important role in recent work on amplitudes; for more about this, a good source is a 2011 Arkani-Hamed talk at Penrose’s 80th birthday conference.

One of my own motivations for the conviction that twistors are part of what is needed is the “this math is just too beautiful not to be true” kind of argument that these days many disapprove of. There are many places one can read about twistors and the mathematics that underlies them. One that I can especially recommend is the book Twistor Geometry and Field Theory, by Ward and Wells. A one sentence summary of the fundamental idea would be

A point in space-time is a complex two-plane in complex four-dimensional (twistor) space, and this complex two-plane is the fiber of the spinor bundle at the point.

In more detail, the Grassmannian G(2,4) of complex two-planes in $\mathbf{C}^4$ is compactified and complexified Minkowski space, with the spinor bundle the tautological bundle. So, more fundamental than space-time is the twistor space T=$\mathbf{C}^4$. Choosing a Hermitian form $\Omega$ of signature (2,2) on this space, compactified Minkowski space is the set of two-planes in T on which the form vanishes. The conformal group is then the group SU(2,2) of transformations of T preserving $\Omega$, and this setup is ideal for handling conformally invariant theories. Instead of working directly with T, it is often convenient to mod out by the action of the complex scalars and work with $PT=\mathbf{CP}^3$. A point in complexified, compactified space-time is then a $\mathbf{CP}^1 \subset \mathbf{CP}^3$, with the real Minkowski (compactified) points corresponding to $\mathbf{CP}^1$s that lie in a five-dimensional hypersurface $PN \subset PT$ where $\Omega=0$.
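In standard two-spinor index notation (a sketch roughly following Ward-Wells; signs, factors of $i$ and index conventions vary between references), the correspondence is the incidence relation
$$\omega^{A} = i\,x^{AA'}\pi_{A'},$$
where a twistor is written $Z=(\omega^A,\pi_{A'})$ and $x^{AA'}$ is the space-time point in two-spinor form. For fixed $x$ the solutions $Z$ form a two-plane in $T=\mathbf{C}^4$ (a point of G(2,4)), parametrized linearly by $\pi_{A'}$, and this two-plane projects to a line $\mathbf{CP}^1\subset\mathbf{CP}^3$. Taking the Hermitian form to be
$$\Omega(Z)=\omega^{A}\bar{\pi}_{A}+\bar{\omega}^{A'}\pi_{A'},$$
one checks that $\Omega$ vanishes on the whole two-plane exactly when $x^{AA'}$ is Hermitian, i.e. a real Minkowski point, which is the statement about $PN$ above.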

On the podcast, Penrose describes the motivation behind his discovery of twistors, and the history of exactly how this discovery came about. He was a visitor in 1963 at the University of Texas in Austin, with an office next door to Engelbert Schucking, who among other things had explained to him the importance in quantum theory of the positive/negative energy decomposition of the space of solutions to field equations. After the Kennedy assassination, he and others made a plan to get together with colleagues from Dallas, taking a trip to San Antonio and the coast. Penrose was being driven back from San Antonio to Austin by Istvan Ozsvath (father of Peter Ozsvath, ex-colleague here at Columbia), and it turned out that Istvan was not at all talkative. This gave Penrose time alone to think, and it was during this trip he had the crucial idea. For details of this, listen to what Penrose has to say starting at about 47 minutes before the end of the podcast. For a written version of the same story, see Penrose’s article Some Remarks on Twistor Theory, which was a contribution to a volume of essays in honor of Schucking.

Posted in Uncategorized | 16 Comments

This Week’s Hype

Sabine Hossenfelder already has this covered, but I wanted to add a few comments about this week’s hype, a new article in Quanta magazine by Philip Ball entitled Wormholes Reveal a Way to Manipulate Black Hole Information in the Lab (based on this paper). It’s the latest in a long tradition of bogus claims that studying relatively simple quantum systems is equivalent to studying string theory/quantum gravity. For an example from ten years ago, see here. The nonsensical idea back then (which got a lot of attention) was that somehow studying four qubits would “test string theory”.

A first comment would be that this is just profoundly depressing, because Ball is one of the best and most sensible science writers around (see my review of his excellent recent book on quantum mechanics) and Quanta magazine is about the best semi-popular science publication there is. If this article were appearing in any one of the well-known publications that traffic in misleading sensationalism, it wouldn’t be surprising and would best be just ignored.

Hossenfelder has pointed out one problem with the whole idea (we don’t live in AdS space), but a more basic problem is the obvious one pointed out by one of the first commenters at Quanta:

In the end, if an experiment is performed based on standard quantum mechanics, and verifies standard quantum mechanics as expected, then it is irrelevant that this aspect of standard quantum mechanics might be analogous to a vaguely-formulated and incomplete speculative idea about spacetime emergence — nor can it provide any experimental support whatsoever for that idea.

I understand that, for science journalists hearing that a large group of well-known physicists from Google, Stanford, Caltech, Princeton, Maryland and Amsterdam has figured out how to study quantum gravity in the lab (by teleporting things from one place to another via traversable wormholes!!), it’s almost impossible to resist the idea that this is something worth writing about. Please try.

Update: Philip Ball responds here.

Update: More from Philip Ball (and, if it appears, a response from me) at the Quanta article comment section; one of the paper’s authors also comments here.

Update: Commenter Anonyrat points out that the Atlantic is republishing this piece, as A Tiny, Lab-Size Wormhole Could Shatter Our Sense of Reality: How scientists plan to set up two black holes and a wormhole on an ordinary tabletop.

Update: In the future, I hope to as much as possible outsource coverage of this kind of thing to the Quantum Bullshit Detector. Today, see for instance this.

Posted in This Week's Hype | 32 Comments

Why String Theory Is Both A Dream And A Nightmare (as well as a swamp…)

Ethan Siegel today has a new article at Starts With a Bang, entitled Why String Theory is Both a Dream and a Nightmare. For the nightmare part, he writes:

its predictions are all over the map, untestable in practice, and require an enormous set of assumptions that are unsupported by an iota of scientific evidence.

which I think just confuses things. The situation could be much more accurately and simply described as “there are no predictions”. The fundamental reason for this is also rather simply stated: the supposed unified theory is a theory in ten space-time dimensions, and no one has figured out a way to use it to get a consistent, predictive model with four space-time dimensions. If you don’t believe this, try watching the talks going on in Santa Barbara this week, which feature, after 17 years of intense effort, complete confusion about whether it is possible to construct such models with the right sign of the cosmological constant.

Siegel gets a couple of things completely wrong, although this is not really his fault, given the high degree of complexity and mystification that surrounds the 35 years of failed effort in this area. About SUSY he writes

For one, string theory doesn’t simply contain the Standard Model as its low-energy limit, but a gauge theory known as N=4 supersymmetric Yang-Mills theory. Typically, the supersymmetry you hear about involves superpartner particles for every particle in existence in the Standard Model, which is an example of an N=1 supersymmetry. String theory, even in the low-energy limit, demands a much greater degree of symmetry than even this, which means that a low-energy prediction of superpartners should arise. The fact that we have discovered exactly 0 supersymmetric particles, even at LHC energies, is an enormous disappointment for string theory.

Like everything else, there’s no prediction from string theory about how many supersymmetries will exist. The special role of N=4 supersymmetric Yang-Mills theory has nothing to do with the problem of low energy SUSY, instead it occurs as the supposed dual to a very special 10d superstring background (AdS5 x S5). This is of interest for completely different reasons, one of which was the hope that this would provide a string theory dual to QCD, allowing use of string theory not to do quantum gravity, but to do QCD computations. This has never worked, with one main reason being that it can’t reproduce the asymptotic freedom property of QCD. Siegel tries to refer to this with

And when you look at the explicit predictions that have come out for the masses of the mesons that have been already discovered, by using lattice techniques, they differ from observations by amounts that would be a dealbreaker for any other theory.

including a table with the caption

The actual masses of a number of observed mesons and quantum states, at left, compared with a variety of predictions for those masses using lattice techniques in the context of string theory. The mismatch between observations and calculations is an enormous challenge for string theorists to account for.

He’s getting this from slide 31 of a talk by Jeff Harvey, but mixing various things up. The table has nothing to do with lattice calculations; those are relevant to the other part of the slide, which is about string theory predictions for glueballs in pure (no fermions) QCD. Those are not physical objects, hence the comparison to lattice computer simulations rather than to experiment. The table he gives is from here and is about real particles. The “predictions” are not made, as he claims, “using lattice techniques in the context of string theory.” There are no lattice techniques involved.

Normally Siegel does a good job of navigating complex technical subjects. The subject of string theory is now buried in a huge literature of tens of thousands of papers over forty years with all sorts of claims, many designed to obscure the fact that ideas haven’t worked out. It’s fitting that the name chosen for the kind of discussions going on at Santa Barbara this week is “The String Swampland”. String theory verily is now deep in a trackless swamp…

Posted in Swampland, Uncategorized | 6 Comments

Robert Hermann 1931-2020

I was sorry to hear today of the recent death of Robert Hermann, at the age of 88. While I unfortunately never got to meet him, his writing had a lot of influence on me, as it likely did for many others with an overlapping interest in mathematics and fundamental physics. Early in my undergraduate years during the mid-1970s I first ran across some of Hermann’s books in the library, and found them full of fascinating and deep insights into the relations between geometry and physics. Over the years I’ve often come back to them and learned something new about one or another topic. The main problem with his writings is just that there is so much there that it is hard to know where to start.

While the relations between Riemannian geometry and general relativity were well-understood from Einstein’s work in the beginning of the subject, the relations between geometry and Yang-Mills theory were not known by Yang, Mills or other physicists working on the subject during the 1950s and 1960s. The understanding of these relations is conventionally described as starting in 1975, with the BPST instanton solutions and Simons explaining to Yang at Stony Brook about fiber bundles (leading to the “Wu-Yang dictionary” paper). But if you look at Hermann’s 1970 volume Vector Bundles in Mathematical Physics, you’ll find that it contains an extensive treatment of Yang-Mills theory in terms of connections and curvature in a vector bundle. While I don’t know if Hermann had written about the sort of topologically non-trivial gauge field configurations that got attention starting in 1975, he had at that point for a decade been writing in depth about the details of the relations between geometry and physics that were news to physicists in 1975.
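The dictionary involved is short to state (a standard summary of the correspondence, not a quote from Hermann’s book, with the usual caveat that sign and normalization conventions differ between the math and physics literature): the gauge potential $A_\mu$ is a connection on a vector (or principal) bundle, gauge transformations are changes of local trivialization acting by $A_\mu \mapsto g A_\mu g^{-1} - (\partial_\mu g)g^{-1}$, and the Yang-Mills field strength is the curvature of the connection,
$$F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu + [A_\mu, A_\nu], \qquad F = dA + A\wedge A.$$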

Being ahead of your time and mainly writing expository books is unfortunately not necessarily good for a successful academic career. Looking through his writings this afternoon, I ran across a long section of this book from 1980, entitled “Reflections” (pages 1-82). I strongly recommend reading this for Hermann’s own take on his career and the problems faced by anyone trying to do what he was doing (the situation has not improved since then).

A general outline of his early career, drawn from that source is:

1948-50: undergraduate in physics, University of Wisconsin.
1950-52: undergraduate in math, Brown University.
1952-53: Fulbright scholar in Amsterdam.
1953-56: graduate student in math, Princeton. Thesis advisor Don Spencer.
1956-59: instructor at Harvard (“Harvard hired me as an instructor in the mistaken belief that I must be a topologist since I came from Princeton”).
1953-59: “My real work from 1953-59 was studying Elie Cartan!”
1959-61: position at MIT Lincoln Lab, taught course at Berkeley in 1961.

Hermann ultimately ended up at Rutgers, which he left in 1973, because he was not able to teach courses there in his specialty, and felt he had too little time to conduct the research he wanted to work on. It appears he expected to get by with some mix of grant money and profits from running a small publishing operation (Math Sci Press, which mainly published his own books). The “Reflections” section of the book mentioned above also contains some of his correspondence with the NSF, expressing his frustration at his grant proposals being turned down. At the end of a letter from late 1977 (which was at the height of excitement in the physics community over applying ideas from geometry and topology to high energy physics) he writes in frustration:

However, when I look in the Physical Review today, all the subjects which people in your position so enthusiastically supported ten years ago are now dead as the Phlogiston theory – and good riddance – while the topics I was working on then are now everywhere dense. Does one get support from the NSF by being right or by being popular?

John Baez has written something here, and there’s an obituary notice here.

Update: I’ve been reading some more of the essays Hermann published in the “Reflections” section of this book. Especially recommended is the section on Mathematical Physics of this 1979 essay (pages 30-38). His evaluation of the situation of the time I think was extremely perceptive.

Update: For more about Hermann, see some of the comments at this old blog posting. Also, on the topic of his book reviews, see this enthusiastic review of the Flanders book Differential forms with applications to the physical sciences.

Update: For an interesting review covering many of Hermann’s books, at the Bulletin of the AMS in 1973, see here.

Posted in Obituaries | 11 Comments

Various

  • A few months ago I ended up doing a little history of science research, trying to track down the details of the story of the Physical Review’s 1973 policy discouraging articles on “Foundations”. The results of that research are in this posting, where I found this explanation from the Physical Review editor (Goudsmit) of the problem they were trying to deal with:

    The event [referring to a difficult refereeing problem] shows again clearly the necessity of rapid rejections of questionable papers in vague borderline areas. There is a class of long theoretical papers which deal with problems of interpretation of quantum and relativistic phenomena. Most of them are terribly boring and belong to the category of which Pauli said, “It is not even wrong”. Many of them are wrong. A few of the wrong ones turn out to be valuable and interesting because they throw a brighter light on the correct understanding of the problem. I have earlier expressed my strong opinion that most of these papers don’t belong in the Physical Review but in journals specializing in the philosophy and fundamental concepts of physics.

    I had heard that people studying foundations of quantum mechanics, frustrated by this policy, had started up during the 1970s their own samizdat publication, called “Epistemological Letters”. I tried to see if there was any way to read the articles that appeared in that form, but it looked like the only way to do this would be to go visit one or two archives that might have some copies. Unbeknownst to me, around the same time Notre Dame University had just finished a project of scanning all issues of Epistemological Letters and putting them online. They are now available here, with an article about them here and an introductory essay here.

  • There’s an interesting essay on the arXiv about the current state of BSM physics, by HEP theorist Goran Senjanović, entitled Natural Philosophy versus Philosophy of Naturalness.
  • Here’s an article about problems string theorist Amer Iqbal has been having in Pakistan.
  • The New York Times has an article about Cedric Villani and his campaign for mayor of Paris. The election is next month, and I’m having a hard time figuring out why Villani is running. There doesn’t seem to be much difference in policy views between the current mayor (Hidalgo) and the Macronistas (Griveaux and Villani), so the main effect of Villani entering the race is to split the Macron party vote.
  • I was sorry to hear recently about the death of mathematician Louis Nirenberg. Kenneth Chang at the New York Times has written an excellent obituary. Terry Tao has some comments here.

Update: Excellent rant on Twitter from Philip Ball about misrepresentations of the Copenhagen interpretation. For your own rants, please engage in them on Twitter rather than here.

Posted in Uncategorized | 20 Comments

London Calling with Career Opportunities II

If you’re a mathematician, you don’t need to go work for Dominic Cummings in order to have dramatically improved career opportunities in the UK. The British government has just announced a huge increase in funding for mathematical research: 60 million pounds/year (about \$80 million) for the next five years (see here and here). To get some idea of the scale of this, note that the US GDP is about 8 times the UK’s, and the NSF DMS budget is about \$240 million/year. So the comparable scale of this funding in the US would be about two and a half times the NSF budget for mathematics.
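Spelled out, the back-of-the-envelope scaling is just
$$\frac{\$80\ \text{million/year} \times 8}{\$240\ \text{million/year}} \approx 2.7,$$
i.e. very roughly two and a half NSF DMS budgets (taking the quoted exchange rate and GDP ratio at face value).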

Many of my mathematician colleagues have sometimes seemed to me to be of the opinion that a huge increase in funding for math research is the best way to improve a society. We’ll see if this works for Britain.

While the new UK government ran on a nativist platform of restricting immigration, with the goal of keeping outsiders from taking bread out of the mouths of UK citizens, this doesn’t apply to mathematicians: all limits are off and we’re encouraged to flood the country. The law will be changed on Friday, with the changes going into effect Feb. 20. This will include an “accelerated path to settlement”, no need to even have a job offer, and all your “dependents [will] have full access to the labour market”, with apparently no concern about them taking the bread out of the mouths of the locals.

Update: More here (except it’s mostly behind a paywall, but evidently Ivan Fesenko is quoted).

Posted in Uncategorized | 25 Comments

This and That

  • I was sorry to hear of the death a few months ago of Tony Smith, who had been a frequent commenter on this blog and others. Unfortunately my interactions with him mainly involved trying to discourage him from hijacking the discussion in some other (often unusual) direction. Geoffrey Dixon did get to know him well, and has written a wonderful long blog entry about Tony, which I highly recommend (Dixon’s newish blog also has other things you might find interesting).
  • On the Jim Simons front, the Simons Foundation has put together something to celebrate their 25th anniversary. It explains a lot about their history and what they are doing now, as well as giving some indication of plans for the future. On these topics, read pieces written by Jim Simons and Marilyn Simons. The Foundation has been in a high growth mode, having an increasingly large impact on math and physics research. Their main statement about the future is that the plan is for this to go on for a very long time:

    According to its bylaws, the Simons Foundation is intended to focus almost entirely on research in mathematics and science and to exist in perpetuity. If future leadership abides by these guiding principles, Marilyn and I believe the foundation will forever be a force for good in our society.

    My impression is that the Simons children have their own interests, and foundations with other goals to run.

    News from the \$75 billion source of the money (RenTech) today is that Simons is increasingly turning over control of it to his son Nathaniel, who has been named co-chairman. He has also added new directors to the board: four senior Renaissance executives and his son-in-law Mark Heising.

  • There are various IAS-related videos you might want to take a look at:

    Pierre Deligne explaining motives last night.

    Michael Douglas on the use of computers in mathematics.

    A Dutch documentary (not all of it is in Dutch…).

  • If you aren’t regularly reading Scott Aaronson’s blog, you really should be. Latest entries are a detailed report from Davos and a guest post with a compelling argument about a major factor behind why women leave STEM careers at higher rates than men.
  • For the latest on the “It from Qubit” business, see talks at a KITP conference. John Preskill notes “lingering confusion over what it all means”, which makes me glad to hear that I’m not the only one…
Posted in Uncategorized | 7 Comments

Why the foundations of physics have not progressed for 40 years

Sabine Hossenfelder has a new piece out, making many of the same arguments she has been making for a while about the state of fundamental theory in physics. These have a lot in common with arguments that Lee Smolin and I were making in our books published back in 2006. The underlying problem is that the way theorists successfully worked up until the seventies is no longer viable, with the Standard Model working too well, up to the highest energies probed:

The major cause of this stagnation is that physics has changed, but physicists have not changed their methods. As physics has progressed, the foundations have become increasingly harder to probe by experiment. Technological advances have not kept size and expenses manageable. This is why, in physics today, we have collaborations of thousands of people operating machines that cost billions of dollars.

With fewer experiments, serendipitous discoveries become increasingly unlikely. And lacking those discoveries, the technological progress that would be needed to keep experiments economically viable never materializes. It’s a vicious cycle: Costly experiments result in lack of progress. Lack of progress increases the costs of further experiment. This cycle must eventually lead into a dead end when experiments become simply too expensive to remain affordable. A $40 billion particle collider is such a dead end.

I have a somewhat different view about a potential next collider (see here), but agree that the basic question is whether it will be “too expensive to remain affordable.”

What has happened over the last forty years is that the way HEP theory is done has become dysfunctional, in a way that Hossenfelder characterizes as follows:

Instead of examining the way that they propose hypotheses and revising their methods, theoretical physicists have developed a habit of putting forward entirely baseless speculations. Over and over again I have heard them justifying their mindless production of mathematical fiction as “healthy speculation” – entirely ignoring that this type of speculation has demonstrably not worked for decades and continues to not work. There is nothing healthy about this. It’s sick science. And, embarrassingly enough, that’s plain to see for everyone who does not work in the field.

This behavior is based on the hopelessly naïve, not to mention ill-informed, belief that science always progresses somehow, and that sooner or later certainly someone will stumble over something interesting. But even if that happened – even if someone found a piece of the puzzle – at this point we wouldn’t notice, because today any drop of genuine theoretical progress would drown in an ocean of “healthy speculation”…

Why don’t physicists have a hard look at their history and learn from their failure? Because the existing scientific system does not encourage learning. Physicists today can happily make career by writing papers about things no one has ever observed, and never will observe. This continues to go on because there is nothing and no one that can stop it.

This story brings up a lot of complex issues in the philosophy and sociology of science, but to me there’s one aspect of the problem that is relatively simple and deserves a lot more attention than it gets: how do you get theorists to abandon failed ideas and move on to try something else?

The negative LHC results about SUSY have had some effect, but even in this case it’s remarkable how many theorists won’t abandon the failed idea of a SUSY extension of the Standard Model. This was always a highly dubious idea, explaining nothing about the Standard Model and adding a huge number of new degrees of freedom and more than a hundred new undetermined parameters. Not seeing anything at the LHC should have put the final nail in the coffin of that idea. Instead, I see that this past fall MIT was still training its graduate students with a course on Supersymmetric Quantum Field Theories. You can try and argue that SUSY and supergravity theories are worth studying even if they have nothing to do with physics at observable energies, but it is a fact that these are extremely complicated QFTs to work with and have explained nothing. Why encourage grad students to devote the many, many hours it takes to understand the details of this subject, instead of encouraging them to learn about something that hasn’t been a huge failure?

The techniques one gets trained in as a graduate student tend to form the basis of one’s understanding of a subject and have a huge influence on one’s future career and the questions one has the expertise needed to work on. Besides SUSY, string theory has been the other major course topic at many institutions, with the best US grad students often spending large amounts of time trying to absorb the material in Polchinski’s two-volume textbook, even though the motivations for this have turned out to also be a huge failure, arguably the largest one in the history of theoretical physics.

To get some idea of what is going on, I took a look at the current and recent course offerings (on BSM theory, not including cosmology) at the five leading (if you believe US News) US HEP theory departments. I may very well be missing some offered courses, but the following gives some insight into what leading US departments are teaching their theory students. Comparing to past years might be interesting; possibly there’s a trend towards abandoning the whole area in favor of other topics (e.g. cosmology, quantum information, condensed matter).

The places not offering string theory courses this year seem to have had them last year.

Update: Something relevant and worth reading that I think I missed when it came out: Jeremy Butterfield’s detailed review of Lost in Math, which has a lot about the question of why theorists are “stuck”.

Update: There’s some serious discussion of this on Twitter. For those who can stand that format, try looking here and here.

Update: Mark Goodsell has a blog posting about all this here, including a defense of teaching the usual SUSY story to graduate students.

Update: A correspondent pointed me to this recent CERN Courier interview with John Ellis. Ellis maintains his increasingly implausible defense of SUSY, but he’s well aware that times have now changed:

People are certainly exploring new theoretical avenues, which is very healthy and, in a way, there is much more freedom for young theorists today than there might have been in the past. Personally, I would be rather reluctant at this time to propose to a PhD student a thesis that was based solely on SUSY – the people who are hiring are quite likely to want them to be not just working on SUSY and maybe even not working on SUSY at all. I would regard that as a bit unfair, but there are always fashions in theoretical physics.

Posted in Uncategorized | 48 Comments