In recent years I’ve been struggling with depressive thoughts whenever I think about what’s been going on in the field of fundamental theoretical physics research. As an example of what I find depressing, today I learned that the Harvard Physics department has not only a Harvard Swampland Initiative, but also a Gravity, Space-Time, and Particle Physics (GRASP) Initiative, which this week is hosting a conference celebrating 25 years of Randall-Sundrum. Things at my alma mater are very different from what they were during my student years, which lacked “Initiatives” but featured Glashow, Weinberg, Coleman, Witten and many others doing amazing things.
For those too young to remember, Randall-Sundrum refers to warped extra dimension models that were heavily overhyped around the end of the last millennium. These led to ridiculous things like NYT stories about how Physicists Finally Find a Way To Test Superstring Theory, as well as concerns that the LHC was going to destroy the universe by producing black holes. Hearing this nonsense 25 years ago was really annoying. I had assumed that it was long dead, but no, zombie theoretical physics ideas, it seems, are all the rage, at Harvard and elsewhere.
One consolation of recent years has been that I figured things really couldn’t get much worse. Today, though, I realized that such thoughts were highly naive. A few days ago Steve Hsu announced that Physics Letters B has published an article based on original work by GPT5 (arXiv version here). Jonathan Oppenheim took a look and after a while realized the paper was nonsense (explained here). He writes:
The rate of progress is astounding. About a year ago, AI couldn’t count how many R’s in strawberry, and now it’s contributing incorrect ideas to published physics papers. It is actually incredibly exciting, to see the pace of development. But for now the uptick in the volume of papers is noticeable, and getting louder, and we’re going to be wading through a lot of slop in the near term. Papers that pass peer review because they look technically correct. Results that look impressive because the formalism is sophisticated. The signal-to-noise ratio in science is going to get a lot worse before it gets better.
The history of the internet is worth remembering: we were promised wisdom and universal access to knowledge, and we got some of that, but we also got conspiracy theories and misinformation at unprecedented scale.
AI will surely do exactly this to science. It will accelerate the best researchers but also amplify the worst tendencies. It will generate insight and bullshit in roughly equal measure.
Welcome to the era of science slop!
Given the sad state of affairs in this field before automated science slop generation came along, I think Oppenheim is being far too optimistic. There is currently no mechanism in this area to recognize and suppress bullshit, and there are strong pressures to produce more of it. I hope that I’m wrong, but I fear we’re about to be inundated with a tsunami of slop which will bury the field completely.

