SUSY 2013

The big yearly SUSY conference, SUSY 2013, has been going on in Trieste this past week. From the experimentalists, the news is just stronger limits: no hint of SUSY anywhere in the LHC data. From the theorists, the reaction to this news has been pretty consistent: despite what people say, not a problem.

According to John Ellis, everything is fine, with the MSSM preference for a Higgs below 130 GeV vindicated and successful SUSY predictions for the Higgs couplings (that they should be the same as if there were no SUSY). Ellis says we just need to be patient, and he has CMSSM fits preferring 2 TeV gluinos.

However, if you look at Savas Dimopoulos’s talk, the MSSM gets a grade of D-. He argues that the LHC has shown us that the answer is the Multiverse, and that split SUSY, with its fine-tuning, gets a grade of A. The grade inflation in particle physics is pretty dramatic: you can now get an A without your theory having the slightest bit of experimental evidence.

Nima Arkani-Hamed’s talk was about SUSY in 2033, which in his vision will be pretty much the same as SUSY in 2010. Remember all those things the LHC was supposed to find but didn’t? Well, now the argument is that they’re really there, but we will need a 100 TeV collider to see them. If all goes well, in 2033 such a machine will be under construction, and SUSY 2033 could feature all the SUSY 2010 talks retreaded, with 1 TeV gluinos moved up to 10 TeV.

One of Arkani-Hamed’s slides makes me worry that the LHC results have caused him to begin to lose his marbles. He claims that if one doesn’t see new physics like SUSY at the 100 TeV machine,

this would be 100 times more shocking and dramatic than no nothing but Higgs at the LHC

[I initially misread and misquoted this, my apologies. The claim was not that no BSM at 100 TeV would be 100 times more shocking than no Higgs at the LHC, but that it would be 100 times more shocking than no BSM at the LHC. So, the usual sort of over-the-top exaggeration, but not crazy.]

Even wilder claims came from Gordon Kane in his talk, where we’re told that particle theorists giving “negative” talks because of the LHC results have:

no knowledge of LHC physics, Higgs physics, supersymmetry, phenomenology, etc.

According to Kane, we not only have seen the tip of the iceberg of a unified string/M-theory, but actually have the whole iceberg. The ingredients are all in place for what he sees as a similar experience to the 3 year period in the 1970s when the Standard Model emerged and was experimentally vindicated.

Tommaso Dorigo points out that there was one SUSY 2013 talk that in his humble opinion was a good candidate for the IgNobel, see here (warning, NSFW).

On a more positive note, at the conference production of compactified Calabi-Yaus was finally conclusively demonstrated.

Update: Nathaniel Craig has some recent lectures on The State of Supersymmetry after Run I of the LHC. The emphasis is on examining the consequences of failure of pre-LHC assumptions about SUSY based on simplicity and naturalness. Out of 60 or so pages, only one is devoted to the models favored by Arkani-Hamed and Dimopoulos, and string-theory-based models are not even mentioned.

Posted in Uncategorized | 20 Comments

Two Cultures

There are two workshops going on this week that you can follow on video, getting a good idea of the latest discussions going on at two different ends of the spectrum of particle theory in the US today.

At the KITP in Santa Barbara there’s Black Holes: Complementarity, Fuzz or Fire?. As far as I can tell, what’s being discussed is the black hole information paradox reborn. It all started with Joe Polchinski and others last year arguing that the consensus that AdS/CFT had solved this problem was wrong. See Polchinski’s talk for more of this argument from him.

If thinking about and discussing deep conceptual issues in physics without much in the way of mathematics is your cup of tea, this is for you (and so, I fear, not really for me). As a side benefit you get to argue about science-fiction scenarios of whether or not you’d get incinerated falling into a black hole, while throwing around the latest buzz-words: holography, entanglement, and quantum information. If you like trendy, and you don’t like either deep mathematics or the nuts and bolts of the experimental side of science, it doesn’t get much better than this. One place to follow the latest is John Preskill’s Twitter feed.

Over on the other coast, at the opposite intellectual extreme of the field, LHC phenomenologists are meeting at the Simons Center this week at a SUSY, Exotics and Reaction to Confronting Higgs (SEARCH) workshop. They’re discussing very much those nuts and bolts: the current state of attempts to analyze LHC data for any signs of something other than the Standard Model. Matt Strassler is there, and he is providing summaries of the talks at his blog (see here and here). At this workshop, still no deep mathematics, but extremely serious engagement with experiment. One thing that’s apparent is that this field of phenomenology has become a much more sober business than a few years ago, pre-LHC and pre-no-evidence-for-SUSY. Back then, workshops like this featured enthusiastic presentations about all the wonderful new particles, forces and dimensions the LHC was likely to find, with one of the big problems under discussion being the “LHC inverse problem”: how to disentangle all the complex new physics the LHC would discover. Things have definitely changed.

One anomaly at the SEARCH workshop was Arkani-Hamed’s talk on naturalness, which started off in a promising way, with him saying he would give a talk different from his recent ones, discussing various ideas about solving the naturalness problem (ideas that didn’t work, but might be inspirational). An hour later he was deep into the same generalities and historical analogies about naturalness as in his other talks, headed into 15 minutes of promotion of anthropics and the multiverse. He ended his trademark 90-minute “one-hour” talk with a 15-minute or so discussion of a couple of failed ideas about naturalness, and for these I’ll refer you to Matt here.

Arkani-Hamed and others then went into a panel discussion, with Patrick Meade introducing the panelists as having “different specialties, ranging from what we just heard to actually doing calculations and things like this.”

Update: Scott Aaronson now has a blog posting about the KITP workshop here.

Update: A summary of the situation from John Preskill is here.

Posted in Uncategorized | 70 Comments

Belief in multiverse requires exceptional vision

Tom Siegfried at Science News has a new piece about how Belief in multiverse requires exceptional vision that starts off by accusing critics of multiverse mania of basically being ignoramuses who won’t accept the reality of anything they can’t see with their own eyes, like those in the past who didn’t believe in atoms, or superstrings:

If you can’t see it, it doesn’t exist. That’s an old philosophy, one that many scientists swallowed whole. But as Ziva David of NCIS would say, it’s total salami. After all, you can’t see bacteria and viruses, but they can still kill you.

Yet some scientists still invoke that philosophy to deny the scientific status of all sorts of interesting things. Like the theoretical supertiny loops of energy known as superstrings. Or the superhuge collection of parallel universes known as the multiverse.

It’s the same attitude that led some 19th century scientists and philosophers to deny the existence of atoms.

The problem with the multiverse of course is not that you can’t directly observe it, but that there’s no significant evidence of any kind for it: it’s functioning not as a testable scientific explanation, but as an excuse for the failure of ideas about unification via superstring theory. Siegfried makes this very clear, with his argument specifically aimed at those who deny the existence of “supertiny loops of energy known as superstrings”, putting such a denial in the same category as denying the existence of atoms. Those who deny the existence of superstrings don’t do so because they can’t see them, but because there’s no scientific evidence for them and no testable predictions that would provide any.

Siegfried has been part of the string theory hype industry for a long time now, and was very unhappy with my book, which he attacked in the New York Times (see here) as misguided and flat-out wrong for saying string theory made no predictions. According to him, back in 2006:

…string theory does make predictions — the existence of new supersymmetry particles, for instance, and extra dimensions of space beyond the familiar three of ordinary experience. These predictions are testable: evidence for both could be produced at the Large Hadron Collider, which is scheduled to begin operating next year near Geneva.

We now know how that turned out, but instead of LHC results causing Siegfried to become more skeptical, he’s doubling down, with superstring theory now accepted science and the multiverse its intellectual foundation.

The excuse for Siegfried’s piece is the Wilczek article about multiverses that I discussed here, where I emphasized only one part of what Wilczek had to say, the part with warnings. Siegfried ignores that part and based on Wilczek’s enthusiasm for some multiverse research takes him as a fellow multiverse maniac and his article as a club to beat those without the exceptional vision necessary to believe in superstrings and the multiverse. Besides David Gross, I’m not seeing a lot of prominent theorists standing up to this kind of nonsense, leaving those invested in failed superstring ideology with the road clear to turn fundamental physics into pseudo-science, helped along by writers like Siegfried.

Update: A commenter points to this from Wilczek, noting that his multiverse enthusiasm is less than Siegfried’s.

Update: Ashutosh Jogalekar at The Curious Wavefunction has a similar reaction to the Siegfried piece.

Update: There’s an FQXI podcast up now (see here), with Wilczek discussing the multiverse.

Posted in Multiverse Mania | 73 Comments

Quick Links

  • At HEP blogs you should be reading already, there’s Tommaso Dorigo on 5 sigma (with more promised to come), and Jester on the lack of a definite BSM energy scale. Jester puts his finger on the big problem facing HEP physics. In the past new machines could be justified since we could point to new phenomena that pretty much had to turn up in the energy range being opened up by the machine (Ws and Zs at the SPS, the top at the Tevatron, the Higgs at the LHC). Now though, there’s nothing definite to point to as likely to show up at the energy scale of a plausible next machine. Jester includes a graphic from a recent Savas Dimopoulos talk characterizing the current situation in terms of chickens running around with their heads cut off, which seems about right.
  • The black hole information paradox has been around for nearly forty years, with the story 10 years ago that it supposedly had been resolved by AdS/CFT and string theory. For the past year or so arguments have been raging about “firewalls” and a version 2.0 of the paradox, which evidently now is not resolved by AdS/CFT and string theory. I couldn’t tell if there was much to this argument, but the fact that there’s a Lubos rant about how it’s all nonsense made me think maybe there really is something to it. As usual though, my interest in quantum gravity questions that have nothing to say about unification is limited. For those with more interest in this, I’ll just point to today’s big article in the New York Times, and next week’s workshop at KITP where the latest iterations will get hashed out. For more on the challenge this argument poses to the idea that AdS/CFT gives a consistent picture of quantum gravity, see this recent talk by Polchinski.
  • For another challenge to orthodoxy from someone at UCSB, Don Marolf has a new preprint out arguing that strings are not needed to understand holography:

    Stringy bulk degrees of freedom are not required and play little role even when they exist.

Posted in Uncategorized | 9 Comments

Never Give Up

Alok Jha has a piece in yesterday’s Guardian about the failure to find SUSY. His conclusion, I think, gets the current situation right:

Or, as many physicists are now beginning to think, it could be that the venerable theory is wrong, and we do not, after all, live in a supersymmetric universe.

An interesting aspect of the article is that Jha asks some SUSY enthusiasts about when they will give up if no evidence for SUSY appears:

[Ben] Allanach says he will wait until the LHC has spent a year or so collecting data from its high-energy runs from 2015. And if no particles turn up during that time? “Then what you can say is there’s unlikely to be a discovery of supersymmetry at Cern in the foreseeable future,” he says.

Allanach has been at this for about 20 years, and here’s what he has to say about the prospect of failure:

If the worst happens, and supersymmetry does not show itself at the LHC, Allanach says it will be a wrench to have to go and work on something else. “I’ll feel a sense of loss over the excitement of the discovery. I still feel that excitement and I can imagine it, six months into the running at 14TeV and then some bumps appearing in the data and getting very excited and getting stuck in. It’s the loss of that that would affect me, emotionally.”

John Ellis has been in the SUSY business even longer, for 30 years or so, and he’s not giving up:

Ellis, though confident that he will be vindicated, is philosophical about the potential failure of a theory that he, and thousands of other physicists, have worked on for their entire careers.

“It’s better to have loved and lost than not to have loved at all,” he says. “Obviously we theorists working on supersymmetry are playing for big stakes. We’re talking about dark matter, the origins of mass scales in physics, unifying the fundamental forces. You have to be realistic: if you are playing for big stakes, very possibly you’re not going to win.”

But, just because you’re not going to win, that doesn’t mean you have to ever admit that you lost:

John Ellis, a particle theorist at Cern and King’s College London, has been working on supersymmetry for more than 30 years, and is optimistic that the collider will find the evidence he has been waiting for. But when would he give up? “After you’ve run the LHC for another 10 years or more and explored lots of parameter space and you still haven’t found supersymmetry at that stage, I’ll probably be retired. It’s often said that it’s not theories that die, it’s theorists that die.”

There may be a generational dividing line somewhere in the age distribution of theorists, with those above a certain age likely to make the calculation that, no matter how bad things get for SUSY and string theory unification, it’s better to go to the grave without admitting defeat. The LHC will be in operation until 2030 or so, and you can always start arguing that 100 TeV will be needed to see SUSY (see here), ensuring that giving up won’t ever be necessary except for those now still wet behind the ears.

For another journalist’s take on the state of SUSY, this one Columbia-centric and featuring me as skeptic, see here.

Posted in Uncategorized | 60 Comments

The Next Machine

For the last week or so US HEP physicists have been meeting in Minneapolis to discuss plans for the future of US HEP. Some of the discussions can be seen by looking through the various slides available here. A few days earlier Fermilab hosted TLEP13, a workshop to discuss plans for a new very large electron-positron machine. There is a plan in place (the HL-LHC) for upgrading the LHC to higher luminosity, with operations planned until about 2030. Other than this, though, there are no definite plans at present for what the next machine at the energy frontier might be. Some of the considerations in play are as follows:

  • The US is pretty much out of the running, with budgets for this kind of research much more likely to get cut than to get the kinds of increases a new energy frontier machine would require. Projects with costs up to around $1 billion could conceivably be financed in coming years, but for the energy frontier, one is likely talking about $10 billion and up.
  • Pre-LHC, attention was focused on prospects for electron-positron linear colliders, specifically the ILC and CLIC projects. The general assumption was that LEP, which reached 209 GeV in 2000, was the last circular electron-positron collider. The problem is that, at fixed radius, synchrotron radiation losses grow as the fourth power of the beam energy, and LEP was already drawing a sizable fraction of the total power available at Geneva (rough numbers for this and the magnet/ring-size scalings below are sketched after this list). Linear accelerators don’t have this problem, but they do have trouble achieving high luminosity, since one is not repeatedly colliding the same stored bunches.

    The hope was that the LHC would discover not just the Higgs, but all sorts of new particles. Once the mass of such new particles was known, ILC or CLIC technology would give a design for an appropriate machine to study them in ways not possible at a proton-proton machine. These hopes have not worked out so far, making it now appear quite unlikely that there are such new particles at ILC/CLIC-accessible energies. It remains possible that the Japanese will decide to fund an ILC project, even without the appealing target of a new particle besides the Higgs to study.

  • The LHC has told us the Higgs mass, making it now possible to consider what sort of electron-positron collider would be optimal for studying the physics of the Higgs, something one might call a “Higgs factory”. It turns out that a center-of-mass energy of about 240 GeV is optimal for Higgs production (the cross section for e+e- → ZH peaks not far above its threshold of m_Z + m_H, roughly 216 GeV). This is easily achievable with the ILC, but since it is not that much higher than LEP, there is now interest in the possibility of a circular collider as a Higgs factory. There is a proposal called LEP3 (discussed on this blog here) for putting such a collider in the LHC tunnel, but it is unclear whether such a machine could coexist with the LHC, and no one wants to shut down the LHC before a 2030 timescale.
  • Protons are much heavier than electrons, so synchrotron radiation losses are not the problem; instead it is the strength of the dipole magnets needed to keep them in a circular orbit. To get to higher proton-proton collision energies in the same tunnel, one needs higher-strength magnets, with collision energy scaling linearly with field strength. The LHC magnets are about 8 Tesla; the current technology limit for appropriate magnets is about 11 Tesla. The possibility of an HE-LHC, operating at 33 TeV with 20 Tesla magnets, is under study, but this technology is still quite a ways off. Again, the time scale for such a machine would be post-2030.
  • The other way to get to higher proton-proton energies is to build a larger ring, with energy scaling linearly with the size of the ring (for fixed magnet strength); see the sketch after this list. Long-term thinking at CERN now seems to be focusing on the construction of a much larger ring, of size 80-100 km. One could reach 100 TeV energies with either 20 Tesla magnets and an 80 km ring, or 16 Tesla magnets and a 100 km ring (such a machine is being called a VHE-LHC). If such a tunnel were to be built, one could imagine first populating it with an electron-positron collider, and this proposal is being called TLEP. It would operate at energies up to 350 GeV and would be an ideal machine for precision studies of the Higgs. It could also be run at very high luminosity at lower energies, significantly improving on the electroweak measurements made at LEP (the claim is that LEP-size data sets could be reproduced every 15 minutes of running). Optimistic time-lines would have TLEP operating around 2030, replaced by the VHE-LHC in the 2040s.
  • For more about TLEP, see the talks here. The final talk of the TLEP workshop wasn’t about TLEP, but rather Arkani-Hamed on the VHE-LHC (it sounds like maybe he’s not very interested in the Higgs factory idea). He ends with

    EVERY student/post-doc/person with a pulse (esp. under 35) I know is ridiculously excited by even a glimmer of hope for a 100 TeV pp collider. These people don’t suffer from SSC PTSD.

    Looking at the possibilities, I do think TLEP/VHE-LHC looks like the currently most promising route for the future of CERN and HEP physics (new technology, e.g. a muon collider, might change this). Maybe I don’t have a pulse though, since I can’t say that I’m ridiculously excited by just a glimmer of VHE-LHC hope for a time frame past my life expectancy.

    A 100 km tunnel would be even larger than the planned SSC tunnel (89 km), and one doesn’t have to suffer from SSC post-traumatic stress disorder to worry about whether a project this large can be successfully funded and built (in very rough numbers, I’d guess one is talking about costs on the scale of $20 billion). My knowledge of EU science funding issues is insufficient to have any idea whether money on this scale is a possibility. On the other hand, with the increasing concentration of wealth in the hands of an increasingly large number of multi-billionaires, perhaps this just needs the right rich guy for it to happen.

    Someone is going to have to do a better job than Arkani-Hamed of finding an argument that will sell this to the rest of the scientific community. His main argument is that such a machine would allow us to improve the LHC’s ultimate “fine-tuning” number of at least 10^-2 to a number like 10^-4, or maybe finally see some SUSY particles. I don’t think this argument is going to get $20 billion: “we thought we’d see all this stuff at the LHC because we were guessing some number we don’t understand was around one. We saw nothing, and it turns out the number is small, no bigger than one in a hundred. Now we’d like to spend $20 billion to see if it’s smaller than one in a hundred, but bigger than one in ten thousand.”
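
To put rough numbers on the scalings in the list above, here is a back-of-the-envelope sketch (a toy Python calculation; the reference values for the LHC and LEP are approximate, the bending radius is crudely assumed to track the circumference, and nothing here corresponds to official machine parameters). It lands within a few percent of the 33 TeV and 100 TeV figures quoted above, and shows why a very large ring tames synchrotron losses for an electron machine:

```python
# Back-of-the-envelope scalings for the machines discussed above.
# Reference numbers (LHC: ~8.3 T dipoles, 26.7 km, 14 TeV; LEP2: ~3.4 GeV
# lost per turn at 104.5 GeV/beam) are approximate, for illustration only.

def pp_energy_tev(field_t, circumference_km):
    """Proton-proton collision energy, scaled linearly in dipole field and
    ring size from the LHC."""
    return 14.0 * (field_t / 8.3) * (circumference_km / 26.7)

def ee_loss_per_turn_gev(beam_energy_gev, circumference_km):
    """Electron synchrotron energy loss per turn, scaled as (energy)^4 / radius
    from LEP2, crudely taking the bending radius proportional to circumference."""
    return 3.4 * (beam_energy_gev / 104.5) ** 4 * (26.7 / circumference_km)

print(f"HE-LHC,  20 T, 27 km ring:  ~{pp_energy_tev(20, 26.7):.0f} TeV")
print(f"VHE-LHC, 20 T, 80 km ring:  ~{pp_energy_tev(20, 80):.0f} TeV")
print(f"VHE-LHC, 16 T, 100 km ring: ~{pp_energy_tev(16, 100):.0f} TeV")
print(f"LEP3 (LHC tunnel), 120 GeV/beam:    ~{ee_loss_per_turn_gev(120, 26.7):.1f} GeV lost per turn")
print(f"TLEP (100 km tunnel), 120 GeV/beam: ~{ee_loss_per_turn_gev(120, 100):.1f} GeV lost per turn")
```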

Posted in Experimental HEP News | 21 Comments

Latest from the Stacks Project

My colleague Johan de Jong has for the last few years been working on an amazing mathematical endeavor he calls the “Stacks Project”. As boring 20th century technology, this is a work-in-progress document (now nearly 4000 pages), available here. But from the beginning Johan (known to his friends as “the Linus Torvalds of algebraic geometry”) has conceptualized this as an open-source project using 21st century technology, including a blog and a GitHub repository.

As of last night, the Stacks Project has many new features, courtesy of impressive work by Johan’s collaborator on this, Pieter Belmans. I was going to write something here describing the new features and how cool they are, but a much better job of this has been done by Cathy O’Neil, aka Mathbabe (by the way, if you’re not reading Cathy’s blog, you should be…). With her permission, I’m cross-posting her new blog entry about this, so, what follows is from Cathy:

The Stacks Project gets ever awesomer with new viz

Here’s a completely biased interview I did with my husband A. Johan de Jong, who has been working with Pieter Belmans on a very cool online math project using d3js. I even made up some of his answers (with his approval).

Q: What is the Stacks Project?

A: It’s an open source textbook and reference for my field, which is algebraic geometry. It builds foundations starting from elementary college algebra and going up to algebraic stacks. It’s a self-contained exposition of all the material there, which makes it different from a research textbook or the experience you’d have reading a bunch of papers.

We were quite neurotic setting it up – everything has a proof, other results are referenced explicitly, and it’s strictly linear, which is to say there’s a strict ordering of the text so that all references are always to earlier results.

Of course the field itself has different directions, some of which are represented in the stacks project, but we had to choose a way of presenting it which allowed for this idea of linearity (of course, any mathematician thinks we can do that for all of mathematics).
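
As a toy illustration of that strict linearity (a minimal sketch with made-up labels, not anything taken from the actual Stacks Project infrastructure), the constraint is simply that every reference points to a result appearing earlier in the global order:

```python
# Toy check of the "strictly linear" constraint: each result may only
# reference results that appear earlier in the text. Labels are made up.
order = ["algebra-1", "sheaves-1", "sheaves-2", "stacks-1"]
references = {
    "algebra-1": [],
    "sheaves-1": ["algebra-1"],
    "sheaves-2": ["algebra-1", "sheaves-1"],
    "stacks-1":  ["sheaves-2"],
}

position = {name: i for i, name in enumerate(order)}
for name in order:
    for ref in references[name]:
        assert position[ref] < position[name], f"{name} cites {ref} out of order"
print("every reference points to an earlier result")
```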

Q: How has the Stacks Project website changed?

A: It started out as just a place you could download the pdf and tex files, but then Pieter Belmans came on board and he added features such as full text search, tag look-up, and a commenting system. In this latest version, we’ve added a whole bunch of features, but the most interesting one is the dynamic generation of dependency graphs.

We’ve had some crude visualizations for a while, and we made t-shirts from those pictures. I even had this deal where, if people found mathematical mistakes in the Stacks Project, they’d get a free t-shirt, and I’m happy to report that I just last week gave away my last t-shirt. Here’s an old picture of me with my adorable son (who’s now huge).

[Image: tshirt_stacks (old photo of Johan with his son)]

Q: Talk a little bit about the new viz.

A: First a word about the tags, which we need to understand the viz.

Every mathematical result in the Stacks Project has a “tag”, which is a four letter code, and which is a permanent reference for that result, even as other results are added before or after that one (by the way, Cathy O’Neil figured this system out).

The graphs show the logical dependencies between these tags, represented by arrows between nodes. You can see this structure in the above picture already.

So for example, if tag ABCD refers to Zariski’s Main Theorem, and tag ADFG refers to Nakayama’s Lemma, then since Zariski depends on Nakayama, there’s a logical dependency, which means the node labeled ABCD points to the node labeled ADFG in the entire graph.

Of course, we don’t really look at the entire graph, we look at the subgraph of results which a given result depends on. And we don’t draw all the arrows either, we only draw the arrows corresponding to direct references in the proofs. Which is to say, in the subgraph for Zariski, there will be a path from node ABCD to node ADFG, but not necessarily a direct link.

Q: Can we see an example?

A: Let’s move to an example for result 01WC, which refers to the proof that “a locally projective morphism is proper”.

First, there are two kinds of heat maps. Here’s one that defines distance as the maximum (directed) distance from the root node. In other words, how far down in the proof is this result needed? In this case the main result 01WC is bright red with a black dotted border, and any result that 01WC depends on is represented as a node. The edges are directed, although the arrows aren’t drawn, but you can figure out the direction by how the color changes. The dark blue colors are the leaf nodes that are farthest away from the root.

Another way of saying this is that the redder results are the results that are closer to it in meaning and sophistication level.

Note if we had defined the distance as the minimum distance from the root node (to come soon hopefully), then we’d have a slightly different and also meaningful way of thinking about “redness” as “relevance” to the root node.

This is a screenshot but feel free to play with it directly here. For all of the graphs, hovering over a result will cause the statement of the result to appear, which is awesome.

Stacks Project — Force (depth) 01WC

Next, let’s look at another kind of heat map, where the color is defined as maximum distance from some leaf node in the overall graph. So dark blue nodes are basic results in algebra, sheaves, sites, cohomology, simplicial methods, and other chapters. The link is the same, you can just toggle between the different metrics.

Stacks Project — Force (height) 01WC
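
To make the two heat-map metrics concrete, here is a minimal sketch (made-up four-letter tags and a toy dependency graph; the actual site generates these graphs dynamically with d3js from the real tag data): “depth” is the longest chain of direct references leading from the root result down to a node, and “height” is the longest chain from a node down to some leaf.

```python
# Toy logical-dependency graph: each tag maps to the tags its proof
# directly references. Tags here are invented for illustration.
# Note ABCD depends on ADFG only through B123: a path, not a direct edge.
deps = {
    "ABCD": ["B123", "C456"],   # the root result being inspected
    "B123": ["ADFG"],
    "ADFG": ["C456"],
    "C456": [],                 # a leaf: no further dependencies
}
ROOT = "ABCD"

from functools import lru_cache

@lru_cache(maxsize=None)
def depth(tag):
    """Longest chain of direct references from the root down to `tag`
    (the 'maximum distance from the root' coloring). Assumes every tag
    in the subgraph is reachable from the root."""
    if tag == ROOT:
        return 0
    return 1 + max(depth(parent) for parent, refs in deps.items() if tag in refs)

@lru_cache(maxsize=None)
def height(tag):
    """Longest chain of direct references from `tag` down to some leaf
    (the 'maximum distance from a leaf' coloring)."""
    if not deps[tag]:
        return 0
    return 1 + max(height(child) for child in deps[tag])

for tag in deps:
    print(tag, "depth:", depth(tag), "height:", height(tag))
```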

Next we delved further into how results depend on those different topics. Here, again for the same result, we can see the extent to which that result depends on results from the various chapters. If you scroll over the nodes you can see more details. This is just a screenshot but you can play with it yourself here, and you can collapse it in various ways corresponding to the internal hierarchy of the project.

Stacks Project — Collapsible 01WC

Finally, we have a way of looking at the logical dependency graph directly, where each result node is labeled with its tag and colored by “type”: whether it’s a lemma, proposition, theorem, or something else. It also annotates the results which have separate names. Again a screenshot, but play with it here, it rotates!

Stacks Project — Cluster 01WC

 

Check out the whole project here, and feel free to leave comments using the comment feature!

Posted in Uncategorized | 12 Comments

Bankrupting Physics

I just spent a depressing and tedious few hours reading through Bankrupting Physics, an English translation of Alexander Unzicker’s 2010 German book Vom Urknall zum Durchknall.

When I started reading the thing I wasn’t expecting much, but figured it would be some sort of public service to take the time to identify what Unzicker had to say that made sense and what didn’t, and then write something distinguishing the two here. After a while though, it became clear that Unzicker is just a garden-variety crank, of a really tedious sort. The best advice about the book is the usual in this situation: just ignore it, since no good can possibly come from wasting time engaging with this nonsense. I have no idea why any publisher, in Germany or here, thought publishing this was a good idea.

If you must know though, here’s a short summary of what’s in the book. The first half is about gravitation, cosmology and astrophysical observations. Unzicker’s obsessive idea, shared with innumerable other cranks, is that any scientific theory beyond one intuitively clear to them must be nonsense. Similarly, any experimental result beyond one where they can easily understand and analyze the data themselves is also nonsense. He’s a fan of Einstein, although he thinks general relativity somehow needs to be fixed, something to do with it getting phenomena involving small accelerations wrong. There are endless complaints about how cosmology involves too many parameters, and how dark matter/energy shows that physicists really understand nothing.

When he gets to particle physics, we learn that things went wrong back when physicists started invoking a symmetry that wasn’t intuitively obvious, isospin symmetry. According to Unzicker, symmetries in particle theory are all a big mistake, “the standard model barely predicts anything”, “the standard model can actually accommodate every result”, and endless other similar nonsense. As for the experimental side of things, he takes a comment from Feynman about renormalization in QED, claims it means that there is no understanding of the production of photons at high energy, and then uses this to dismiss data analysis at HEP experiments: “It’s just ridiculous”. High energy physics experiments are all just a big scam, with the physicists involved unwilling to admit this, since they’ve wasted so much money on them.

The last part of the book contains lots of criticism of string theory, etc., much of it parroting my book and blog. According to Unzicker:

Woit does a great job in debunking the string and SUSY crap. Unfortunately, he has pretty mainstream opinions with respect to the Standard Model.

Well, maybe he does get something right… I have to admit that one of the things that every so often makes me wonder if I’m completely misguided, and maybe there is a lot more value to strings/SUSY/branes/extra dimensions etc. than I think, is reading rants like Unzicker’s.

So, my strong advice would be to do your best to ignore this. Luckily, there’s an infinitely better book coming out here in the US at the same time: Jim Baggott’s Farewell to Reality, which I highly recommend. It seems likely that the two books will get reviewed together, giving Unzicker far more attention than he deserves. If so, at least this will provide a real-life experiment indicating whether book reviewers can tell sense from nonsense.

Posted in Book Reviews | 34 Comments

Where are we heading?

Every summer the IAS in Princeton runs a program for graduate students and postdocs called “Prospects in Theoretical Physics”. It’s going on now, with this year’s topic LHC Physics. Much of the program is devoted to the important but complex technical issues of extracting physics from LHC data. Things began, though, with a talk on Where are we heading? from Nati Seiberg, designed to explain to students how they should think about the significance of the LHC results and where they were taking the field.

Most of the talk was about the hierarchy problem and “naturalness”, with the forward-looking conclusion the same one that Seiberg’s colleague Arkani-Hamed has been aggressively pushing: the main significance of LHC results will be telling us that the world is either “natural” (likely by discovering SUSY) or “unnatural” (in which case there’s a multiverse and it’s hopeless to even try to predict SM parameters). Given the negative results about SUSY so far, this conclusion pretty much means that the students at the IAS are being told that the LHC results mean it’s the multiverse, and they shouldn’t even think about trying to figure out where the SM comes from since that’s a lost cause. The talk ends with the upbeat claim that this is a “win-win situation”: reaching the conclusion that the LHC has shown we can’t learn more about where the SM came from will be a great scientific advance and “The future will be very exciting!”. Seiberg does at one point make an interesting comment that indicates that he’s not completely on-board with this conclusion. He notes that there’s a “strange coincidence” that theorists are making this theoretical argument about the necessity of giving up at just exactly the same time in our history that we have run out of technological ability to explore shorter distances. A “strange coincidence” indeed…

For more conventional wisdom along these lines, see Naturally Unnatural from Philip Gibbs, which also argues that what we are learning from the LHC is that we must give up and embrace the multiverse.

Frank Wilczek has just made available on his website a new paper on Multiversality. It has the usual arguments for the multiverse, although unlike Seiberg/Arkani-Hamed he doesn’t try to claim that this is an exciting positive development, closing instead with a “lamentation”:

I don’t see any realistic prospect that anthropic or statistical selection arguments – applied to a single sample! – will ever lead to anything comparable in intellectual depth or numerical precision to the greatest and most characteristic achievements of theoretical physics and astrophysics…

there will be fewer accessible features of the physical world for fundamental theory to target. One sees these trends, for example, in the almost total disconnect between the subject matter of hep-th and hep-ex.

and a “warning”:

There is a danger that selection effects will be invoked prematurely or inappropriately, and choke off the search for deeper more consequential explanations of observed phenomena. To put it crudely, theorists can be tempted to think along the lines “If people as clever as us haven’t explained it, that’s because it can’t be explained – it’s just an accident.”

He does see possibilities for understanding more about the SM in two places: the SUSY GUT unification of couplings, and axions as an explanation of the smallness of the QCD theta parameter. The last part of the paper is about axion cosmology and anthropics. Wilczek has written about the stories of the 1981 origin of the SUSY GUT unification argument and the 1975 birth of the axion. It’s striking that we’re now, respectively, 32 and 38 years later without any idea whether these ideas explain anything. A depressing possible answer to “Where are we heading?” would be an endless future of multiverse mania, with a short canonical list of ancient but accepted ideas about fundamental theory (SUSY GUTs, string theory, axions) that can never be tested.

Posted in Multiverse Mania | 52 Comments

News From All Over

  • I confess to mostly finding “philosophy of physics” arguments not very helpful for understanding anything, but for those who feel differently, some new things to look at are a Scientific American article Physicists Debate Whether the World is Made of Particles or Fields or Something Else Entirely, an interview with Jonathan Bain, an interview with Tim Maudlin, a debate between John Ellis, Lawrence Krauss and theologian Don Cupitt about Why is there something rather than nothing?, and the talks at a UCSC Philosophy of Cosmology Summer School. Since the last of these was funded by the Templeton Foundation, it ended with several talks on “Implications of cosmology for the philosophy of religion”. These included a detailed argument that the explanation for the laws of nature is “there is a perfect being”, contrasting this with another argument favored at the Summer School: “the multiverse did it”.
  • This week the Perimeter Institute will host Loops 13, devoted to loop quantum gravity and other quantum gravity approaches. While it’s also funded by Templeton, the organizers seem to have managed to keep God out of this one.
  • At CERN, Amplitudes, Strings and Branes is on-going. Philip Gibbs has an amusing argument that this and Loops 13 are The Same Bloody Thing.
  • One thing the LQG and Amplitudes people do share is that some of their most important ideas come from the same person: Roger Penrose (who, by the way, would be a good candidate for the Fundamental Physics Prize, although his distaste for string theory might be a disqualifier). There’s a long interview with him at The Ideas Roadshow, mainly about his “Cyclic Universe” ideas.
  • The Simons Foundation has been publishing some excellent science reporting, and now has an online publication they’re calling Quanta Magazine. The latest story there is a very good piece on the search for dark matter from Jennifer Ouellette. The Simons Center at Stony Brook now has a newsletter about their activities.
  • Another on-going conference is one of the big yearly HEP conferences, EPS HEP 2013 in Stockholm. CMS and LHCb have impressive new results about rare B decays, timed for this conference. For the details, see Tommaso Dorigo. There are also CMS and CERN press releases.

    Last year similar but less accurate results were advertised as putting SUSY “in the hospital”, which some people objected to on the grounds that it was already in trouble and this kind of result doesn’t make things much worse. Resonaances had the details, summarizing this as “another handful of earth upon the coffin”. The CERN press office tries to put the best SUSY spin on this that it can:

    One popular theory is known as supersymmetry, SUSY for short. It postulates the existence of a new type of particle for every Standard Model particle we know, and some of these particles would have just the right properties to make up a large part of the dark universe. There are many SUSY models in circulation, and SUSY is just one of many theoretical routes to physics beyond the Standard Model. Today’s measurements allow physicists to sort between them. Many are incompatible with the new measurements, and so must be discarded, allowing the theory community to work on those that are still in the running.

  • Finally, for those with mathematical interests who have waded through the above, Terry Tao has a remarkable long expository piece about the Riemann hypothesis, ranging from analytic number theory aspects through the function field case and l-adic cohomology.

Update: For more from Penrose, see this recent talk in Warsaw.

Update: Studies in History and Philosophy of Modern Physics is planning a special issue on the significance of the Higgs discovery, the call for papers is here.

Posted in Uncategorized | 21 Comments