The hundreds of expository articles about supersymmetry written over the last twenty years or more tend to begin by giving one of two arguments to motivate the idea of supersymmetry in particle physics. The first of these goes something like “supersymmetry unifies bosons and fermions, isn’t that great?” This argument doesn’t really make a whole lot of sense since none of the observed bosons or fermions can be related to each other by supersymmetry (basically because there are no observed boson-fermion pairs with the same internal quantum numbers). So supersymmetry relates observed bosons and fermions to unobserved, conjectural fermions and bosons for which there is no experimental evidence.
Smarter people avoid this first argument since it is clearly kind of silly, and use a second one: the “fine-tuning” argument first forcefully put forward by Witten in lectures about supersymmetry at Erice in 1981. This argument says that in a grand unified theory extension of the standard model, there is no symmetry that can explain why the Higgs mass (or electroweak symmetry breaking scale) is so much smaller than the grand unification scale. The fact that the ratio of these two scales is so small is “unnatural” in a technical sense, and its small size must be “fine-tuned” into the theory.
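To put rough numbers on this (a schematic version of the problem, my own illustration rather than a calculation from Witten's lectures): radiative corrections drag the Higgs mass-squared up toward the highest scale in the theory,

$$\delta m_H^2 \sim \frac{g^2}{16\pi^2}\, M_{GUT}^2,$$

so with $M_{GUT} \sim 10^{16}$ GeV and $m_H \sim 10^2$ GeV, the bare parameter has to cancel these corrections to roughly one part in $(M_{GUT}/m_H)^2 \sim 10^{28}$. Supersymmetry is invoked precisely because it cancels the quadratic corrections between bosons and fermions, removing the need for this fine-tuning.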
This argument has had a huge impact over the last twenty years or so. Most surveys of supersymmetry begin with it, and it justifies the belief that supersymmetric particles with masses accessible at the LHC must exist. Much of the experimental program at the Tevatron and the LHC revolves around looking for such particles. If you believe the fine-tuning argument, the energy scale of supersymmetry breaking can’t be too much larger than the electroweak symmetry breaking scale, i.e. it should be in the range of hundreds of GeV to 1 TeV or so. Experiments at LEP and the Tevatron have ruled out much of the energy range in which one expects to see something, and the fine-tuning argument is already starting to be in conflict with experiment; for more about this, see a recent posting by Jacques Distler.
Last week at Davis I was surprised to hear Lenny Susskind attacking the fine-tuning argument, claiming that the distribution of possible supersymmetry breaking scales in the landscape was probably pretty uniform, so there was no reason to expect it to be small. He believes that the anthropic explanation of the cosmological constant shows that the “naturalness” paradigm that particle theorists have been invoking is misguided, so there is no valid argument for the supersymmetry breaking scale to be low.
I had thought this point of view was just Susskind being provocative, but today a new preprint appeared by Nima Arkani-Hamed and Savas Dimopoulos entitled “Supersymmetric Unification Without Low Energy Supersymmetry and Signatures for Fine-Tuning at the LHC”. In this article the authors go over all the problems with the standard picture of supersymmetry and describe the last twenty-five years or so of attempts to address them as “epicyclic model-building”. They claim that all these problems can be solved by adopting the anthropic principle (which they rename the “structure” or “galactic” or “atomic” principle to try and throw off those who think the “anthropic” principle is not science) to explain the electroweak breaking scale, and assuming the supersymmetry breaking scale is very large.
It’s not surprising that you can solve all the well-known problems of supersymmetric extensions of the standard model by claiming that all effects of supersymmetry occur only at unobservably large energy scales, so that all we will ever see is the non-supersymmetric standard model. By itself this idea is as silly as it sounds, but they do have one twist on it. They claim that even if the supersymmetry breaking scale is very high, one can find models where chiral symmetries keep the masses of the fermionic superpartners small, perhaps at observably low energies (fermion masses break chiral symmetry, so they can be protected by such a symmetry in a way that scalar masses cannot). They also claim that in this case the standard calculation of running coupling constants still more or less works.
The main experimental argument for supersymmetry has always been that the running of the three gauge coupling constants is such that they meet more or less at a point, corresponding to a unification energy not too much below the Planck scale, in a way that works much better with supersymmetry than without it. It turns out that this calculation works very well at one loop, but is a lot less impressive when you go to two loops. Read as a prediction of the strong coupling constant in terms of the other two, it comes out 10-15% different from the observed value.
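To see concretely what this unification calculation involves, here is a minimal sketch in Python. It uses the standard one-loop renormalization group running with textbook beta coefficients and rough measured values of the couplings at the Z mass (the specific input numbers are my own ballpark figures, not taken from the paper): it finds the scale where the first two couplings meet and reads off the implied prediction for the strong coupling.

```python
import math

# Approximate measured inputs at the Z mass (ballpark values, my own inputs):
MZ = 91.19               # GeV
alpha_em_inv = 127.9     # 1/alpha_em(MZ)
sin2_thetaW = 0.2312     # sin^2(theta_W) at MZ
alpha_s = 0.118          # measured strong coupling at MZ

# Inverse couplings in the usual SU(5) normalization:
a1_inv = 0.6 * (1.0 - sin2_thetaW) * alpha_em_inv  # 1/alpha_1 = (3/5) cos^2(theta_W)/alpha_em
a2_inv = sin2_thetaW * alpha_em_inv                # 1/alpha_2 = sin^2(theta_W)/alpha_em
a3_inv = 1.0 / alpha_s                             # 1/alpha_3 (measured)

# One-loop running: 1/alpha_i(mu) = 1/alpha_i(MZ) - (b_i / 2pi) * ln(mu/MZ),
# with the standard one-loop beta coefficients:
b_mssm = (33.0 / 5.0, 1.0, -3.0)          # supersymmetric (MSSM)
b_sm = (41.0 / 10.0, -19.0 / 6.0, -7.0)   # non-supersymmetric standard model

def unification_check(b):
    b1, b2, b3 = b
    # Scale t = ln(mu/MZ) at which alpha_1 and alpha_2 meet:
    t = 2.0 * math.pi * (a1_inv - a2_inv) / (b1 - b2)
    a_gut_inv = a1_inv - b1 * t / (2.0 * math.pi)
    # Run alpha_3 back down from the meeting point to MZ:
    a3_inv_pred = a_gut_inv + b3 * t / (2.0 * math.pi)
    return MZ * math.exp(t), 1.0 / a3_inv_pred

for label, b in (("MSSM", b_mssm), ("SM", b_sm)):
    m_gut, a3_pred = unification_check(b)
    print(f"{label}: M_GUT ~ {m_gut:.2e} GeV, "
          f"predicted alpha_s(MZ) = {a3_pred:.3f} (measured {alpha_s})")
```

Running this, the supersymmetric case gives a unification scale around 2 x 10^16 GeV and predicts alpha_s(MZ) within a percent or two of the measured 0.118, while the non-supersymmetric case comes out wildly off. The two-loop corrections that degrade the supersymmetric agreement to the 10-15% level mentioned above are not included in this sketch.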
I don’t think the argument for the light fermionic superpartners is particularly compelling. The bottom line here is that two of the most prominent particle theorists around have abandoned the main argument for supersymmetry. Without this pillar, the case for supersymmetry is exceedingly weak, and my guess is that the whole idea of the supersymmetric extension of the standard model is now on its way out.
One other thing of note: in the abstract the authors refer to “Weinberg’s successful prediction of the cosmological constant”. The standard notion of what counts as a prediction of a physical theory has now been defined down to include “predictions” one makes by announcing that one has no idea what is causing the phenomenon under study.