Official announcements won’t come out until the Moriond conference in the first week of March, but reliable rumors are starting to trickle out about what the Higgs news will be. ATLAS will report (based on about 21 fb-1 of 8 TeV data + the 2011 7 TeV data) that the gamma-gamma excess has gone down slightly (from 1.8 to 1.65 times the SM value). That’s still about 1.5 standard deviations high, but it isn’t encouraging if you want something that disagrees with the SM.
At the AAAS 2013 meeting in Boston this past week, a press conference was held to update the media on the Higgs. What the media took away from it was that the Higgs may spell doom, unless supersymmetry saves us. This isn’t just doom for HEP research; it’s doom for the entire universe:
“At some point, billions of years from now, it’s all going to be wiped out…. The universe wants to be in a different state, so eventually to realise that, a little bubble of what you might think of as an alternate universe will appear somewhere, and it will spread out and destroy us,” Lykken said at AAAS.
This is based on a renormalization group calculation extrapolating the Higgs effective potential to its value at energies many many orders of magnitude above LHC energies. To believe the result you have to believe that there is no new physics and we completely understand everything exactly up to scales like the GUT or Planck scale. Fan of the SM that I am, that’s too much for even me to swallow as plausible.
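For those who want to see roughly what is being extrapolated, here is a sketch (one loop only; the published analyses use two- and three-loop running plus careful matching at the weak scale). The Higgs quartic self-coupling λ runs with energy scale μ according to

$$16\pi^2\,\frac{d\lambda}{d\ln\mu} \;\simeq\; 24\lambda^2 + 12\lambda y_t^2 - 9\lambda g^2 - 3\lambda g'^2 - 6 y_t^4 + \frac{3}{8}\left[2g^4 + (g^2+g'^2)^2\right],$$

where y_t is the top Yukawa coupling and g, g′ are the electroweak gauge couplings. For a Higgs mass around 126 GeV the large negative top-quark contribution (the −6y_t^4 term) drives λ through zero far above LHC energies, and a negative quartic coupling is what the “metastable vacuum” language refers to. The whole calculation assumes nothing new enters anywhere between here and there.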
If you are being kept awake by the Higgs metastability issue, you’ll want to know the Higgs mass as accurately as possible. The rumor from ATLAS is that the difference in best fit masses between the gamma-gamma and ZZ channels has narrowed, with gamma-gamma moving up slightly to 126.8 GeV and ZZ moving up quite a bit, to 124.3 GeV.
Knowing the Higgs mass as accurately as possible will require a lepton collider, either e+e- ILC or a muon collider, or both. Hadron machines can do search and discovery, and they can access states which lepton colliders cannot directly produce, but high precision spectroscopy requires lepton colliders.
“that’s too much for even me to swallow as plausible”
Shaposhnikov and Wetterich managed to *predict* the Higgs mass in 2009, based on the assumption of a grand desert, and some Planck-scale boundary conditions. Doesn’t that count for something?
Mitchell Porter,
That the boundary condition of the quartic Higgs coupling going to zero gives the right Higgs mass definitely counts as at least intriguing, if not more. That doesn’t though imply enough understanding of what is going on at those energy scales to believe the metastability calculation. It also doesn’t imply that it’s a good idea for a theorist sharing a press conference about the Higgs with two LHC experimenters to start jabbering about the annihilation of the universe, ensuring that’s the story the press covers…
In order to understand the significance of the new measurement, we need to know not only its central value but also its uncertainty. One would expect the uncertainty to go down with more data. How did you compute the 1.5 sigma excess you report?
Shaposhnikov and Wetterich are assuming asymptotic safety (i.e. subtle hypothetical new physics justified on quantum gravity grounds), which from a practical nuts-and-bolts perspective is just another way of saying that the running of the coupling constants and masses with energy scale is slightly different from what the Standard Model beta functions decree, at many, many orders of magnitude above anything that could ever be physically recreated, based empirically on a handful of low energy data points, in a model that acknowledges that it does not consider quantum gravity in any respect.
Generically, any high energy scale tweak to one or more of the beta functions of the Standard Model is going to have the same potential to resolve the metastability issue, and given the closeness of the Standard Model result to true stability, the nudge need only be ever so slight.
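To make that concrete, here is a minimal one-loop numerical sketch (my own rough MS-bar inputs at mu = m_t, no threshold matching, so the output is qualitative only) that finds the scale where the quartic coupling crosses zero and shows how far that scale moves for a roughly 2% shift in the top Yukawa input:

# Rough one-loop-only illustration (not the multi-loop analysis in the literature):
# run the SM couplings up from m_t and find where the Higgs quartic coupling
# lambda turns negative, for two slightly different top Yukawa inputs.
# All inputs are approximate MS-bar values at mu = m_t, so qualitative only.
import numpy as np
from scipy.integrate import solve_ivp

def betas(t, y):
    """One-loop SM beta functions; t = ln(mu/GeV), y = (g1, g2, g3, yt, lam).
    g1 is the hypercharge coupling g' (SM normalization, not GUT-normalized)."""
    g1, g2, g3, yt, lam = y
    k = 1.0 / (16.0 * np.pi**2)
    return [
        k * (41.0 / 6.0) * g1**3,                              # d g1 / d ln mu
        -k * (19.0 / 6.0) * g2**3,                             # d g2 / d ln mu
        -k * 7.0 * g3**3,                                      # d g3 / d ln mu
        k * yt * (4.5 * yt**2 - 8.0 * g3**2 - 2.25 * g2**2
                  - (17.0 / 12.0) * g1**2),                    # d yt / d ln mu
        k * (24.0 * lam**2 + 12.0 * lam * yt**2 - 9.0 * lam * g2**2
             - 3.0 * lam * g1**2 - 6.0 * yt**4
             + 0.375 * (2.0 * g2**4 + (g2**2 + g1**2)**2)),    # d lam / d ln mu
    ]

def crossing_scale(yt0, lam0=0.13):
    """Scale (GeV) where lambda first goes negative, or None if it never does."""
    t0, t1 = np.log(173.0), np.log(1.0e19)      # from m_t up to ~ Planck scale
    y0 = [0.36, 0.65, 1.16, yt0, lam0]          # rough couplings at mu = m_t
    sol = solve_ivp(betas, (t0, t1), y0, dense_output=True, rtol=1e-8)
    mu = np.exp(np.linspace(t0, t1, 4000))
    lam = sol.sol(np.log(mu))[4]
    neg = mu[lam < 0.0]
    return neg[0] if neg.size else None

for yt0 in (0.94, 0.92):                        # ~2% shift in the top Yukawa
    mu_c = crossing_scale(yt0)
    where = ("%.1e GeV" % mu_c) if mu_c is not None else "nowhere below 1e19 GeV"
    print("yt(m_t) = %.2f : lambda first turns negative at mu ~ %s" % (yt0, where))

Even at this crude level the crossing scale is extremely sensitive to the weak-scale inputs, which is the sense in which only a very slight nudge to the running is needed to change the stability verdict.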
I’m slightly puzzled by this stance. Normally, our host doesn’t like going BSM without evidence and argues that there hasn’t been any evidence turned up yet for anything BSM. Here, from what I can discern, the calculation at issue is “radically conservative” in the sense that it takes the SM as gospel truth and extrapolates it as far as possible. That sounds like exactly the sort of exercise you’d want to do with a theory you were pretty confident about. I mean, extending quantum mechanics to black holes is a pretty big reach beyond existing data, but that activity doesn’t seem to have gotten similar flak around here, presumably because people are very confident about QM. What am I missing?
Curlo,
The uncertainty has gone down, but so has the deviation from the SM value. At least that’s the rumor….
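(For what it’s worth, this is just my own back-of-the-envelope reading of the rumored numbers, not an official ATLAS figure: a central value of 1.65 sitting about 1.5 sigma above the SM value of 1 corresponds to an uncertainty of roughly (1.65 − 1)/1.5 ≈ 0.4 in units of the SM rate.)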
Peter,
Any word on the ATLAS “Twin Peaks”?
Thanks for the updates.
Excited for Moriond,
P
P,
See the last paragraph of the posting…
Dear Peter,
You said you’d write about the new Arkani-Hamed et al paper on supersymmetry. I hope you haven’t forgotten.
Thanks a lot for your updates.
Best,
Pi
If the gamma-gamma excess has gone down from 1.8 to 1.65 times the SM value, it means that the excess in the new data set is considerably lower, right? Isn’t that what you would expect from a statistical fluke?
Pi, Nima doesn’t have a new paper on supersymmetry, does he? Or do you mean the paper with many other people on the Grassmannian in scattering amplitudes?
MathPhys,
Here it is: http://arxiv.org/abs/arXiv:1212.6971
Simply Unnatural Supersymmetry.
Meanwhile, you might find this interesting: https://twitter.com/grahamfarmelo/status/304313270511214592
Peter,
do the rumors imply that the possibility that the Higgs was just a statistical fluke is gone forever? After all, the bump is very small compared to the background. My colleagues say, behind closed doors, that even if the Higgs were a statistical fluke, human nature would prevent people from admitting it. This is because of the French and Italian concept of “honour”, they say.
But is the data strong enough now for a Nobel prize? Until now, the number of events was extremely small, and there easily could be some selection bias. You once mentioned in your blog that “half of ATLAS was sleeping with half of CMS”…
> This is because of the French and Italian concept of “honour”, they say.
What!
1) This is not a collaboration by French and Italian research groups only.
2) We are not talking about Imperial Japan, Prussia Gone Bad or US politicians. We are talking about responsible people able and willing to check their ideas and data against an actually existing oracle over long timescales.
3) The fact that the FTL neutrino results were thoroughly investigated and ultimately retracted empirically proves this bizarro idea wrong.
Paolo,
The evidence for the Higgs is overwhelming. There are large, well-defined peaks undeniably visible in two different channels, at the two different experiments. Any one of these four peaks would be quite convincing evidence that something is there; seeing four independent ones at close to the same mass removes any possible doubt. It may have even gotten to the point that the experiments will stop reporting combined statistical significance numbers for their Higgs results, since the number would be something large and uninteresting.
As Yatima points out, whatever the problem caused by the honorable French and Italians, there are plenty of dishonorable Americans and others to make up for it. As for confirmation bias and bed partners, the phenomenon of people taking delight in the prospect of proving that their nearest and dearest is wrong, and they are right, is not exactly unknown….