Susskind: String theory not a complete picture of how quantum gravity works

For the latest on quantum gravity, readers might want to look at talks from some events of the last couple weeks. At the new ICTP-SAIFR theoretical physics institute in São Paulo, a school on quantum gravity has talks available here, with a follow-up workshop here. At Stanford last week the topic was Frontiers of Quantum Gravity and Cosmology, in honor of Renata Kallosh and Stephen Shenker.

Matt Strassler was at the Stanford conference, and he blogs about it here, describing most of the speakers as “string theorists” who are no longer working on string theory, and most of the quantum gravity talks as not being about string theory (this is also true of the ICTP-SAIFR workshop). I don’t really understand his comment

Why has the controversy gone on so long? It is because the mathematics required to study these problems is simply too hard — no one has figured out how to simplify it enough to understand precisely what happens when black holes form, radiate particles, and evaporate.

since the problem isn’t “too hard” mathematics, but the lack of a consistent theory (which he makes clear later in the posting).

Most remarkably, he described the talk by Lenny Susskind, one of the leading promoters of string theory, as follows:

Susskind stated clearly his view that string theory, as currently understood, does not appear to provide a complete picture of how quantum gravity works. Well, various people have been saying this about string theory for a long time (including ‘t Hooft, and including string theory/gravity experts like Steve Giddings, not to mention various experts on quantum gravity who viscerally hate string theory). I’m not enough of an expert on quantum gravity that you should weight my opinion highly, but progress has been so slow that I’ve been worried about this since around 2003 or so. It’s remarkable to hear Susskind, who helped invent string theory over 40 years ago, say this so forcefully. What it tells you is that the firewall puzzle was the loose end that, when you pulled on it, took down an entire intellectual program, a hope that the puzzles of black holes would soon be resolved. We need new insights — perhaps into quantum gravity in general, or perhaps into string theory in particular — without which these hard problems won’t get solved.

For many years now, the most influential figures in string theory have given up on the idea of using it to say anything about particle physics, and results from the LHC have put nails in that coffin, removing the small remaining hope that SUSY or extra dimensions would be seen at the TeV scale. The “firewall” paradox seems to have made it clear that string theory-inspired AdS/CFT doesn’t resolve the problem of non-perturbative quantum gravity, leading to renewed interest in other approaches. This leaves string theory now as just a “tool” to be used to study topics like heavy-ion physics. Things don’t seem to be working out very well there either.

Posted in Uncategorized | 16 Comments

Trust the math?

The last few days have seen some new revelations about the NSA’s role in compromising NIST standard elliptic curve cryptography algorithms. Evidently this is an old story, going back to 2007; for details see Did NSA Put a Secret Backdoor in New Encryption Standard? from that period. One of the pieces of news from Snowden is that the answer to that question is yes (see here):

Classified N.S.A. memos appear to confirm that the fatal weakness, discovered by two Microsoft cryptographers in 2007, was engineered by the agency. The N.S.A. wrote the standard and aggressively pushed it on the international group, privately calling the effort “a challenge in finesse.”

The NIST has now, six years later, put out a Bulletin telling people not to use the compromised standard (known as Dual_EC_DRBG), and reopening for public comment draft publications that had already been reviewed last year. Speculation is that there are other ways in which NIST standard elliptic curve cryptography has been compromised by the NSA (see here for some details of the potential problems).
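For readers wondering what a back door in a random number generator even looks like, here is a minimal sketch of the Dual_EC_DRBG trapdoor structure, with heavy caveats: the real standard works with points on the NIST P-256 elliptic curve and truncates its output (so a real attacker also has to brute-force a few missing bits), while in this toy, ordinary modular exponentiation stands in for elliptic curve point multiplication, since the trapdoor algebra is the same. All parameters are invented for illustration.

    # Toy sketch of the Dual_EC_DRBG-style trapdoor (NOT the real algorithm).
    # Modular exponentiation stands in for elliptic curve point multiplication;
    # all parameters below are invented for illustration.

    p = 2**13 - 1          # a small prime modulus (the real thing uses P-256)
    Q = 3                  # public generator, analogous to the standard's point Q
    e = 1234               # the designer's secret: the discrete log of P base Q
    P = pow(Q, e, p)       # the other public constant, published with no explanation

    def dual_ec_step(state):
        """One round: emit an output and advance the internal state
        (toy analogue of r = x(s*Q), s' = x(s*P) in the real generator)."""
        output = pow(Q, state, p)      # what the user sees (the real DRBG truncates this)
        new_state = pow(P, state, p)   # hidden internal state update
        return output, new_state

    # A user seeds the generator and draws two "random" outputs.
    state = 4321
    out1, state = dual_ec_step(state)
    out2, state = dual_ec_step(state)

    # The attacker sees only out1, but knows the secret e with P = Q^e.
    # Since out1 = Q^s, raising it to e gives (Q^s)^e = (Q^e)^s = P^s,
    # which is exactly the next internal state.
    recovered_state = pow(out1, e, p)
    predicted, _ = dual_ec_step(recovered_state)
    print(predicted == out2)           # True: all future output is now predictable

The point is that whoever chose the constant P as a known power of Q (in the real standard, a known multiple of the point Q) can recover the generator’s internal state from a single output and predict everything that follows, while to everyone else P and Q look like two arbitrary, innocent constants.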

The NSA has been pushing this kind of cryptography for years (see here), and it seems unlikely that either they or the NIST will make public the details of which elliptic curve algorithms have been compromised and how (presumably the NIST people don’t know the details but do know who at the NSA does). How the security community and US technology companies deal with this mess will be interesting to follow; good sources of information are the blogs of Bruce Schneier and Matthew Green (the latter recently experienced a short-lived fit of idiocy by Johns Hopkins administrators).

The mathematics being used here involves some very non-trivial number theory, and it’s an interesting question to ask how much more the NSA knows about this than the rest of the math community. Scott Aaronson has an excellent posting here about the theoretical computational complexity aspects, which he initially ended with advice from Bruce Schneier: “Trust the math.” He later updated the posting saying that after hearing from experts he had changed his mind a bit, and now realized there were more subtle ways in which the NSA could have made number-theoretic advances that could give them unexpected capabilities (beyond the back doors inserted via the NIST).

Evidently the NSA spends about $440 million/year on cryptography research, about twice the total amount spent by the NSF on all forms of mathematics research. How much they’re getting for their money, and how deeply the mathematics research community is involved, are interesting questions. Charles Seife, who worked for the NSA when he was a math major at Princeton, has a recent piece in Slate that asks: Mathematicians, why are you not speaking out? It raises questions that deserve a lot more attention from the math community than they have gotten so far.

Knowledgeable comments about this are welcome; those with political rants are encouraged to find somewhere else. There’s a good piece on this at Slashdot.

Posted in Uncategorized | 30 Comments

Perimeter Institute and the crisis in modern physics

Maclean’s has been publishing a very nice series of articles about Perimeter Institute by Paul Wells. These include one about Jacob Barnett, a 15-year-old who is now studying there in a master’s-level graduate program (Perimeter Scholars International). Another piece, about other students in the program, is here. It discusses one somehow oddly familiar story, that of a “young man with dark hair…[who] seems too cool for school,” born in Iran but educated in Canada, on his way to a promising career in particle theory: Nima Afkhami-Jeddi. There’s also yet another piece, with a wonderful description of the bistro at Perimeter.

In the most scientifically substantive piece, entitled Perimeter Institute and the crisis in modern physics, Wells describes PI director Neil Turok’s welcome speech this year. Here are some quotes from Turok:

Theoretical physics is at a crossroads right now…In a sense we’ve entered a very deep crisis.

You may have heard of some of these models…There’ve been grand unified models, there’ve been super-symmetric models, super-string models, loop quantum gravity models… Well, nature turns out to be simpler than all of these models.

If you ask most theorists working on particle physics, they’re in a state of confusion.

The extensions of the standard model, like grand unified theories, they were supposed to simplify it. But in fact they made it more complicated. The number of parameters in the standard model is about 18. The number in grand unified theories is typically 100. In super-symmetric theories, the minimum is 120. And as you may have heard, string theory seems to predict 10 to the power of 1,000 different possible laws of physics. It’s called the multiverse. It’s the ultimate catastrophe: that theoretical physics has led to this crazy situation where the physicists are utterly confused and seem not to have any predictions at all.

The data just fits so perfectly with Perimeter’s mission. If it had turned out to be complicated and messy — 10 new particles at CERN and all kinds of funny evidence for models of inflation and stuff in the sky — one would have to say the future of theoretical physics does look pretty messy and complicated. Perimeter would be just one of 100 such institutes.

But given that everything turned out to be very simple, yet extremely puzzling — puzzling in its simplicity — it’s just perfect for what Perimeter’s here to do. We have to get people to try to find the new principles that will explain the simplicity

Turok’s perspective on the current situation is great to hear. It’s wonderful to see this kind of admission that the evidence is now in that particle theory has been barking up the wrong tree, coupled with a vigorous position that looking for new principles is where the future lies. My only comment would be that Turok might want to think about bringing more mathematicians to Perimeter, since if physicists are going to look for new principles, they might need some new mathematics.

For another similar take on the current state of theoretical physics as it faces up to the fact that our simplest theories of particle physics and cosmology are working all too well, see Adrian Cho’s Boxed In at Science magazine.

In the US, HEPAP met last week to discuss the Snowmass workshop and the process for going forward with recommendations about the future of HEP. There was a report from the DPF Panel on the Future of High Energy Theory. It said nothing about the intellectual crisis that Turok and others see in the field; the only crisis addressed was the difficult budget situation, which is leading to cuts in grants. The panel recommends that theorists continue to get two full months of summer salary, and argues that “salary caps” limiting the size of these payments should not be lowered.

Update: Physics World has something about this, with the headline Perimeter Institute welcome speech reignites the string wars.

Posted in Uncategorized | 50 Comments

Assorted News

  • This past weekend I was up in Boston and attended quite a few talks at the Gelfand Centennial conference at MIT, in honor of the 100th anniversary of I. M. Gelfand’s birth. Abstracts of the talks are available, but most were blackboard talks and, as far as I could tell, not recorded. I’ve been starting again on my project to learn more number theory, so I found Matt Emerton’s and Akshay Venkatesh’s survey talks especially helpful.
    There was one long afternoon program of recollections of Gelfand and his seminar from a long list of speakers, which went on into the evening banquet. This was recorded, so video will perhaps appear some day (Gindikin’s contribution was on video, available here). Another long afternoon session dealt with Gelfand’s mathematical legacy; again, perhaps at some point video of this will be available.
  • In mathematical news, speakers at next year’s ICM have now been announced, for both the plenary and the various sections. Those interested in tea-leaf reading can consider for themselves what this new information says about who will get a Fields Medal next year. They might also appreciate this.
  • A Fields Medal is worth just 15,000 Canadian dollars. If you can claim some relation to physics, much better to have your friends get to work nominating you for a $3 million fundamental physics prize. Online nominations for 2014 are here, and the news is that the three finalists for the $3 million will be announced this November. The selection committee will consist of the 11 previous theorist winners of the $3 million prize, plus three LHC physicists from the experimental side. The FPP also has some news here about what some of the LHC experimentalist prize winners have done with the money.
  • Historically unparalleled payments to the stars of the field, along with a much grimmer picture for young non-stars, seem to be part of a larger societal pattern. The situation on the theorist side is not news, but Adrian Cho at Science magazine has a story about the extremely ugly job prospects facing young LHC experimentalists, with the title After the LHC, the Deluge.
  • In case you weren’t aware of this, see here for an explanation of why The STEM Crisis is a Myth. One thing in that article I’d never seen before is Alan Greenspan’s explanation of why we need more H-1B visas: the inequality problem in the US is due to overpaid computer programmers, and these plutocrats can be dealt with by importing low-wage labor to take their jobs.
  • Finally, for the latest in multiverse mania, New Scientist has Death by Higgs rids cosmos of space brain threat (and an editorial about how this shows the Higgs is not “boring”). I knew there was no way they could resist Sean Carroll’s new paper dealing with the question Can the Higgs Boson Save Us From the Menace of the Boltzmann Brains? Sean has more about this here, and Jacques Distler has a discussion here which I think accurately reflects the views of physicists outside certain West Coast enclaves:

    Normally, I wouldn’t touch a paper, with the phrase “Boltzmann brains” in the title, with a 10-foot pole. And anyone accosting me, intent on discussing the subject, would normally be treated as one of the walking undead…

    This is plainly nuts.

    I confess that this kind of thing completely mystifies me. Carroll is an intelligent, well-informed, and almost always reliably sensible sort, with a keen devotion to the battle for scientific rationality against the forces of religion and obscurantism. But he likes to pair this with an enthusiasm for pseudo-scientific multiverse wackiness that Distler’s “nuts” describes pretty well. Very weird, and if you want to know why I keep referring to “mania” in this context, this is a good example.

Posted in Multiverse Mania | 42 Comments

SUSY 2013

The big yearly SUSY conference, SUSY 2013, has been going on in Trieste this past week. From the experimentalists, the news is just stronger limits: no hint of SUSY anywhere in the LHC data. From the theorists, the reaction to this news has been pretty consistent: despite what people say, not a problem.

According to John Ellis, everything is fine, with MSSM SUSY preference for a Higgs below 130 GeV vindicated and successful SUSY predictions for the Higgs couplings (that they should be the same as if there were no SUSY). According to Ellis, we just need to be patient, and he has CMSSM fits preferring 2 TeV gluinos.

However, if you look at Savas Dimopoulos’s talk, the MSSM gets a grade of D-. He argues that the LHC has shown us that the answer is the Multiverse, and that split SUSY with its fine-tuning gets a grade of A. The grade inflation in particle physics is pretty dramatic: you can now get an A without your theory having the slightest bit of experimental evidence.

Nima Arkani-Hamed’s talk was about SUSY in 2033, which in his vision will be pretty much the same as SUSY in 2010. Remember all those things the LHC was supposed to find but didn’t? Well, now the argument is that they’re really there, but we will need a 100 TeV collider to see them. If all goes well, in 2033 such a machine will be under construction, and SUSY 2033 could feature all the SUSY 2010 talks retreaded, with 1 TeV gluinos moved up to 10 TeV.

One of Arkani-Hamed’s slides makes me worry that the LHC results have caused him to begin to lose his marbles. He claims that if one doesn’t see new physics like SUSY at the 100 TeV machine,

this would be 100 times more shocking and dramatic than nothing but Higgs at the LHC

[I initially misread and misquoted this, my apologies. The claim was not that no BSM at 100 TeV would be 100 times more shocking than no Higgs at the LHC, but that it would be 100 times more shocking than no BSM at the LHC. So, the usual sort of over-the-top exaggeration, but not crazy.]

Even wilder claims came from Gordon Kane in his talk, where we’re told that particle theorists giving “negative” talks because of the LHC results have:

no knowledge of LHC physics, Higgs physics, supersymmetry, phenomenology, etc.

According to Kane, we have not only seen the tip of the iceberg of a unified string/M-theory, but actually have the whole iceberg. The ingredients are all in place for what he sees as an experience similar to the 3-year period in the 1970s when the Standard Model emerged and was experimentally vindicated.

Tommaso Dorigo points out that there was one SUSY 2013 talk that in his humble opinion was a good candidate for the IgNobel, see here (warning, NSFW).

On a more positive note, at the conference, production of compactified Calabi-Yaus was finally conclusively demonstrated.

Update: Nathaniel Craig has some recent lectures on The State of Supersymmetry after Run I of the LHC. The emphasis is on examining the consequences of the failure of pre-LHC assumptions about SUSY based on simplicity and naturalness. Out of 60 or so pages, only one is devoted to the models favored by Arkani-Hamed and Dimopoulos, and string-theory-based models are not even mentioned.

Posted in Uncategorized | 20 Comments

Two Cultures

There are two workshops going on this week that you can follow on video, getting a good idea of the latest discussions going on at two different ends of the spectrum of particle theory in the US today.

At the KITP in Santa Barbara there’s Black Holes: Complementarity, Fuzz or Fire?. As far as I can tell, what’s being discussed is the black hole information paradox reborn. It all started with Joe Polchinski and others last year arguing that the consensus that AdS/CFT had solved this problem was wrong. See Polchinski’s talk for more of this argument from him.

If thinking about and discussing deep conceptual issues in physics without much in the way of mathematics is your cup of tea, this is for you (and so, I fear, not really for me). As a side benefit you get to argue about science-fiction scenarios of whether or not you’d get incinerated falling into a black hole, while throwing around the latest buzzwords: holography, entanglement, and quantum information. If you like trendy, and you don’t like either deep mathematics or the nuts and bolts of the experimental side of science, it doesn’t get much better than this. One place to follow the latest developments is John Preskill’s Twitter feed.

Over on the other coast, at the opposite intellectual extreme of the field, LHC phenomenologists are meeting at the Simons Center this week at a SUSY, Exotics and Reaction to Confronting Higgs (SEARCH) workshop. They’re discussing exactly those nuts and bolts: the current state of attempts to analyze LHC data for any signs of something other than the Standard Model. Matt Strassler is there, and he is providing summaries of the talks at his blog (see here and here). At this workshop there is still no deep mathematics, but extremely serious engagement with experiment. One thing that’s apparent is that this field of phenomenology has become a much more sober business than a few years ago, pre-LHC and pre-no-evidence-for-SUSY. Back then, workshops like this featured enthusiastic presentations about all the wonderful new particles, forces and dimensions the LHC was likely to find, with one of the big problems under discussion being the “LHC inverse problem” of how people were going to disentangle all the complex new physics the LHC would discover. Things have definitely changed.

One anomaly at the SEARCH workshop was Arkani-Hamed’s talk on naturalness, which started off in a promising way as he said he would give a different talk than his recent ones, discussing various ideas about solving the naturalness problem (ideas that didn’t work, but might be inspirational). An hour later he was deep into the same generalities and historical analogies about naturalness as in other talks, headed into 15 minutes of promotion of anthropics and the multiverse. He ended his trademark 90-minute one-hour talk with a 15-minute or so discussion of a couple of failed ideas about naturalness, and for these I’ll refer you to Matt here.

Arkani-Hamed and others then went into a panel discussion, with Patrick Meade introducing the panelists as having “different specialties, ranging from what we just heard to actually doing calculations and things like this.”

Update: Scott Aaronson now has a blog posting about the KITP workshop here.

Update: A summary of the situation from John Preskill is here.

Posted in Uncategorized | 70 Comments

Belief in multiverse requires exceptional vision

Tom Siegfried at Science News has a new piece about how Belief in multiverse requires exceptional vision that starts off by accusing critics of multiverse mania of basically being ignoramuses who won’t accept the reality of anything they can’t see with their own eyes, like those in the past who didn’t believe in atoms, or superstrings:

If you can’t see it, it doesn’t exist. That’s an old philosophy, one that many scientists swallowed whole. But as Ziva David of NCIS would say, it’s total salami. After all, you can’t see bacteria and viruses, but they can still kill you.

Yet some scientists still invoke that philosophy to deny the scientific status of all sorts of interesting things. Like the theoretical supertiny loops of energy known as superstrings. Or the superhuge collection of parallel universes known as the multiverse.

It’s the same attitude that led some 19th century scientists and philosophers to deny the existence of atoms.

The problem with the multiverse of course is not that you can’t directly observe it, but that there’s no significant evidence of any kind for it: it’s functioning not as a testable scientific explanation, but as an excuse for the failure of ideas about unification via superstring theory. Siegfried makes this very clear, with his argument specifically aimed at those who deny the existence of “supertiny loops of energy known as superstrings”, putting such a denial in the same category as denying the existence of atoms. Those who deny the existence of superstrings don’t do so because they can’t see them, but because there’s no scientific evidence for them and no testable predictions that would provide any.

Siegfried has been part of the string theory hype industry for a long time now, and was very unhappy with my book, which he attacked in the New York Times (see here) as misguided and flat-out wrong for saying string theory made no predictions. According to him, back in 2006:

…string theory does make predictions — the existence of new supersymmetry particles, for instance, and extra dimensions of space beyond the familiar three of ordinary experience. These predictions are testable: evidence for both could be produced at the Large Hadron Collider, which is scheduled to begin operating next year near Geneva.

We now know how that turned out, but instead of LHC results causing Siegfried to become more skeptical, he’s doubling down, with superstring theory now accepted science and the multiverse its intellectual foundation.

The excuse for Siegfried’s piece is the Wilczek article about multiverses that I discussed here, where I emphasized only one part of what Wilczek had to say, the part with warnings. Siegfried ignores that part and, based on Wilczek’s enthusiasm for some multiverse research, takes him as a fellow multiverse maniac and takes his article as a club to beat those without the exceptional vision necessary to believe in superstrings and the multiverse. Besides David Gross, I’m not seeing a lot of prominent theorists standing up to this kind of nonsense, leaving those invested in failed superstring ideology with the road clear to turn fundamental physics into pseudo-science, helped along by writers like Siegfried.

Update: A commenter points to this from Wilczek, noting that his multiverse enthusiasm is considerably more restrained than Siegfried’s.

Update: Ashutosh Jogalekar at The Curious Wavefunction has a similar reaction to the Siegfried piece.

Update: There’s an FQXI podcast up now (see here), with Wilczek discussing the multiverse.

Posted in Multiverse Mania | 73 Comments

Quick Links

  • At HEP blogs you should be reading already, there’s Tommaso Dorigo on 5 sigma (with more promised to come), and Jester on the lack of a definite BSM energy scale. Jester puts his finger on the big problem facing HEP. In the past, new machines could be justified since we could point to new phenomena that pretty much had to turn up in the energy range being opened up by the machine (Ws and Zs at the SPS, the top at the Tevatron, the Higgs at the LHC). Now, though, there’s nothing definite to point to as likely to show up at the energy scale of a plausible next machine. Jester includes a graphic from a recent Savas Dimopoulos talk characterizing the current situation in terms of chickens running around with their heads cut off, which seems about right.
  • The black hole information paradox has been around for nearly forty years, and the story 10 years ago was that it had supposedly been resolved by AdS/CFT and string theory. For the past year or so arguments have been raging about “firewalls” and a version 2.0 of the paradox, which evidently now is not resolved by AdS/CFT and string theory. I couldn’t tell if there was much to this argument, but the fact that there’s a Lubos rant about how it’s all nonsense made me think maybe there really is something to it. As usual though, my interest in quantum gravity questions that have nothing to say about unification is limited. For those with more interest in this, I’ll just point to today’s big article in the New York Times, and next week’s workshop at KITP where the latest iterations will get hashed out. For more on the challenge this argument poses to the idea that AdS/CFT gives a consistent picture of quantum gravity, see this recent talk by Polchinski.
  • For another challenge to orthodoxy from someone at UCSB, Don Marolf has a new preprint out arguing that strings are not needed to understand holography:

    Stringy bulk degrees of freedom are not required and play little role even when they exist.

Posted in Uncategorized | 9 Comments

Never Give Up

Alok Jha had a piece in the Guardian yesterday about the failure to find SUSY. His conclusion, I think, gets the current situation right:

Or, as many physicists are now beginning to think, it could be that the venerable theory is wrong, and we do not, after all, live in a supersymmetric universe.

An interesting aspect of the article is that Jha asks some SUSY enthusiasts about when they will give up if no evidence for SUSY appears:

[Ben] Allanach says he will wait until the LHC has spent a year or so collecting data from its high-energy runs from 2015. And if no particles turn up during that time? “Then what you can say is there’s unlikely to be a discovery of supersymmetry at Cern in the foreseeable future,” he says.

Allanach has been at this for about 20 years, and here’s what he has to say about the prospect of failure:

If the worst happens, and supersymmetry does not show itself at the LHC, Allanach says it will be a wrench to have to go and work on something else. “I’ll feel a sense of loss over the excitement of the discovery. I still feel that excitement and I can imagine it, six months into the running at 14TeV and then some bumps appearing in the data and getting very excited and getting stuck in. It’s the loss of that that would affect me, emotionally.”

John Ellis has been in the SUSY business even longer, for 30 years or so, and he’s not giving up:

Ellis, though confident that he will be vindicated, is philosophical about the potential failure of a theory that he, and thousands of other physicists, have worked on for their entire careers.

“It’s better to have loved and lost than not to have loved at all,” he says. “Obviously we theorists working on supersymmetry are playing for big stakes. We’re talking about dark matter, the origins of mass scales in physics, unifying the fundamental forces. You have to be realistic: if you are playing for big stakes, very possibly you’re not going to win.”

But, just because you’re not going to win, that doesn’t mean you ever have to admit that you lost:

John Ellis, a particle theorist at Cern and King’s College London, has been working on supersymmetry for more than 30 years, and is optimistic that the collider will find the evidence he has been waiting for. But when would he give up? “After you’ve run the LHC for another 10 years or more and explored lots of parameter space and you still haven’t found supersymmetry at that stage, I’ll probably be retired. It’s often said that it’s not theories that die, it’s theorists that die.”

There may be a generational dividing line somewhere in the age distribution of theorists, with those above a certain age likely to make the calculation that, no matter how bad things get for SUSY and string theory unification, it’s better to go to the grave without admitting defeat. The LHC will be in operation until 2030 or so, and you can always start arguing that 100 TeV will be needed to see SUSY (see here), ensuring that giving up won’t ever be necessary except for those now still wet behind the ears.

For another journalist’s take on the state of SUSY, this one Columbia-centric and featuring me as skeptic, see here.

Posted in Uncategorized | 60 Comments

The Next Machine

For the last week or so, US HEP physicists have been meeting in Minneapolis to discuss plans for the future of US HEP. Some of the discussions can be seen by looking through the various slides available here. A few days earlier, Fermilab hosted TLEP13, a workshop to discuss plans for a new very large electron-positron machine. There is a plan in place (the HL-LHC) for upgrading the LHC to higher luminosity, with operations planned until about 2030. Other than this, though, there are no current definite plans for what the next machine at the energy frontier might be. Some of the considerations in play are as follows:

  • The US is pretty much out of the running, with budgets for this kind of research much more likely to get cut than to get the kinds of increases a new energy frontier machine would require. Projects with costs up to around $1 billion could conceivably be financed in coming years, but for the energy frontier, one is likely talking about $10 billion and up.
  • Pre-LHC, attention was focused on prospects for electron-positron linear colliders, specifically the ILC and CLIC projects. The general assumption was that LEP, which reached 209 GeV in 2000, was the last circular electron-positron collider. The problem is that, at fixed radius, synchrotron radiation losses grow as the fourth power of the energy, and LEP was already drawing a sizable fraction of the total power available at Geneva (for rough numbers, see the sketch at the end of this list). Linear accelerators don’t have this problem, but they do have trouble achieving high luminosity, since one is not repeatedly colliding the same stored bunches.

    The hope was that the LHC would discover not just the Higgs, but all sorts of new particles. Once the mass of such new particles was known, ILC or CLIC technology would give a design of an appropriate machine to study them in ways not possible at a proton-proton machine. These hopes have not worked out so far, making it now appear quite unlikely that there are such new particles at ILC/CLIC-accessible energies. It remains possible that the Japanese will decide to fund an ILC project, even without the appealing target of a new particle besides the Higgs to study.

  • The LHC has told us the Higgs mass, making it now possible to consider what sort of electron-positron collider would be optimal for studying the physics of the Higgs, something one might call a “Higgs factory”. It turns out that a center-of-mass energy of about 240 GeV is optimal for Higgs production. This is easily achievable with the ILC, but since it is not that much higher than LEP’s, there is now interest in the possibility of a circular collider as a Higgs factory. There is a proposal called LEP3 (discussed on this blog here) for putting such a collider in the LHC tunnel, but it is unclear whether such a machine could coexist with the LHC, and no one wants to shut down the LHC before a 2030 timescale.
  • Protons are much heavier than electrons, so synchrotron radiation losses are not the problem; the strength of the dipole magnets needed to keep them in a circular orbit is. To get to higher proton-proton collision energies in the same tunnel, one needs higher-strength magnets, with energy scaling linearly with field strength. The LHC magnets are about 8 Tesla, and the current technology limit is about 11 Tesla for appropriate magnets. The possibility of an HE-LHC, operating at 33 TeV with 20 Tesla magnets, is under study, but this technology is still quite a ways off. Again, the time-scale for such a machine would be post-2030.
  • The other way to get to higher proton-proton energies is to build a larger ring, with energy scaling linearly with the size of the ring (for fixed magnet strength). Long-term thinking at CERN now seems to be focusing on the construction of a much larger ring, of size 80-100 km. One could reach 100 TeV energies with either 20 Tesla magnets and an 80 km ring, or 16 Tesla magnets and a 100 km ring (such a machine is being called a VHE-LHC). If such a tunnel were to be built, one could imagine first populating it with an electron-positron collider, a proposal being called TLEP. It would operate at energies up to 350 GeV and would be an ideal machine for precision studies of the Higgs. It could also be operated at very high luminosity at lower energies, significantly improving on the electroweak measurements made at LEP (the claim is that a LEP-size data set could be reproduced with each 15 minutes of running). Optimistic time-lines would have TLEP operating around 2030, replaced by the VHE-LHC in the 2040s.
  • For more about TLEP, see the talks here. The final talk of the TLEP workshop wasn’t about TLEP but was given by Arkani-Hamed on the VHE-LHC (it sounds like maybe he’s not very interested in the Higgs factory idea). He ends with

    EVERY student/post-doc/person with a pulse (esp. under 35) I know is ridiculously excited by even a glimmer of hope for a 100 TeV pp collider. These people don’t suffer from SSC PTSD.

    Looking at the possibilities, I do think TLEP/VHE-LHC looks like the currently most promising route for the future of CERN and HEP (new technology might change this, e.g. a muon collider). Maybe I don’t have a pulse though, since I can’t say that I’m ridiculously excited by just a glimmer of VHE-LHC hope for a time-frame past my life expectancy.

    A 100 km tunnel would be even larger than the planned SSC tunnel (89 km), and one doesn’t have to suffer from SSC post-traumatic stress disorder to worry about whether a project this large can be successfully funded and built (in very rough numbers, I’d guess one is talking about costs on the scale of $20 billion). My knowledge of EU science funding issues is insufficient to have any idea whether money on this scale is a possibility. On the other hand, with the increasing concentration of wealth in the hands of a growing number of multi-billionaires, perhaps this just needs the right rich guy for it to happen.

    Someone is going to have to do a better job than Arkani-Hamed of finding an argument that will sell this to the rest of the scientific community. His main argument is that such a machine would allow us to improve the ultimate LHC “fine-tuning” number of at least 10^-2 to a number like 10^-4, or maybe finally see some SUSY particles. I don’t think this argument is going to get $20 billion: “we thought we’d see all this stuff at the LHC because we were guessing some number we don’t understand was around one. We saw nothing, and it turns out the number is small, no bigger than one in a hundred. Now we’d like to spend $20 billion to see if it’s smaller than one in a hundred, but bigger than one in ten thousand.”
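To put rough numbers behind the scaling arguments in the list above, here is a small Python sketch. The two formulas are standard accelerator physics (electron synchrotron loss per turn goes as E^4 divided by the bending radius; proton beam energy is roughly 0.3 times the dipole field times the bending radius), but the bending radii below are approximate effective values I have filled in for illustration, not design figures.

    # Rough collider scaling: electron synchrotron loss per turn and
    # proton beam energy. Bending radii are approximate effective values.

    C_GAMMA = 8.85e-5  # m/GeV^3, synchrotron radiation constant for electrons

    def loss_per_turn_gev(beam_energy_gev, bend_radius_m):
        """Electron energy loss per turn: U0 = C_gamma * E^4 / rho."""
        return C_GAMMA * beam_energy_gev**4 / bend_radius_m

    def proton_beam_energy_gev(b_field_tesla, bend_radius_m):
        """Proton beam energy set by the dipole field: E ~ 0.3 * B * rho."""
        return 0.3 * b_field_tesla * bend_radius_m

    # LEP2: ~104.5 GeV per beam, bending radius ~3.1 km -> about 3.4 GeV
    # radiated per turn, roughly 3% of the beam energy every revolution.
    print(loss_per_turn_gev(104.5, 3100))

    # The same beam energy in a ~100 km tunnel (bending radius ~10.5 km):
    # the E^4 cost is divided by the larger radius, which is what makes
    # a circular Higgs factory like TLEP thinkable.
    print(loss_per_turn_gev(104.5, 10500))

    # LHC sanity check: 8.33 T dipoles, ~2.8 km bending radius -> ~7 TeV/beam.
    print(proton_beam_energy_gev(8.33, 2800))

    # VHE-LHC: 16 T magnets, ~10.5 km bending radius -> ~50 TeV per beam,
    # i.e. ~100 TeV collisions, matching the numbers quoted above.
    print(proton_beam_energy_gev(16, 10500))

The first two numbers show why LEP hit a wall around 100 GeV per beam and why a bigger tunnel changes the electron-positron arithmetic; the last two recover the LHC’s 7 TeV per beam and the 100 TeV quoted for a VHE-LHC with 16 Tesla magnets in a 100 km ring.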

Posted in Experimental HEP News | 21 Comments