The write-up of Larry McLerran’s summary talk at Quark Matter 2006 has now appeared. This talk created a bit of a stir since McLerran was rather critical of the way string theorists have been overhyping the application of string theory to heavy-ion collisions.
In the last section of his paper, McLerran explains the main problem: N=4 supersymmetric Yang-Mills is a quite different theory from QCD. He lists the ways in which the two theories differ, then goes on to write:
Even in lowest order strong coupling computations it is very speculative to make relationships between this theory and QCD, because of the above. It is much more difficult to relate non-leading computations to QCD… The AdS/CFT correspondence is probably best thought of as a discovery tool with limited resolving power. An example is the eta/s computation. The discovery of the bound on eta/s could be argued to be verified by an independent argument, as a consequence of the deBroglie wavelength of particles becoming of the order of mean free paths. It is a theoretical discovery but its direct applicability to heavy ion collisions remains to be shown.
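For reference (and assuming, as the context makes fairly clear, that the bound McLerran has in mind is the Kovtun–Son–Starinets bound obtained from the AdS/CFT computation), it states that the ratio of shear viscosity to entropy density satisfies
\[
\frac{\eta}{s} \;\geq\; \frac{\hbar}{4\pi k_B},
\]
with equality at leading order in the strong-coupling limit of N=4 super Yang-Mills.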
McLerran goes on to make a more general and positive point about this situation:
The advocates of the AdS/CFT correspondence are shameless enthusiasts, and this is not a bad thing. Any theoretical physicist who is not, is surely in the wrong field. Such enthusiasm will hopefully be balanced by commensurate skepticism.
I think he’s got it about right: shameless enthusiasm has a legitimate place in science (as long as it’s not too shameless), but it needs to be counterbalanced by an equal degree of skeptical thinking. If shameless enthusiasts are going to hawk their wares in public, the public needs to hear an equal amount of informed skepticism.
Another shamelessly enthusiastic string theorist, Barton Zwiebach, has been giving a series of promotional lectures at CERN entitled String Theory For Pedestrians, which have been covered over at the Resonaances blog.
Zwiebach’s lectures are on-line (both transparencies and video), and include much shameless enthusiasm for the claims about AdS/CFT and heavy-ion physics that McLerran discusses. His last talk includes similar shameless enthusiasm for studying the Landscape and trying to get particle physics out of it. He describes intersecting D-brane models, making much of the fact that, after many years of effort, people finally managed to construct contrived (his language, not mine; see page 346 of his undergraduate textbook) models that reproduce the Standard Model gauge groups and choices of particle representations. Besides the highly contrived nature of these models, one problem is that it’s not even clear one wants to reproduce the SM particle structure exactly. Ideally one would like to get a slightly different structure, predicting new particles that would become visible at the higher energies soon to be available at the LHC. Zwiebach does admit that these contrived constructions don’t even begin to deal with supersymmetry breaking and particle masses, leaving all particles massless.
He describes himself as not at all pessimistic about the problems created by the Landscape, with the possibility that there are vast numbers of models that agree to within experimental accuracy with everything we can measure, thus making it unclear how to predict anything, as only “somewhat disappointing”. He expects that, with input from the LHC and Cosmology, within 10 years we’ll have “fully realistic” unified string theory models of particle physics.
The video of his last talk ran out in the middle, just as he was starting to denounce my book and Lee Smolin’s, saying that he had to discuss LQG for “sociological” reasons, making clear that he thought there wasn’t a scientific reason to talk about it. I can’t tell how the talk ended; the blogger at Resonaances makes a mysterious comment about honey…
Finally, it seems that tomorrow across town at Rockefeller University, Dorian Devins will be moderating a discussion of Beyond the Facts in Sciences: Theory, Speculation, Hyperbole, Distortion. It looks like the main topic is shameless enthusiasm amongst life sciences researchers, with one of the panelists the philosopher Harry Frankfurt, author of the recent best-selling book with a title that many newspapers refused to print.
Update: Lubos brings us the news that he’s sure the video of the Zwiebach lectures was “cut off by whackos” who wanted to suppress Zwiebach’s explanation of what is wrong with LQG.
Update: CERN has put up the remaining few minutes of the Zwiebach video.
Urs and Robert,
You both insist on discussing features of string theory backgrounds that are known not to correspond to the real world. The real world is not N=8 supersymmetric. It’s highly misleading when you promote string theory by promoting aspects of it that are known to be completely inconsistent with its use as a unified theory that corresponds to the real world.
Robert,
right. I vaguely indicated that there is more to the effective string background action than just being a form of GR coupled naturally to various sorts of matter. As you indicate, it is not just any coupling of dilaton, p-forms, spinors, etc to gravity, but a very particular one.
We are just answering a technical question:
“In which sense is a point in the landscape like a solution of Einstein’s equations.”
Answer: “From the point of view of the perturbative formalism: in every sense.”
Clarifying this technical question is independent of evaluating its implications for phenomenological physics.
I think that in this thread neither Robert, Aaron, nor I have done any promotion. I need to promote string theory no more than I need to promote, say, the tricategory 3Grp.
In order to clarify facts (facts like: “what is a point in the landscape, technically“) it helps to pause for a while with promoting this or that.
Maybe Peter is right and there is no point in discussing technical facts about theories. Maybe it’s better to just make claims and not discuss whether the details of those claims are right or wrong.
But still, just for the record, since there still seems to be some confusion: You have to differentiate between the theory (i.e. the equations of motion) and the state. All we are saying is that there are certain parts of the theory that are considered to be well understood (here: corrections to the classical equations of motion) while Peter keeps saying “but you don’t know the state” which is true but irrelevant for the first question as we try to argue. In addition, I see no reason why the world cannot be in a state with completely broken supersymmetry of a theory which has N=8 susy.
All the trouble with (the absence of practically performable) experimental checks of string theory is that low energy questions generically depend crucially on the state (which, as I said, is unknown), while high energy questions (e.g. Regge scaling, higher curvature corrections) tend to be less dependent on the vacuum you are in.
Robert,
I never said there is no point in discussing technical facts about theories. My only point is that if you are going to make claims about how the equations of string theory are “natural” and much like the equations of GR, you need to make clear if you are talking about some approximate version of string theory that can be rigorously shown to have nothing to do with the real world, or about a version of string theory that is a candidate for a unified theory.
Urs did helpfully reference a precise source for the equations he was discussing as “natural”. I pointed out to him that those equations contain explicitly an undefined term the authors call “S_loc” where they put in ad hoc contributions which are necessary to stabilize moduli, and which I don’t believe can reasonably be called “natural”.
As for your argument that string theory predictions at high energy don’t depend on the state, I don’t buy it. There seems to be an ideology among string theorists that the vacuum state, which governs what we can actually measure, is a non-perturbative phenomenon (and thus we don’t know what it is), but that higher energy states (at energies we have no hope of probing) can be studied perturbatively. I see no reason for this to be true about the universe other than wishful thinking.
“There seems to be an ideology among string theorists that the vacuum state, which governs what we can actually measure, is a non-perturbative phenomenon (and thus we don’t know what it is), but that higher energy states (at energies we have no hope of probing) can be studied perturbatively. I see no reason for this to be true about the universe other than wishful thinking.”
Why not? It’s true of plenty of field theories. A lot of the nonperturbative stuff invoked for moduli stabilization is really just ordinary field theory (e.g. gluino condensation).
onymous,
It’s not true for theories like QCD where the vacuum state is a truly non-perturbative phenomenon, and you can’t understand the higher energy states just using perturbation theory.
There are other arguments for why you won’t necessarily see 10d weakly-coupled string behavior (Regge trajectories) at high energy. What about M-theory? What about black holes?
The problem is that, not knowing what non-perturbative string theory is, string theorists are invoking it when perturbative string theory gives them something that disagrees with observation, then saying it can be ignored when they claim they can make predictions (safe from ever being confronted with experiment) at high energy.
“It’s not true for theories like QCD where the vacuum state is a truly non-perturbative phenomenon, and you can’t understand the higher energy states just using perturbation theory.”
The high energy behavior of QCD isn’t given by perturbation theory?
Really?
On the question of whether Lorentz invariance is necessary for string theory, here’s what Smolin says in Ch 14 of The Trouble with Physics, where he talks about the possible breakdown of special relativity:
“String theory predicts that no matter how distant their sources are from one another, photons of different frequencies travel at the same speed. As we have seen, string theory does not make many predictions, but this is one; in fact, it’s the only prediction of string theory that can be tested by present technology… [[Interesting that Smolin wrote this prior to the recent announcements about the possibility of testing string theory]] …
“Would string theory survive…? Certainly all known string theories would be proved false, since they depend so heavily on special relativity holding. But might there still be a version of string theory that could be consistent with [the breakdown of Lorentz invariance]? Several string theorists have insisted to me that even if special relativity were seen to break down or be modified, there might someday be invented a form of string theory that could accommodate whatever the experiments see… There are a lot of ways to change the background of string theory so that there is a preferred state of rest… What puzzles me is why string theorists think this helps their cause… To me it’s more an indication that string theory is unable to make any predictions…”
Well, not being a specialist in this field, I must say that while I appreciate Smolin at least addressing the question directly (which hardly anyone else seems willing to do), the answer is slightly odd, because Smolin has worked in the area of string theory, and yet his only way of even trying to answer this fundamental question is to refer to what some unnamed string theorists have insisted to him. The way he phrases it, it seems clear that he himself doesn’t feel qualified to answer this question, although he does add the remark that “there are a lot of ways to change the background so there is a preferred frame”. If this is true, then why does he need to rely on the unnamed string theorists to “insist” to him that it is possible?
I suppose this just illustrates that one can’t ever really falsify a research program, since a program can always change course and accommodate new information. Theories may be falsified, but research programs just fade away if and when people lose interest. Still, I find it troubling that even respected specialists in string theory don’t know such a fundamental thing about their theory as whether or not it must be Lorentz invariant.
One other comment: Smolin says the known versions of string theory ASSUME Lorentz invariance. He does not say they predict it, nor have I ever heard anyone else claim that string theory predicts Lorentz invariance. We don’t start with a tabula rasa and miraculously deduce Lorentz invariance from the string principle (or some such thing). Rather, Lorentz invariance is simply assumed, as a postulate. Hopefully it’s clear that the possibility of falsifying a postulate or assumption of a theory is quite different from falsifying a prediction of a theory. Likewise, when one says “string theory makes no falsifiable predictions” this is quite different from saying “the known string theories don’t assume any physical principles that can be falsified”. The former is the charge of the critics of string theory, whereas Distler et al are really only addressing the latter, which is at best only tangentially relevant to string theory.
Anon,
Yes, really. The spectrum of states in QCD above the ground state consists of protons, neutrons, pions, etc., not free quarks and gluons. This spectrum is the analog of the Regge trajectories people claim that string theory predicts.
Asymptotic freedom and the ability to use perturbation theory to study scattering in certain high energy regimes is a different issue.
“The spectrum of states in QCD above the ground state consists of protons, neutrons, pions, etc., not free quarks and gluons. This spectrum is the analog of the Regge trajectories people claim that string theory predicts.”
In fact, the spectrum of QCD does at least approximately consist of Regge trajectories, if you go to large N_c. But real-world QCD has 3 colors, and violations of quark-hadron duality die off: at large s, rho(s) ~ constant. So in real-world QCD, perturbation theory does work at least for many quantities at large momentum, whether Euclidean or Minkowski.
(This is not to say that perturbation theory is any good for things like the total proton-proton cross section at high energy, of course. But you said the spectrum.)
anon,
I was giving QCD as an example of a QFT where non-perturbative QFT determines both the ground state and the states above the ground state. Perturbative QCD just doesn’t work at all here, for either the ground state, or for the states above the ground state (these states cannot be understood in terms of gluons and free quarks).
Whether QCD states can be understood in terms of perturbative string theory is a completely different issue. The problem there is not that you don’t know the ground state because it is determined non-perturbatively, the problem is that you don’t know what the string theory is.
By “higher energy states”, I assumed you meant “states well above the QCD scale”, since that’s what the other Anon seemed to be talking about. And perturbation theory is perfectly good there, in a precise sense.
The states in QCD are, obviously, created by gauge-invariant operators acting on the vacuum. A two-point function of gauge-invariant operators can be expressed in terms of a spectral function rho(s). For s >> Lambda_QCD, perturbation theory calculates this up to small corrections by analytically continuing the Euclidean result and taking the discontinuity across the cut.
Of course resonances appear at small s and are not calculable in perturbation theory. But large s is perturbative. So it is not true that perturbative QCD doesn’t tell you about states above the ground state, if you go to sufficiently high invariant mass.
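To spell out the standard expression anon is invoking (this is textbook material, not anything specific to the thread): the two-point function of a gauge-invariant operator can be written in the Källén–Lehmann form
\[
\Pi(q^2) \;=\; \int_0^\infty ds\, \frac{\rho(s)}{s - q^2 - i\epsilon}
\]
(up to possible subtraction terms), and for \(\sqrt{s} \gg \Lambda_{\rm QCD}\) the spectral function \(\rho(s)\) is computable in perturbation theory up to power-suppressed corrections.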
anon,
I don’t think
“the vacuum state is a truly non-perturbative phenomenon, and you can’t understand the higher energy states just using perturbation theory.”
or
“The spectrum of states in QCD above the ground state consists of protons, neutrons, pions”
was in any way ambiguous. I was talking about the states of the theory above the ground state, precisely the analog of the low-lying states in Regge trajectories that string theorists always say are a prediction of string theory, despite the non-perturbative nature of its ground state. The question of what happens at very large invariant mass is a quite different one. I was obviously not making claims that there is no kinematical regime in which perturbation theory is useful in QCD, but was referring explicitly to a regime where it clearly isn’t.
Again, I seem to never be able to get string theorists to address this point: if the ground state is determined by some unknown non-perturbative string theory mechanism, why do string theorists often claim that they can reliably use perturbative string theory to predict the energies of the low-lying states (the ones lying on a Regge trajectory) above the ground state?
Strong feeling of deja vu, but let me try… if the string coupling is zero there are a bunch of massless modes (moduli) and a bunch of massive modes lying on Regge trajectories, giving characteristic soft scattering, etc. This is the picture at zeroth order in the string coupling.
Working at small but non-zero coupling and including the tiny corrections due to instantons, gluino condensation, small fluxes, etc., all those masses shift a tiny bit. This does not change the picture much, except that the would-be massless modes are now massive, with a tiny mass compared to the string scale.
The analogy to QCD is misleading since the coupling is small, and consequently all non-perturbative effects are tiny. The vacuum and all massive modes are still approximately the perturbative ones.
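For concreteness, the weak-coupling picture Moshe is describing is just the standard leading-order string spectrum: the excited levels of, say, an open superstring have masses
\[
\alpha' M_n^2 \;=\; n, \qquad n = 0, 1, 2, \dots,
\]
so the highest-spin state at each level lies on a linear Regge trajectory \(J \simeq \alpha' M^2 + \mathrm{const.}\), and at small string coupling \(g_s\) non-perturbative corrections to these masses are suppressed by factors like \(e^{-1/g_s}\).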
Thanks Moshe,
One comment is just that your argument assumes very small string coupling; for generic values of the string coupling there’s no reason to believe that Regge trajectories etc. persist.
But I’d really like to understand exactly what the status of the KKLT backgrounds is. Besides the effects you mention, they need to invoke the introduction of a specific set of anti-D3 branes. With these, is everything really under control? Gross seems to be fond of claiming (see his remarks in Toronto a couple weeks ago) that since one doesn’t understand non-perturbative string theory, there is no reason to believe that these constructions give you consistent vacuum states, especially in a cosmological context. Is he wrong? If he’s right, and non-perturbative effects can destabilize these states, how do you know they won’t destabilize higher states?
I’m willing to accept that it’s possible that string theorists can come up with a self-consistent scenario involving vacuum states whose properties are under control, modulo some assumptions about the true non-perturbative theory, and in such a scenario you get Regge trajectories, etc. But even so, this is just going to then be a small corner of the landscape. Generically you’re not going to have perturbative control, so can’t claim that “if we just had a big enough accelerator, string theory predicts we’d see Regge trajectories”.
I am not sure what Gross referred to; I would have to look at the talk. It is true that one is always limited to constructions with calculational control. In such perturbative constructions Regge behavior is a robust feature. Even if there are non-perturbative corrections invalidating the claims about the vacuum, the Regge behavior would still be there as long as the coupling is small.
Moshe,
I’m not sure I’m convinced that if non-perturbative corrections make your vacuum state no longer a vacuum state they won’t have significant effects on higher energy states. And, if unknown non-perturbative effects need to be understood to find the true vacuum state, in general I see no reason why they might not affect the Regge behavior of higher states.
Gross’s comments were in the context of a public debate over anthropics. He said what he has said in many other talks: no reason to accept the landscape since we don’t know what string theory really is, suspect that space and time are emergent phenomena but don’t know what they really are, don’t understand time-dependent backgrounds necessary for a consistent cosmology.
You may want to listen to Nima Arkani-Hamed’s Perimeter seminar of Feb 8, where the claim is that given the small positive cosmological constant and the neutrino masses where they are, and the Standard Model in 3+1 dimensions, there is a landscape of vacua with one dimension compactified. I.e., given the Standard Model Lagrangian, you cannot compute the ground state.
Arun,
I did watch the beginning of that talk, but didn’t have time to watch the whole thing. Not clear to me what the point was, perhaps that you can ruin the Standard Model and make it as useless as string theory by replacing our 3 large flat spatial dimensions by various compactifications. Just seems like a good demonstration of why you shouldn’t study these kinds of backgrounds.
Peter,
My initial reaction was the same as yours. But!
If the arguments hold up, it means that given a 3+1D QFT + General Relativity + a small positive cosmological constant + massless/light degrees of freedom like those of the Standard Model (i.e., the photon, graviton, and light neutrinos), then that model has an infinite number of compactified vacua, apart from the vacuum with 3 large flat spatial dimensions. I presume that the only reason we didn’t see this before is that we didn’t look; we assumed that the 3 large flat spatial dimensions vacuum is the natural and only vacuum solution.
I also gathered that this is not a generic problem with QFT + GR, but rather a problem with models that are like the Standard Model at low energy and that have a small positive cosmological constant. In other words, we are fine-tuned to have a landscape in the Standard Model!
Without any reference to string theory, the Standard Model + GR has a landscape problem. Of course, the anthropic solution is simple in this case, it involves only one choice, we pick the vacuum with the large flat spatial dimensions, because that is the one we live in, and that is that. But unless there is a physical/mathematical argument that rules out the compactified vacua, philosophically, we are not very far from the string theory landscape issue.
-Arun
‘… the Standard Model + GR has a landscape problem. Of course, the anthropic solution is simple in this case, it involves only one choice, we pick the vacuum with the large flat spatial dimensions, because that is the one we live in, and that is that. But unless there is a physical/mathematical argument that rules out the compactified vacua, philosophically, we are not very far from the string theory landscape issue.’
I disagree; the case of knowing the right solution from observations (three flat spatial dimensions) is not an anthropic selection. That’s observational input, not anthropic reasoning.
Anthropic solutions are entirely different. Suppose a metastable vacuum which produces the SM is discovered in the string theory landscape. You would defend that solution by the anthropic argument, not because you have directly observed the Calabi-Yau manifold on the Planck scale and determined the right solution by direct observation.
Anthropic solutions are based on indirect reasoning, not on directly observing the type of dimensions. I.e., Hoyle predicted the cross-section for triple-alpha fusion to create carbon because he ruled out other paths and knew carbon exists, not because he observed the triple-alpha process in the lab.
There’s no similarity between the anthropic selection of an SM-like vacuum from the string theory landscape and the landscape of GR or the SM, where you aren’t using the anthropic principle to select a solution; you are using direct empirical evidence about the dimensions etc. to choose a solution.
“I disagree; the case of knowing the right solution from observations (three flat spatial dimensions) is not an anthropic selection. That’s observational input, not anthropic reasoning.”
To me the “observational input” argument sounds just like the anthropic argument. Here is why: We know the right solution from observations because we can make those observations in the first place. If the SM vacuum were AdS_3XS_1 instead of dS_4, we would not be here to make those observations and use the three flat spatial dimensions as the input.
In other words, the “observational input” argument still DOES NOT EXPLAIN “Why?” the SM vacuum is dS_4 and not AdS_3XS_1. “Observational input” must have an explanation, don’t you think?
According to Nima, the SM Lagrangian plus gravity with a tiny CC has a huge landscape of vacua. Finding a selection mechanism, other than “observational input”, such that the dS_4 vacuum is selected by dynamics would be a much more compelling answer.
I agree with “Q” that the GR and SM solutions are based on direct empirical observation, not on indirect anthropic reasoning. On the other hand, I think Arun’s comment highlights the need for one other ingredient, namely, Occam’s razor. We can’t claim that if there were compactified dimensions we would be able to directly observe them. If they existed, we would only be able to infer their existence indirectly, by the effect(s) they have on observable physics. Now, we have not, so far, observed any physical phenomena that imply the existence of compactified dimensions, but it doesn’t follow that we can confidently extrapolate this to make predictions about new (previously unexamined) phenomena. I guess what Arun is saying is that, lacking some kind of generic argument ruling out compactified dimensions, the ability of GR/SM to make predictions outside our existing base of experience is placed in doubt. I understand that argument, but I don’t think it carries much weight against GR/SM, because they adhere fairly closely to Occam’s razor, i.e., they propose the simplest vacuum consistent with observation. Granted, we could postulate more complicated vacua with unseen dimensions, but we choose the simplest one that works. This choice is fairly unambiguous, and although it is vulnerable to falsification (i.e., we can’t be absolutely certain that the results extrapolate into new territory), this falsifiability is a good thing. Each time GR and/or SM avoid falsification in a new observation, it is justifiably counted as a success for those theories.
In contrast, if we have a theory that is intrinsically committed to the existence of many unobserved compactified dimensions, but which provides no unambiguous criteria for specifying those dimensions, then we are in a much more problematical position, because we are precluded from applying Occam’s razor to simplify down to a falsifiable theory. We face a huge number of highly complex theories, with no clear way to choose between them. This is especially disappointing considering that the original excitement surrounding string theory was that it seemed to be leading toward a unique mathematically consistent theory. I know some people still hold out hope that a unique solution may emerge (a position that Susskind has characterized as “faith-based”), but as it stands today, it seems that the most salient attribute of string theory is its NON-uniqueness.
Arun,
I still think that trying to claim that the most successful physical theory ever developed, which makes an absurdly large number of very non-trivial predictions which all agree completely with experiment, is on the same footing as a theory which predicts nothing is just silly sophistry.
I’ve made this point repeatedly, but here it is again in this context: the point is that the SM vacuum state is exceedingly simple and symmetric. It is not some random background of huge complexity based on choosing a complicated compactification, fluxes, D-branes, etc., etc. It is this simplicity that makes the theory predictive. There would be a compelling string model worth taking seriously if a simple choice of background led to something that didn’t disagree with what we know about physics. The problem with string theory is that getting it to look like the real world involves adding in a Rube Goldberg-like array of complicated mechanisms designed to evade the fact that simple mechanisms don’t work. This is the hallmark of what happens when you start with a wrong theory and try and make it agree with experiment.
Sure, you can ruin the predictivity of the Standard Model and make it not agree with experiment by putting it in a complicated background. So what???
“I don’t think it carries much weight against GR/SM, because they adhere fairly closely to Occam’s razor, i.e., they propose the simplest vacuum consistent with observation.”
SM + GR do not propose/select the vacuum. The observational fact that we live in dS_4 is used as input but not explained. To me, the Occam’s razor argument would select a 4D Minkowski vacuum as the simplest, not a dS with a tiny cosmological constant – indeed, a very unnatural choice!
“Sure, you can ruin the predictivity of the Standard Model and make it not agree with experiment by putting it in a complicated background. So what???”
Peter, what exactly do you mean by the “complicated” background?
Is 4d Minkowski less complicated than dS_4? So, why do we live in dS_4 and not in a Minkowski vacuum? The “absurdly large number of very non-trivial predictions which all agree completely with experiment” would be “ruined” if our vacuum were AdS_3XS_1 instead of dS_4, according to Nima. The point is that there is no dynamical mechanism in the SM + GR which would select the vacuum. So, the only option so far is to use the “observational”/anthropic argument to “explain” it.
Correction:
The “absurdly large number of very non-trivial predictions which all agree completely with experiment” would NOT be “ruined” if our vacuum were AdS_3XS_1 instead of dS_4, according to Nima.
student,
As I wrote, I didn’t watch all of Arkani-Hamed’s talk, and I still have no idea whether he had a more serious point to make than the silly one about “the standard model has a landscape problem just like string theory” which is all I was getting from Arun’s summary.
deSitter space is not that much less simple than Minkowski space (it’s also a homogeneous space).
Peter,
The statement that “the standard model has a landscape problem” is not silly. Nima’s discovery that the SM is just as happy to live in AdS_3XS_1 as opposed to dS_4 with a very tiny CC begs for an explanation as to why we ended up in the latter. To me, a 4d Minkowski vacuum seems to be a more “simple”/”natural” outcome than either of the two choices above. This “SM landscape” problem may be in the same category as the “string landscape” problem. It is the question of identifying a correct vacuum selection mechanism.
There is nothing “silly” about this question. If one can find such a mechanism for the SM+GR, the same mechanism may apply for the string landscape.
student,
The “string landscape problem” is just caused by the fact that people have taken an idea that doesn’t work (string theory based unification), added huge amounts of structure to it to evade the fact that it doesn’t work, and now are trying to find some way out of this mess other than the obvious one (admit this doesn’t work and try something else).
There certainly are specific choices that go into the standard model that distinguish it from other equally simple theories: why the specific dS cosmological background we are in (rather than AdS3xS1 or whatever)? Why SU(3)xSU(2)xU(1)? Etc. If you can solve any of these problems you’ll be very famous. And not because you’ve figured out something that may help understand the string theory landscape…
student,
I don’t understand what you mean when you say GR does not propose a vacuum (specifically 3+1 dimensional spacetime, with a cosmological constant very close to zero). In alleged contra-distinction to this, you assert that “The observational fact that we live in dS_4 is used as input but not explained.” Well, that’s true, but it doesn’t conflict with what I said. It’s just repeating what I said. To “propose” is not to “explain”. And we’re in agreement that this proposal is based on observational facts. Indeed this is the whole point: the attributes of a theory like GR can be pinned down rather fully by observable facts. GR doesn’t contain a large amount of free structure that is unconstrained by observable facts. I think Peter’s point is that, in contrast, string theory has taken on a great deal of structure that is unconstrained by observable facts. This is what makes it problematic.
You wrote:
“To me, the Occam’s razor argument would select a 4D Minkowski vacuum as the simplest, not a dS with a tiny cosmological constant – indeed, a very unnatural choice!”
Well, if 4D Minkowski spacetime were sufficient to save all the phenomena, I imagine it would be the preferred model. But it is pretty much impossible to reconcile the equivalence principle with mass-energy equivalence in the context of 4D Minkowski spacetime, so these observationally-based principles motivate a more general spacetime, and this generalization allows for the possibility of a non-zero cosmological constant. The economy, logical coherence, and connection to observable fact of general relativity is a far cry from the extravagant constructions of string theory.
Having said that, there’s no denying that general relativity is an incomplete theory, as Einstein himself often emphasized. The stress-energy tensor is really (at best) just a place holder for some more meaningful representation of the non-gravitational constituents. Also, being a differential theory, GR obviously requires the specification of boundary conditions of some sort, and the imposition of energy conditions, etc., things the theory itself doesn’t seem to constrain. So nobody would argue that GR is the last word. But I do think it presents a striking contrast to string theory.
“The ‘string landscape problem’ is just caused by the fact that people have taken an idea that doesn’t work (string theory based unification), added huge amounts of structure to it to evade the fact that it doesn’t work”
The existence or nonexistence of a large number of vacua of a theory is a property of the theory, not of the motivations of those studying it. If I understand correctly, Nima claims that the existence of a large number of vacua is a property that the SM+GR shares with string theory.
If so, then you can’t wish the landscape away, by blaming it on the nasty string theorists.
‘The point is that there is no dynamical mechanism in the SM + GR which would select the vacuum. So, the only option so far is to use the “observational”/anthropic argument to “explain” it.’ – Student, February 11th, 2007 at 2:54 pm
Observational facts are just not anthropic arguments, and shouldn’t be confused with them. The string landscape effort is based on neither observational facts nor anthropic predictions. The anthropic argument says that one set of observed facts (eg, the existence of carbon) predicts something else (eg, the rate of fusion of three alpha particles into carbon) which can be checked and confirmed in the lab.
For example, Hoyle used the anthropic principle to correctly predict a nuclear energy level at 7.6 MeV, which was then confirmed!
” “Most anthropic insights are made with the benefit of hindsight. We look at the Universe, notice that it is close to flat, and say, ‘Oh yes, of course, it must be that way, or we wouldn’t be here to notice it.’ But Hoyle’s prediction is … is a genuine scientific prediction, tested and confirmed by SUBSEQUENT experiments. Hoyle said, in effect, ‘since we exist, then carbon must have an energy level at 7.6 MeV.’ THEN the experiments were carried out and the energy level was measured. As far as we know, this is the only genuine anthropic principle prediction…”
– http://www.novanotes.com/jan2003/anthro1.htm
Even if a solution to the string theory landscape in ad hoc agreement with the SM were found, that would not be an anthropic prediction. To get an anthropic prediction, you would have to find a solution to the landscape which is not merely consistent with the SM, but which says something further that can be checked in the lab.
With as many as 10^500 solutions, if any do include the SM, there would be a wide spectrum of nearby solutions which also do that, but which have variations regarding the extra physics they include that can be checked. So the problem will persist that the theory is sufficiently vague (due to having so many slightly different solutions) that it won’t be falsifiable by the method of Hoyle’s successful anthropic prediction.
I think I appreciate better Peter’s and Amos’ points, after some thought. The SM has a few dozen things put in by hand; if we have to add another, a vacuum selection principle, no big deal: we still have a theory from which we get a lot more back than we put in. We have a predictive theory.
If the anthropic principle led to a theory from which we get out more than we put in, I don’t think anyone would be complaining very loudly. However, with the landscape, to put it dramatically, the sum total of human knowledge is insufficient to determine the vacuum.
Anon,
“If so, then you can’t wish the landscape away, by blaming it on the nasty string theorists.”
I don’t wish the landscape away, I’m agnostic about it. Gross thinks the calculations that lead to it are not reliable, other string theorists think they are. My point is that if you believe that you have reliable calculations that show that your theory has extremely complicated “backgrounds” in it, and that the simple ones don’t look at all like the real world, so that you have to go to very complicated ones just to evade contradiction with experiment, then it means your theory is wrong, and scientific ethics say you should abandon it.
This is the landscape problem, and any claim that the SM has the same problem is just sophistry.
“My point is that if you believe that you have reliable calculations that show that your theory has extremely complicated “backgrounds” in it…”
What does “extremely complicated” have to do with it? (“complicated”, being very much in the eyes of the beholder)
I thought the “landscape problem” was that the theory has a large number of vacua. Period. No qualifiers about how “complicated” they are.
Anon,
“I thought the “landscape problem” was that the theory has a large number of vacua. Period. No qualifiers about how “complicated” they are.”
The number of vacua is not really the point, the problem is the nature of the vacua combined with their number. If the theory had one free parameter and you could calculate everything in terms of this parameter, you could say it had “an infinite number of vacua”, but this would not be a problem. You just need one measurement to fix this and get a predictive theory. If there were a small number of complicated vacua, you could separately calculate things in each vacuum, compare to experiment, and see if one matched.
It is the combination of the complexity and the large number of vacua that both allows the string theory landscape to evade being confronted with experiment, and makes the whole setup non-predictive.
And no, “complicated” is not purely in the eye of the beholder.
“It is the combination of the complexity and the large number of vacua …”
I don’t see what the alleged complexity has to do with the issue.
If your theory has a large number of inequivalent vacua and if the low-energy physics is different in these different vacua, then you run the risk of losing predictivity.
I don’t see why it matters whether these vacua are “simple” or “complicated.”
anon,
“If your theory has a large number of inequivalent vacua and if the low-energy physics is different in these different vacua, then you run the risk of losing predictivity.”
You’re ignoring the example I gave of a simple set of an infinite number of vacua. Again, the problem is not inherently the number of vacua, it’s the combination of large number with their complicated structure. Simple versus complicated matters because simple situations are easier to do calculations in than complicated ones.
The other reason “complicated” matters is because “complicated” is what happens when you try and evade failure. If you start out with a simple model, find it disagrees with experiment, then add new more complicated structure to evade this, as you keep doing this it becomes more and more clear that your starting point was wrong. That is exactly what has happened as people have constructed more and more complicated string theory backgrounds.
It’s kind of puzzling that you’re so hostile toward Nima’s talk. I guess you stopped watching before he explained that there are new ways to access deSitter solutions that don’t involve the usual picture of tunneling and eternal inflation?
“Again, the problem is not inherently the number of vacua, it’s the combination of large number with their complicated structure. Simple versus complicated matters because simple situations are easier to do calculations in than complicated ones.”
If you need to compute the physics of 10^500 vacua, before you can determine which one we live in, and extract predictions for low-energy physics, then it really doesn’t matter whether each individual vacuum is “simple” to do calculations in.
Conversely, if you can extract predictions without going to such extraordinary lengths, why does it matter that each individual vacuum is “complicated”?
” … failure …”
I think you are mixing up your personal distaste for anything related to “string theory” with whatever scientific point there is to be made about Nima’s lecture and the landscape.
anon.,
My hostility to the talk was just due to the claims about “the standard model has the same problem as the landscape”. Perhaps there was some actual substance to the talk; I haven’t heard any from the commenters here. A “new way to access deSitter solutions” sounds more substantive, but just isn’t the sort of thing I’m especially interested in; maybe other people are.
Anon.,
You seem to just be unwilling to acknowledge that I’ve given you an example showing that in cases of simple theories a large or infinite number of vacua is not a problem at all. The point is simple: simple theories are simpler to calculate in, for complicated theories, the calculations are, well, complicated…
“You seem to just be unwilling to acknowledge that I’ve given you an example showing that in cases of simple theories a large or infinite number of vacua is not a problem at all.”
Your example, with a continuously degenerate vacuum, is in flat contradiction with experiment (because it contains a massless scalar field with dangerous couplings to ordinary matter, exactly as in a string compactification with unfixed moduli). Therefore, it is relevant neither to the discussion of Nima’s talk in particular nor to the landscape problem in general.
“The point is simple: simple theories are simpler to calculate in, for complicated theories, the calculations are, well, complicated…”
A tautology which, again, is irrelevant to the landscape problem.
Lots of theories have parameters in them; my example is only “in contradiction with experiment” if you want to insist on the string ideology that your unknown non-perturbative string theory will have no free parameters. If you want to make my example consistent with the ideology, just have the free parameter only take discrete values. The issue is whether observable physics depends in a simply calculable way on the parameter, not whether it is continuous or discrete.
Again, the landscape problem is not that there are lots of vacua, as the above example shows, it’s that the vacua are very complicated, that it’s difficult to calculate relevant physical observables reliably in them, AND that they come in essentially infinite variety.
“If you want to make my example consistent with the ideology, just have the free parameter only take discrete values.”
A theory with free parameters (whether or not those parameters take on only discrete values) is not the same thing as a theory with many (discrete) vacua.
So, no, your example doesn’t shed any light on the landscape problem.
“Again, the landscape problem is not that there are lots of vacua, as the above example shows, it’s that the vacua are very complicated,…”
That’s not what everyone else, including Nima, means when they use the phrase “the landscape problem.”
Thanks for clarifying that that’s what you mean.
Anon,
If you don’t like my calling them “free parameters”, replace “free parameters” with “vacua” and read what I have to say again.
When I repeatedly write here that the landscape problem is due to both the number and complexity of the vacua, your deleting one half of this (about the number), putting what I have to say in quotes and saying that my definition of the landscape problem is different than other people’s is just a ridiculous tactic.
Since you seem to be on a first-name basis with Nima and invoke him in every one of your comments, perhaps you might want to contact him and ask him if all there is to the landscape problem is that the number of vacua is large.
Dear Q,
The Hoyle argument is not a “prediction” of the anthropic principle. The Hoyle argument is based on a fallacy in which an extra statement is added to a correct argument, without changing its force. The correct argument is as follows:
A The universe is observed to contain a lot of carbon.
B That carbon could not have been produced were there not a certain line in the spectrum of carbon.
Therefore that line must exist.
To this correct argument Hoyle added a statement that does no work, to get:
U Carbon is necessary for life.
A The universe contains a lot of carbon
B That carbon could not have been produced were there not a certain line in the spectrum of carbon.
Therefore that line must exist.
You see that U does no work. One way to see this is that if the prediction turned out to be wrong, one would not question U; one would question the calculations leading to B.
I have found that every single argument purported to be a successful prediction from the AP has a fallacy at its heart. See my hep-th/0407213 for details.
What has been so disheartening about the current debates re the landscape is that all this was thought through a long time ago by a few of us, and it has been clear since around 1990 what an appeal to the landscape would have to do to result in falsifiable predictions. The issue is not the landscape per se but the cosmological scenario in which it is studied. The fact that eternal inflation can’t yield anything other than a random distribution on the landscape is the heart of the impasse, for that leads to the AP being pulled in in an attempt to save the theory, and that in turn leads to a replay of old fallacies.
Thanks,
Lee
‘If you don’t like my calling them “free parameters”, replace “free parameters” with “vacua” and read what I have to say again.’
I did. Now it makes no sense at all.
“Since you seem to be on a first-name basis with Nima and invoke him in every one of your comments, perhaps you might want to contact him and ask him if all there is to the landscape problem is that the number of vacua is large.”
Nima’s argument is that fairly generic theories (like the Standard Model), when coupled to gravity, have a landscape of vacua.
You want to claim that what Nima finds is not a landscape because all those vacua are “simple,” whereas, in the landscape, the vacua are “complicated” (whatever the heck that means).
Clearly that means that you and he disagree about the definition of “landscape.” I’m just pointing out that the majority of high energy theorists who use the term are referring to his definition, not yours.
Anon,
I didn’t define what the landscape is (everyone agrees we’re talking about the space of vacua of a theory), I was telling you what the landscape PROBLEM is. You believe the problem is the large number of vacua, not their complex properties, I’m telling you that it is both.
For the N’th time: If the landscape consisted of an infinite number of vacua, labeled by a discrete parameter (say an integer), and we could calculate all physical observables easily in terms of that integer, there would be a landscape, but no landscape problem. You just would have to compare the result of that calculation to observation, and see if, for any integer, the results matched. An infinite number of vacua is not inherently a problem, the problem is whether you are able to calculate things and match to experiment.
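Just to make the caricature explicit (this is purely an illustration, not any actual theory): imagine the vacua were labeled by an integer n and every low-energy observable were a known, easily computed function of n, for instance
\[
\Lambda(n) = n\,\Lambda_0, \qquad \frac{m_e(n)}{m_\mu(n)} = f(n)
\]
for some calculable function f. Then, despite the infinite number of vacua, you could simply scan over n and check whether any value reproduces the measured numbers, and the theory would remain perfectly predictive.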
I don’t know what Arkani-Hamed’s “landscape of SM vacua” looks like and whether it’s one in which you can easily do calculations or not, and thus whether or not it suffers from the same problem as the string theory landscape does. Whether or not it does, claiming that “the SM has the same landscape problem as string theory” is sophistry.
Dear Lee,
Thank you. As you write, the observation that people exist does not go into Hoyle’s prediction, so the anthropic argument is not involved in the actual prediction.
The observations of the amount of carbon and of the beryllium bottleneck (that prevents significant carbon being formed by reactions other than the fusion of alpha particles) are no more anthropic than other astronomy or nuclear physics observations.
Hoyle’s prediction was claimed to be ‘the only genuine anthropic principle prediction’, according to John Gribbin and Martin Rees’s Cosmic Coincidences, quoted at http://www.novanotes.com/jan2003/anthro1.htm
Your link to http://arxiv.org/abs/hep-th/0407213 is extremely helpful and it surprises me that, in fact, there aren’t any genuine anthropic predictions that have been confirmed. I thought it was just weak physics but now it looks like rubbish. My motto in future will have to be nullius in verba.