Last week at the KITP, Keith Dienes gave a talk on A Statistical Study of the Heterotic Landscape. He gave a good overview of the state of the art in investigations of the Landscape, focusing on one special class of models, heterotic models. The results he presented gave statistical distributions for just two very crude aspects of these compactifications: their gauge groups and cosmological constants. These models remain highly unrealistic, since the cosmological constants are of order the Planck scale and the compactifications are not stable.
The models studied have gauge groups of rank 22, and while many of them contain the standard model SU(3)xSU(2)xU(1), they also contain many more gauge group factors, with typically not one, but about seven SU(2) factors. These models, with their instabilities, far too large gauge groups and cosmological constants, are extremely far from anything like the standard model. It’s not at all clear what the point is in enumerating them and studying their statistics, but Dienes describes in detail various problems that arise with the whole concept of generating “random” models of this kind and trying to get sensible statistical distributions. He also looks for correlations between gauge groups and cosmological constants, finding that at small cosmological constant one is somewhat more likely to get many factors in the gauge group (although in his case, both the gauge group and the cosmological constant are very different from those in the real world).
Despite the very crude state of these calculations, Dienes reports that a group of 17 prominent string theorists have banded together to form the “String Vacuum Project”, with the goal over the next few years of accumulating a database of tens of billions of string models, in the hope of finding within this mountain of data about 100 models that have crude features of the standard model. I don’t at all see what the point of this is, but it certainly is a computationally intensive project that could keep many people occupied for a long time. It also appears to be just the beginning, with the longer-term goal being to devote the next decades to expanding from tens of billions toward the 10^500 or whatever exorbitantly large number is thought to be the number of all string models.
The String Vacuum Project submitted a proposal to the NSF last year, which seems to have been turned down, and they appear to be planning to resubmit it. They have a Wiki, with all sorts of details about the project. The most recent additions to the Wiki are from Bert Schellekens in August, who discusses a proposed “String Vacuum Markup Language” (SVML) format, with links to a web-page that produces data in this format for certain sorts of models. There’s also a European String Vacuum Project web-site.
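For what it’s worth, nothing about the actual SVML format seems to be public beyond the name, but here is a purely hypothetical sketch of what a machine-readable “vacuum record” of the kind being discussed might look like. Every tag and field name below is my own invention for illustration, not anything taken from the SVP wiki:

```python
# Hypothetical sketch only: the real SVML format (if it is ever finalized)
# is not shown here, so every tag and field name below is an assumption.
import xml.etree.ElementTree as ET

def make_vacuum_record(construction, gauge_factors, cc_in_planck_units):
    """Build a toy XML record describing one string model."""
    root = ET.Element("vacuum", attrib={"construction": construction})
    gauge = ET.SubElement(root, "gauge_group")
    for factor in gauge_factors:
        ET.SubElement(gauge, "factor").text = factor
    ET.SubElement(root, "cosmological_constant").text = str(cc_in_planck_units)
    return ET.tostring(root, encoding="unicode")

# A toy heterotic-style example, with a standard-model-like piece buried
# among extra gauge factors, as in the models Dienes described:
record = make_vacuum_record(
    "heterotic",
    ["SU(3)", "SU(2)"] * 3 + ["U(1)"] * 4,
    0.8,  # of order the Planck scale, i.e. wildly unrealistic
)
print(record)
```

The point of any such format would be that a database could be queried mechanically, e.g. for all records whose gauge group contains SU(3)xSU(2)xU(1).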
This is tragic and hilarious news. Thank God I work on quantum computing. We do real experiments.
This is sad.
I see, Dieter Lüst from Munich is associated with this “String Vacuum Project”: http://strings0.rutgers.edu:8000/MemberGroup
(Even Lisa Randall is; why is that? I thought of her not as a string phenomenologist.)
In Munich there’s now a very big string group. Compared to the size of the theoretical phenomenology groups, it is a very big group:
http://www.theorie.physik.uni-muenchen.de/~luest/stringgroup.html
Lüst was one of the first to point out that string theory has many, many solutions.
Now, it seems, they are leading German students into the business of vacuum counting. At exactly the time when the LHC will be coming up with new data…
What a boondoggle. I feel like writing my congressman.
And I thought the people who play the Lotto were numerically challenged …
Peter, sometimes your job is too easy.
The SVP wiki page at http://strings0.rutgers.edu:8000/ProjectOverview says in part:
“… Our primary goal is to construct string compactifications which lead to the Standard Model …
A related goal is a broader study of standard-like models, by statistical and other approaches, …
and
… to be able to estimate the number of SM’s which should exist among all known types of constructions …”.
So, if the SVP succeeds, it will not only be the authoritative catalog of superstring models, but will also be THE authoritative catalog of ALL physics models.
Further, the SVP wiki page at http://strings0.rutgers.edu:8000/ProjectOverview/DiscussModelFormat says in part:
“… Suppose one of us, or some independent group of string theorists, establishes the existence of a string compactification leading to a concrete low energy model, and wants to submit it to the SVP database.
What should he/she do?
… the first thing to do is to write a standard research paper and submit it to the arXiv …
… Having done this,
the key thing to do is to provide, or be able to provide, the information about the model in a standard format, which we might refer to as “String Vacuum Markup Language” or SVML . …”.
Suppose somebody, say Peter, working on representation theory, comes up with a model that is consistent with the Standard Model plus gravity,
and
suppose Peter meets the first requirement of posting his model on the arXiv.
Then,
if the SVP has obtained dominance roughly equivalent to the physics-community dominance of the arXiv,
Peter MUST fit his model into the SVP strait-jacket of “String Vacuum Markup Language” if he wants his model to be considered at all by the physics establishment.
If Peter’s model were to be so unconventional (from a superstring point of view) that it would not fit happily into the “String Vacuum Markup Language” strait-jacket, then no matter how realistic and useful it might be, it would never be accepted or used.
Since the SVP proposal at http://www.physics.rutgers.edu/~mrd/SVP-v2.ps says in part
“… A group of theorists in Europe … have come to similar conclusions, and are writing a similar proposal to European agencies. We would join with them to strengthen the entire activity further. …”,
it is clear that the objective of SVP is not merely dominance of the USA theoretical high-energy physics community,
but is to achieve global domination (as is now in fact the case with the arXiv).
Therefore,
I think that the SVP proposal is quite dangerous to the future of physics,
because
it seems to me that gate-keeping by the SVP powers-that-be, using such things as requiring “String Vacuum Markup Language”, will be so restrictive as to make the blacklisting-by-moderators practice of the arXiv seem pale by comparison.
Tony Smith
http://www.valdostamusuem.org/hamsmith/
PS – My opinion in this comment is substantially objective, since the SVP proposal will probably not have much impact on acceptance/rejection of my work because blacklisting by the arXiv has already done the damage to me and my work that might be done by SVP.
This is a very nice project for the following reasons:
1) It can help to demonstrate that string theory can provide “a theory of everything” even if not “the theory of everything”. This will be a great intellectual achievement. (As far as I understand from reading Peter’s book, there is no guarantee that even a single one out of the 10^500 different string theories (or more, infinitely many??) will have the desired properties.)
2) It can go both ways. It can also lead to (mild but important) negative conclusions about string theory. E.g. if certain unrealistic or problematic features cannot be avoided in all the proposed models.
3) It does not have the sex-appeal/snob-appeal of hyper/super 23rd-century mathematics, but is rather a down-to-earth, programming-intensive project.
4) Projects of this kind are very demanding and often lead to difficult and thankless work. They are also rather risky.
So based on what Peter wrote, I would recommend that the NSF support this project very, very strongly.
Thanks to the referees and panelists who were able to protect the NSF from wasting money on the ridiculous String Vacuum Project and String Vacuum Markup Language. You don’t have to be a physicist to recognize that it is complete nonsense. Actually, I don’t believe that the theorists who are proposing this activity are acting in good faith — they must know that the only goal would be to keep the ball rolling… The logic is: let’s get postdocs, whatever it takes… but then, would they really hire a programming expert? No, they would try to hire somebody working on more serious stuff listed on the “daily specials”=”menu du jour”. They must be completely burned out and desperate, or plain delusional like the person who proposed SVML.
That was an amazing wiki. Absolutely no comprehension of the magnitude of the problem. Just trying to figure out what sorts of biases are present in the data would be insane.
If I may be excused for asking a question about the broader issue of vast size of the Landscape:
What is the standard response to the argument that any theory that does not predict a vast number of possible vacua, but instead picks out a unique set of physical laws, would leave us with the problem of explaining the apparent fine-tuning of the observed set of laws to be consistent with life? From this perspective, the Landscape would seem to be an asset rather than a liability.
To Farrold: about 10^10 vacua would be enough to explain the apparent fine-tuning you mention: with intensive computer work one could find the true one. However, if the cosmological constant is small only due to a fine-tuning (which makes it practically uncomputable), about 10^100 vacua are needed, and this already seems a hopeless situation. If you have 10^20 buckets you can empty the ocean, but with 10^500 vacua what can you do? If you find 10^400 realistic vacua, where do you put them?
Despite this, computer tools that manipulate generic Lagrangians are being developed to study LHC data: if after the LHC there is nothing better to do, why not use them to explore a bunch of vacua?
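To make the bucket analogy quantitative, here is a back-of-envelope calculation. The checking rate is invented for illustration (it is far beyond any real computer), which only strengthens the point:

```python
# Back-of-envelope arithmetic for the bucket analogy above.  The rate of
# 10^18 candidate vacua checked per second is an invented, wildly
# optimistic assumption.
checks_per_second = 10**18
seconds_per_year = int(3.156e7)          # roughly one year in seconds
age_of_universe_years = 14 * 10**9       # ~14 billion years

vacua_checked = checks_per_second * seconds_per_year * age_of_universe_years
# Even running since the Big Bang, this covers fewer than 10^36 vacua...
print(f"checked ~10^{len(str(vacua_checked)) - 1} vacua")
# ...which is a fraction of order 10^-465 of a 10^500-element landscape.
fraction_exponent = (len(str(vacua_checked)) - 1) - 500
print(f"fraction of landscape: ~10^{fraction_exponent}")
```

So brute-force enumeration of 10^500 vacua is not merely hard but arithmetically out of the question; any "statistical study" can only ever sample an utterly negligible corner of the landscape.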
Gina Wrote:
1) It can help to demonstrate that string theory can provide “a theory of everything” even if not “the theory of everything”. This will be a great intellectual achievement. (As far as I understand from reading Peter’s book, there is no guarantee that even a single one out of the 10^500 different string theories (or more, infinitely many??) will have the desired properties.)
Gina:
It isn’t clear to me that finding some string vacuum among many which describes nature would be a great intellectual achievement. There is no prediction in such an endeavour unless EXACTLY ONE vacuum of string theory describes nature at experimentally accessible energies. If two are found, no prediction can be made. Even if exactly one realistic vacuum is found, there may be no testable prediction. I suspect that people think that they can find a distribution of vacua whose predictions have a peak around observable values. This probably won’t happen, and isn’t a proper application of probability anyway.
Such a program MAY be of value. But I would go along with the NSF on this one. There is a big risk for them that nothing substantive will result. Furthermore, since string theory hasn’t been properly formulated yet, all these vacua could disappear in the future, which would really embarrass whoever funds the project. If people really believe in the project, they may have to commit their own time to it, without external funding. The NSF doesn’t have much money these days, and it has to be careful how it spends it.
I suspect that in the end, this project will be funded by sheer lobbying, and some good competing proposals will lose out.
Einstein would say that the definition of insanity is doing the same thing 10^500 times expecting a different result.
From Farrold: What is the standard response to the argument that any theory that does not predict a vast number of possible vacua, but instead picks out a unique set of physical laws, would leave us with the problem of explaining the apparent fine-tuning of the observed set of laws to be consistent with life?
I think you mean the observed set of CONSTANTS which occur in the laws, and not the overall format of physical law. It is the set of dimensionless constants that people sometimes think presents us with a problem of fine tuning.
The distinction between the numerical parameters and the laws in which they occur is fuzzy, but nevertheless useful. Explaining the format of physical law would be a deeper puzzle than just trying to say why the CC is such a small positive number or why alpha is around 1/137.
However I think you are mistaken. A theory which picks out a particular set of fundamental constants would NOT present us with a fine-tuning problem. It would explain why the constants are what they are—perhaps all derivable from the value of some deeper numerical relationship. They would not be “tuned”—they would simply have to be what they have to be.
Life, an accident, would be constrained to be whatever can work within the context of those necessary predetermined constants. Its abundance or rarity would not explain anything about the constants, because they would be explained by the theory which you have imagined.
A theory already exists which makes testable predictions and which explains observed values of fundamental dimensionless constants without making reference to life.
See for example pages 167-168 of Smolin’s new book The Trouble with Physics
http://www.amazon.com/gp/bestsellers/books/14560/ref=pd_ts_b_nav/102-4540543-7840144
Babe in the Universe — calling 17 prominent string theorists insane is not justified (except for the SVML author). It would be a compliment — after all, Galileo, Copernicus et al. were also considered insane. The problem with SVP is that it is proposed by a group of hypocrites and opportunists who want to suck up money that would normally go to serious hep-th research and spend it on 1) self-promotion of their old (wrong, 100% excluded) models by packaging them in a useless catalogue 2) taking control of the postdoc market.
This is insane; it’s not science at all.
Dear string theorists,
Do you seriously want NSF grant and research money being spent in this way?
Curious
dan
http://en.wikipedia.org/wiki/The_Nine_Billion_Names_of_God
I don’t know if it’s valid physics but it seems like a pretty interesting computer science project.
ABITU – that comment was priceless :))) -r
Who remarked:
I take your point. There would, however, still be a degree of mystery similar in kind (but not in magnitude!) to the mystery in Sagan’s novel, Contact, that is, the encoding of a large, pixel-based image of a circle deep (yet far too early) in the digits of pi.
I assume that you mean it explains how one could derive observable values, rather than that it explains (by deriving) the observed values.
In his 1997 book, The Life of the Cosmos, Smolin suggested that universes spawn new universes with different physical constants, and he proposed this as an answer to the fine-tuning problem. Does he now argue that the form of the laws, if understood deeply, would imply the constants?
For example, one current idea (as I understand it) describes particles as braids in structures in loop quantum gravity, which in turn emerges from structures in a spin-foam model. If the spin foam model were as parameter-free as theorists would like, then the theory would derive all the physical constants from its form. I’d be happier with a theory that had several arbitrary (i.e., tunable) parameters that enabled (in principle) precise calculation of the many arbitrary parameters in the Standard Model.
The fine-tuned universe: http://www.xkcd.com/c10.html
(continuing)
One would of course want a theory of this sort to explain how these N parameters appear through a spontaneous symmetry-breaking process which results in a multi-dimensional continuum (or fine-grained distribution) of possible sets of parameter values.
In the multi-dimensional continuum case, the theory would be consistent with an actual infinity of possible vacua. Nonetheless, taking measurement of the Standard Model parameters as the experiment, the theory would be falsifiable: It would predict that the parameter values are constrained to an N-dimensional surface (N << 29) embedded in the 29-dimensional space of unconstrained parameter values. This surface either would or wouldn’t intersect the error-box of our measurements.
I agree with many of Peter’s criticisms (and I usually root for the loop/foam team), but I think that a similar case can be made for a possible outcome of Landscape studies. One scenario would be (a) that the Landscape includes a set of vacua with 3+1 macro-scale dimensions and with physics that can be approximated by parameterizations of the Standard Model, and (b) that in every member of this set, the 29 parameter values fall on an N-dimensional surface (N << 29) embedded in the 29-dimensional space…and so on, as above.
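To make this concrete, here is a toy numerical version of the test I have in mind. Nothing string-theoretic about it, and all the numbers are invented: a hypothetical theory predicts that three parameters lie on a 1-dimensional line inside a 3-dimensional parameter space, and a measurement with an error bar either is or is not consistent with that line:

```python
# Toy illustration (all numbers invented) of falsifying a theory that
# constrains parameters to a lower-dimensional surface: here a 1-d line
# embedded in a 3-d parameter space, playing the role of the N-dimensional
# surface in the 29-dimensional space discussed above.
import math

def distance_to_line(point, origin, direction):
    """Distance from `point` to the line {origin + t * direction}."""
    norm = math.sqrt(sum(d * d for d in direction))
    d = [x / norm for x in direction]                 # unit direction vector
    diff = [p - o for p, o in zip(point, origin)]
    t = sum(x * y for x, y in zip(diff, d))           # projection onto the line
    closest = [o + t * x for o, x in zip(origin, d)]
    return math.sqrt(sum((p - c) ** 2 for p, c in zip(point, closest)))

line_origin, line_dir = [0.0, 0.0, 0.0], [1.0, 2.0, 2.0]
measurement_ok  = [1.05, 2.0, 1.95]   # close to the predicted line
measurement_bad = [1.0, -2.0, 3.0]    # far from it
error_bar = 0.1

print(distance_to_line(measurement_ok, line_origin, line_dir) < error_bar)   # prints True
print(distance_to_line(measurement_bad, line_origin, line_dir) < error_bar)  # prints False
```

The second measurement falsifies the toy theory: no point of the predicted surface lies within the error box. That is the sense in which a Landscape constrained to a surface with N << 29 would remain testable.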
Dear Peter O.
Many thanks for your comment.
I think that demonstrating that string theory can “in principle” give a theory (or 2, or 1,000,000) which is consistent with the standard model and can be regarded as a “theory of everything” will be a big step forward, in spite of having no predictive power.
(And if a unique such theory could offer some predictions, it is quite possible that some predictions could be offered even if more than one were found.)
A nice feature of this proposal is that, unlike most proposals in the area, the outcomes can go both ways as far as string theory is concerned. In some (weak) sense this project is an empirical test of the ideas of string theory.
“This probably won’t happen, and isn’t a proper application of probability anyway.”
This is a very nice sentence. (We do get a lot of mileage out of using probability in ways which are not entirely proper… probably.)
“But I would go along with the NSF on this one. There is a big risk for them that nothing substantive will result. Furthermore, since string theory hasn’t been properly formulated yet, all these vacua could disappear in the future, which would really embarass whoever funds the project.”
I disagree with you on this point. I think the NSF (and also individual scientists) should take risks.
Are you sure it isn’t called the
VACUOUS STRING PROJECT?
Dear crackpot Peter, you are a damn asshole. I will sue you for the lies those crackpot commenters telling on me on your crackpot blog.
I hope you will die soon. The sooner the better.
So: be prepared to hear from my lawyer.
Best Lubos
This comment above by Lubos is beyond the pale. Here he wishes physical harm on Peter for hosting a mostly-open forum where sometimes some nasty comments are posted about Lubos (invariably in response to Lubos’ own writings), while on the other hand his own writing has a take-no-prisoners style — his own blog posts are riddled with name calling and put-downs, and he makes no attempt to moderate nasty comments about Peter by others. From my perspective Lubos has lost a certain connection with objective reality, and it appears to me that it is getting worse with time. He seems to be unaware of the great disconnect between his complaints and his own actions, for example complaining (in the past) about censorship on Peter’s blog while practicing censorship to a much greater degree himself (witness the on-going thread Peter started for those comments); he frequently rails against political correctness, yet people who do not behave in a politically correct way on his blog (i.e., according to his politics) typically meet with either censorship or intimidation tactics. There are other examples that I see as hypocritical behavior on his part, but no need to enumerate them.
I am no lawyer, but it seems obvious that Lubos has absolutely no chance of winning any kind of legal action against Peter. A quick read of his blog speaks volumes about how his verbally abusive tactics invite unsavory comments about him. Importantly, he has willingly made himself a public person, and his blogging style promotes the kinds of comments he deplores here. His reaction above to those kinds of comments looks to me like yet another example of loss of objectivity. Presumably his lawyer will inform him that different standards for slander, etc. apply to public figures than to others.
I hope Harvard University becomes aware of his comment. They need to know how a person who prominently puts their banner at the top of his blog is reflecting on them.
I should clarify one statement I made above, especially given Lubos’ current litigious mood:
… he frequently rails against political correctness, yet people who do not behave in a politically correct way on his blog (i.e., according to his politics) typically meet with either censorship or intimidation tactics.
By “intimidation tactics” I am referring to ridicule, name calling, and other forms of put-downs that often await the hapless person who openly disagrees with him. I consider these to be intimidation tactics because their end result is often that the target of the verbal abuse feels intimidated and afraid to disagree in the future. In its worst form, the person may be afraid to take a public stand on a related issue in the future, even far away from Lubos’ verbal reach, just because they are afraid of a repeat of the bad experience. This kind of intimidation-based suppression of free speech (in the sense above) is not healthy for science, nor for a democratic society in general.
In his blog Lubos proposed to scan all 10^500 string models to show that string theory is predictive. In the comment section of his blog I answered: let’s ask 10^500 monkeys to type 10^500 random field theories; both approaches will likely lead to something like 10^300 models (string models or monkey models) compatible with all data we have.
My comment was deleted.
I think Lubos understood the scientific value of the Monkey-theory Vacuum Project: in case DOE or Harvard are going to develop a cluster of monkeys my lawyer will start a legal action for plagiarism.
I don’t know if it’s valid physics but it seems like a pretty interesting life-science project.
String theorists promised answers to current problems and mysteries in fundamental physics. After many decades, they have offered not a single consistent and good solution. So now the hype is that, whereas string theory is not physics, it has been good for mathematics.
If this project starts (I do not expect it will), will we hear in the next 40 years that the String Vacuum Project failed, but was useful for developing new markup and computational capabilities?
Juan R.
Center for CANONICAL |SCIENCE)
The post mentioned above is here. It is an interesting and forthright exposition of how at least some string theorists view the question of falsifiability as applied to string theory and its offshoots. Consider the following quote:
Ah, yes. Now, if we could just discover those more sophisticated and faster methods…
I keep getting “Cannot find server” error when I try to link to the SVP Wiki-page. Has it been removed or firewalled?
If you believe in the anthropic Landscape, then the next logical step is to statistically analyze 10^500 vacua. Indeed, analyzing 10^1000 would be better and would produce more scientifically convincing results. I am convinced it will spit out the nature of dark matter, calculated from the first principles of ST. However, it is somewhat risky to apply to the NSF for funding for this vital project – it may cause the US Government debt to hit $15T. I suggest instead applying directly to CERN to divert all of its massive computing facilities to the String Vacuum Project. After all, ST is far more important than the LHC. The LHC can wait. I nominate Prof. Motl as project lead, to get the necessary funding from CERN and scientific approval from the NSF and his superiors at Harvard. As the certified Clown of String, he is uniquely qualified. This is his chance for a Nobel! You know, after 25 years I still find ST so exciting…
Hi Arun!
Thanks so much for the link to the short story! I read that story an eternity ago, forgot author and name, and have been trying to find it ever since.
Every once in a while, these comments are actually worth reading 😉
Best,
B.
About the SVP: even though I think that the appearance of the string theory landscape is a serious indication that either a) we’ve misunderstood something about string theory or b) it’s not the TOE, I find the project itself worth an investigation. I mean, even if it’s not clear why THIS point in the landscape, wouldn’t it be good to know it, so we can work with it? We also don’t know why spacetime has Lorentzian signature, but still we can work with it. (Or if you know, tell me.)
What scares me, however, are the dimensions of the planned project, the potentially low level of possible knowledge gain, and what it means when I recall that we are living with finite resources, money, and people. Questions to ask: Are there other, better proposals that would suffer from support of the above? Is the project likely to draw people away from other fields? Is it? I mean, look, those who support the SVP have a research interest, so they write a proposal; what’s wrong with that? Did they make any scientifically wrong claims about what knowledge gain it would yield? And, no, I wouldn’t finance the project — of course I would support my own proposals…
B.
PS: Regarding the above nasty comment allegedly written by Lubos, I seriously doubt he wrote it. I have seen very similar-sounding comments appearing in his comment section, but signed “Peter Woit”. I found myself thinking, no way would Peter write that. However, these comments disappeared within some minutes, so I guess Lubos just deletes them.
Bee, maybe you should ask Lubos. I suggested that he state that he did not write that, but my comment was deleted with no answer.
Hi M, to be frank, I don’t really care. I don’t want to spend my time trying to psychoanalyze Peter’s and Lubos’ virtual realities. I found myself thinking recently it would be really interesting to lock the both of them together in a room for 48 hours, with a live webcam. –B.
And now that things are changing for the worse,
See, its a crazy world we’re living in
And I just can’t see that half of us immersed in sin
Is all we have to give these –
Futures made of virtual insanity now
Always seem to, be govern’d by this love we have
For useless, twisting, our new technology
Oh, now there is no sound – for we all live underground
~ Jamiroquai
hi Bee, according to my understanding both of the persons you mention are nice and polite in reality, but little motivated to converse with each other, so the locked-room experiment would not be what you call “very interesting” but would, on the contrary, turn out to be quiet and uneventful.
It is true that Lubos has a “wild man” personality on the web. But I suppose that this is merely his web persona. Maybe you know him in the real world and can contradict me.
Bee, a new version of Schrodinger’s Cat?
Who, that’s what I had in mind 🙂 You think I would want to put them in a room together on the danger that only one of them comes out again? Hey, I am a nice girl! No, I think there’s too much energy lost to friction here that could be better used. Since neither Peter nor Lubos is stupid, they have probably noticed that as well.