The public perception of string theory has definitely changed over the last few years, the latest evidence being this week’s cover story in New Scientist, which begins:
It’s the theory everyone loves to hate.
The article (available fully only to subscribers, I fear) is entitled String Theory: The Fightback, and its story line is that, because of all this criticism, after nearly 25 years of work, finally:
string theorists themselves have realised they must find ways to put their models to the test. They may still be far from being able to observe a string in a laboratory, but experiments planned for the near future – and even one currently under way – could provide tantalising evidence either for or against string theory…
Now the string community is fighting back by devising creative, if indirect, ways to look for signs of strings – from hidden dimensions to ripples in space-time and other potential signatures of a stringy universe. The time has come to put string theory to the test…
Critics should take heed. Experiments now show that string theory may be testable after all. One study at a time, string theorists seem to be homing in on models that will make specific, falsifiable predictions.
What follows is the usual misleading hype of bogus “tests” of string theory, of the sort I’ve written about extensively here. They are, in turn: cosmic superstrings, gravitational waves from inflation in the CMB, and string theory calculations in heavy-ion physics.
For one of my postings about this, see here. More than three years ago Polchinski and the KITP at Santa Barbara issued a press release about this, trumpeting the idea that such superstrings could be observed by LIGO “over the next year or two.” The problem with this kind of “test” of string theory is that you can easily come up with string theory models that produce lots of cosmic superstrings (already falsified), no observable amount of cosmic superstrings (can’t use to test string theory), or precisely the amount of cosmic superstrings such that no hint of them would have been seen so far, but there would be networks of the things lurking just below the threshold of measurability, waiting to be discovered by the next generation of experiments (highly unlikely, pretty much pure wishful thinking). Polchinski has now stopped talking about LIGO and a year or two, and instead is promoting measurements of pulsars and five to ten years:
According to Polchinski, though, our best bet for observing gravitational waves emanating from strings is to use pulsars. A pulsar is a rapidly spinning neutron star that fires out a beam of electromagnetic radiation as it rotates, like a lighthouse. These flashing beacons act as some of the most accurate clocks in the universe, and a gravitational wave rippling between a pulsar and Earth would disturb the otherwise precise timing of the pulses arriving here. The most likely cause of such fluctuations would be black holes colliding, but waves from strings would yield a unique timing pattern that would make them stand out. “Over the next five to 10 years,” Polchinski says, “these [pulsar observations] will probe the most interesting models.”
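The pulsar-timing idea in the quoted passage can be caricatured in a few lines of code. This is purely my own toy sketch, not an actual pulsar-timing-array analysis; the amplitude `A` and frequency `f_gw` below are illustrative placeholders, not values from the article:

```python
import math

P = 0.005          # pulsar spin period: 5 ms (illustrative)
A = 1e-7           # assumed GW-induced timing perturbation amplitude, ~100 ns
f_gw = 1e-8        # assumed GW frequency ~ nanohertz, the band pulsar timing probes
N = 1000           # one timing measurement per day for ~3 years
dt_obs = 86400.0   # seconds between observations

# The pulsar itself is an almost perfect clock; a gravitational wave passing
# between it and Earth adds a tiny, slowly oscillating delay to the pulse
# arrival times. After subtracting the deterministic spin-down model, that
# delay is what survives as the "timing residual":
times = [n * dt_obs for n in range(N)]
residuals = [A * math.sin(2 * math.pi * f_gw * t) for t in times]

# The observable signature is a correlated, nanohertz-frequency wiggle in the
# residuals; its RMS is of order A for a signal spanning a good fraction
# of a wave cycle.
rms = math.sqrt(sum(r * r for r in residuals) / N)
print(f"RMS timing residual: {rms:.2e} s")
```

A real search correlates residuals across many pulsars (black-hole binaries and cosmic strings would each leave distinct patterns), but the toy above captures why millisecond pulsars work as detectors: the expected delays are tiny compared to the pulsar’s own clock stability.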
My colleague Brian Greene is quoted about this, making the essential point about this kind of “test”:
Sure, catching sight of a cosmic string would be a boon for string theory, but is there any observation that would serve as a death knell? For many sceptics, it’s not that string theory is so hard to prove correct that puts them off, but rather that you can’t falsify it. “I’m not aware of any test that if it fails will prove string theory wrong,” says physicist Brian Greene of Columbia University in New York. “That’s a real headache. You’d like to have a situation where you have a prediction, and if it’s right the theory is right, and if it’s wrong the theory is wrong.”
The problem is that the quoted paper doesn’t actually say that. Here’s the paper’s concluding paragraph:
However, a possible discovery of tensor modes may force us to reconsider several basic assumptions of string cosmology and particle phenomenology. In particular, it may imply that the gravitino must be superheavy. Thus, investigation of gravitational waves produced during inflation may serve as a unique source of information about string theory and about the fundamental physics in general.
Given the wealth of possible string theory scenarios, I have no doubt that if an imprint of gravity waves is found in the CMB, there will be string theory models that would “predict” it. As for the idea that not seeing such gravity waves would be evidence for string theory, here’s what Glashow has to say:
Not everyone thinks these tests will be useful, however. “Not seeing something is hardly evidence for string theory,” says Nobel laureate Sheldon Glashow of Boston University, Massachusetts, an outspoken critic of string theory. He feels that such a result would mean very little. “String theorists are very wise. They can come up with a way to explain anything.” String theory is simply not testable, he says. “There are an enormous number of string theories and they describe zillions and zillions of universes, none of them observable in any way. It sounds to me like angels dancing on the head of a pin.”
“We’re still very far from being able to say, here is the exact string theory that describes QCD,” Susskind says. “But the connections that show up between nuclear physics and string theory are fascinating. Sceptics won’t consider this evidence for string theory, but nuclear physicists will use string theory and in time discover how accurately it describes these experiments.”
What Susskind is neglecting to mention here is that these are tests of whether string theory is a useful way to do calculations in an already tested theory of the strong interactions, and has nothing to do with the question of testing the idea of string theory as a unified theory of quantum gravity and particle physics.
Also in New Scientist, there’s a story about Neil Turok’s recent talk at PASCOS entitled “Is the Cold Spot in the CMB a Texture?” which links to the string theory article. Evidently there’s a patch of CMB where the temperature is anomalously low (the probability of this happening supposedly being 2 percent) and one can speculate that this may be due to a topological defect of some kind. Amusingly, the online version of the New Scientist article includes an interpolated editorial comment that someone forgot to take out before publication:
Turok presented the findings at a conference on particles Hi Anil. This threw me a bit. We say the team noticed previous work by Turok, and then before we know it Turok is the one presenting the work. When did he join them, strings and cosmology at Imperial College London last week.
The article is available only to subscribers. I was wondering– will it be possible to more easily tell whether or not this cold spot is coincidence after any particular future CMB observation experiment?
“…New Scientist… interpolated editorial comment that someone forgot to take out before publication:
Turok presented the findings at a conference on particles ‘Hi Anil. This threw me a bit. We say the team noticed previous work by Turok, and then before we know it Turok is the one presenting the work. When did he join them…”
Or, this tunneled through from elsewhere in the multiverse. Bound to happen now and then. Duality with Boltzmann brains. Or something else within epsilon of unfalsifiable, in the Landscape.
“What’s the frequency, Kenneth?”
M. Disney (an observational galactic astronomer) was an “alarmist/whistle-blower” to cosmology back in 2000. It seems to parallel P. Woit/“Not Even Wrong”, in that large-scale (cosmological) questions are cursed with not having the proper data. Without data, you have only theory… hence “religion”.
The Case Against Cosmology
http://arxiv.org/pdf/astro-ph/0009020
[ interesting references to particle physics & the LHC, which can actively do “experiments”. Astronomers can only passively observe ]
Is not the below a striking parallel to the ST quagmire?
The First Crisis in Cosmology Conference
http://www.ptep-online.com/index_files/2005/PP-03-03.PDF
New Scientist uses the same article ‘template’ for lots of pseudoscience where old discredited ideas are being worked on, provided that the ideas seem exciting. It did an article some years back which was pretty much the same but about ‘cold fusion’.
(The synopsis for that PR stunt was something like: ‘Everyone loves to hate cold fusion because it was overhyped a bit at first, but now people are starting to get tantalizing glimpses of real physics from it, blah, blah…’, followed by some experimental work which gave results of no statistical significance, and a lot of wishful thinking from experts. I think that issue sold out fast. New Scientist should be commended for knowing how to write copy that sells. Pity it’s totally misleading, but that’s sci fi for you: it’s popular, but it’s also scientific fiction until some solid evidence shows up.)
I’m just waiting for New Scientist to write a defense of creationism:
‘creationists themselves have realised they must find ways to put their models to the test. They may still be far from being able to observe the act of creation in a laboratory, but experiments planned for the near future – and even one currently under way – could provide tantalising evidence either for or against creation…’
😉
“String theorists are very wise. They can come up with a way to explain anything.” – Sheldon Glashow
I’m surprised to see such a sarcastic comment from such a Nobel laureate as Sheldon Glashow being quoted. String theorists are (correct me if I’m wrong) just human beings, so presumably they have feelings and don’t like sarcasm (unless, of course, the mental ability of string theorists to believe that the world has 6/7 extra dimensions without evidence, is linked to a genetic mental condition which makes them invulnerable to ridicule). Those string theorists who believe that they reside in the universe where the anthropic principle has selected the existence of the Not Even Wrong blog to occur on the internet, might feel upset and hurt by such sarcasm. But many of the other string theorists, believing that they spread over the segment of the landscape of the multiverse where physics is fairly close to our own but not quite close enough to include this blog, might remain deluded, if they really exist.
Joe,
I’d rather people discuss mainstream cosmology on blogs run by cosmologists, but I’ll just point out that, whatever its problems, mainstream cosmology is an extremely solid and respectable science compared to “string cosmology”.
Joe, the “Case against Cosmology” is just a few months older than “String theory: An Evaluation” (physics/0102051). In these years cosmology made enormous progress, while the main development in strings has been the rediscovery of the anthropic principle.
Sadly, the criticism you raise above holds for a lot of other theoretical approaches as well. I understand you are mostly concerned about string theory, but the problem is much more general than that. If I look at the arxiv, it seems it has become quite normal to cook up more or less meaningful ‘theories’ or ‘models’ that have a lot of ‘motivation’ but no connection to reality whatsoever (despite an impressive amount of calculations and references). Now the fashion changes to “needs phenomenology”. Being a phenomenologist, I certainly don’t mind, but in many cases there either is no phenomenology (a ‘qualitative’ prediction is no prediction; one can always shift parameters out of experimental range, so what’s the point), or there is phenomenology but no sensible model (i.e. one that is in disagreement with the standard model, or worse, one can’t be sure whether it is in agreement or disagreement with anything), or there is phenomenology and a model but one can’t learn anything from it (e.g. there must be one hundred different ‘models’ that ‘motivate’ why a breaking of Lorentz symmetry is ‘natural to expect’ – so if we measure it, what do we learn from that besides that it is broken?).
Bee,
I agree that this is a more general problem, but the string theory situation still seems to be a unique one. There’s lots of dubious theoretical work out there (one reason being that coming up with a good new idea in this field is extremely hard), but most of it is pursued by smaller groups of people with lower profiles in the community. What is going on here is outrageous overhyping and misleading of the public by a rather large group of people at the absolute top of the academic hierarchy. I don’t know of anything quite analogous in other subjects.
It’s interesting to note that the people New Scientist found willing to promote bogus “tests” of string theory are mostly the same ones promoting the anthropic landscape.
Hi Peter,
but the string theory situation still seems to be a unique one
I agree. I am just saying one doesn’t cure the problem by demonising one field – there is the danger such unfortunate distractions from questions relevant to physics just shift somewhere else.
Best,
B.
I just came across a quote by the Chilean writer and filmmaker Alejandro Jodorowsky (El Topo, Sacred Mountain, the ‘El Incal’ comics (with Moebius)) that beautifully sums up the situation with string theory, the landscape and what it is doing to fundamental physics:
” Failed Theory:
A Philosopher who couldn’t walk because he stepped on his beard, cut his feet”
It is interesting to notice the string-like nature of long beards…
Coin asked:
The article is available only to subscribers. I was wondering– will it be possible to more easily tell whether or not this cold spot is coincidence after any particular future CMB observation experiment?
This is already the second confirming indicator of the reality of the anomalies, and it’s not just a coincidence when the motion of the Earth around the Sun (the ecliptic) also traces out the goldilocks zone of the observed universe.
So, give ’em another 20 or 30 years to explain it away without giving equal time to the most apparent indication of the evidence, since that’s the way science is done nowadays!
http://arxiv.org/PS_cache/arxiv/pdf/0704/0704.0908v1.pdf
Extragalactic Radio Sources and the WMAP Cold spot
Adds what?… a greater need for more time and excuses not to introduce the smoking gun into evidence?
Coin,
The rest of the article says that Turok presented calculations about what the next-generation Planck experiment should see in the CMB if these topological defects really do exist.
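One back-of-the-envelope caveat about the quoted “2 percent” probability: a fluke probability per sky patch has to be weighed against how many patches one effectively searches (the look-elsewhere effect). The toy calculation below is entirely my own illustration; the patch counts are made-up, chosen only to show how quickly a 2% fluke becomes likely:

```python
import random

p_single = 0.02  # assumed per-patch probability of a cold-spot-level fluctuation

def prob_at_least_one(n_patches):
    """Analytic probability of at least one 2%-level fluke among n patches."""
    return 1.0 - (1.0 - p_single) ** n_patches

def monte_carlo(n_patches, trials=20000, seed=1):
    """Monte Carlo check of the same quantity."""
    rng = random.Random(seed)
    hits = sum(
        any(rng.random() < p_single for _ in range(n_patches))
        for _ in range(trials)
    )
    return hits / trials

# With even a few dozen effectively independent patches, at least one
# "2 percent" cold spot becomes more likely than not.
for n in (1, 10, 50):
    print(n, round(prob_at_least_one(n), 3), round(monte_carlo(n), 3))
```

This is why a raw 2% figure, by itself, can’t settle whether the cold spot is a coincidence; better data (e.g. the Planck measurements Turok’s calculations target) and a specific predicted signature are what would discriminate.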
All,
Please take general attacks on and discussion of the conventional cosmological model elsewhere. I’m not an expert, not very interested, and not willing to moderate or host a discussion of this topic.
Hi,
Perhaps my question is a bit off topic, but I am wondering what “quantum gravity” means in the context of string theory. What is being quantized? How does one get the semiclassical limit? What is the conceptual relation between quantized microscopic degrees of freedom and the semiclassical description in terms of a gravitational field? Does string theory incorporate the equivalence principle? Is there any direct way to get a prediction concerning, say, the orbits of planets in the Solar System, including string corrections?
All this must be textbook stuff, but I could not find any clear discussion.
Thanks
J
J.
Unfortunately this is pretty much off-topic, and the questions you ask have long answers. You might do better to try them on the blog of a string theorist like Jacques Distler or Clifford Johnson. These questions are addressed in the standard string theory textbooks, although the answers are confusing, largely because while it is clear what string theory is supposed to be for strings propagating on a fixed background, string theorists want to use this to deal with varying backgrounds.
Pingback: Bogus tests « Bob Dudesky
New Scientist’s article about string “theory” proved particularly disappointing. The weasel words came thick and fast — “may allow for a test of string theory” and “could lead to verification of the theory.” Same kind of deceptive wording you find in a prospectus for a dodgy hedge fund.
Why is it so difficult for reporters to simply write “string theory has not yet made a single falsifiable prediction”?
Mclaren, reporters must write what sells the paper or their editors won’t publish it.
Journalism, even science journalism, is a little bit different from the scientific ideal.
There’s also a sense of groupthink involved. New Scientist is not the only journal which reports fashion rather than real news.
At the end of the day, journals are there to put food on the tables of the journalists and editors who produce them. If they reported news that doesn’t excite the crowds into buying it, they would lose readers, prestige, advertisers, distribution, prizes and awards for popular science journalism, massive salaries, etc. (Plus, they would risk being strung up by all those who believe string is sacred writ.)
(A part of) My paper “GR-friendly description of quantum systems”
(IJTP, DOI 10.1007/s10773-007-9474-3, http://www.springerlink.com/content/w3175m02836610t4/?p=e2f0a2261a4248e69cdb5eb78668af77&pi=2 )
may be of interest to members of this community.
It is also available at http://members.cox.net/vtrifonov/ (PDF).
Best regards,
Vladimir
With regard to string theory, Dr. Woit remarked:
What is going on here is outrageous overhyping and misleading of the public by a rather large group of people at the absolute top of the academic hierarchy. I don’t know of anything quite analogous in other subjects.
Eh?
You don’t?
I certainly do. Indeed, the examples have grown so flagrant it’s hard to overlook ’em. “Hard” AI has been guilty of “outrageous overhyping and misleading of the public by a rather large group of people at the absolute top of the academic hierarchy” for at least 50 years. In 1967 the tenured Princeton computer science “genius” David Gelernter confidently predicted “Within 10 years, the best mathematician in the world will be a computer.” In 1974, Marvin Minsky, head of the MIT AI Lab, predicted “human-level” intelligence from computer programs by the early 21st century, and superhumanly intelligent AIs shortly thereafter. In 1984, Douglas Lenat (leader of the CYC project) predicted that by 1994 CYC would be going online and reading online text independently in order to increase its knowledge base.
None of these wildly overhyped predictions have come close to reality. Meanwhile, all the most respected AI researchers now admit AI has hit a “dead end” and call AI “brain dead” and “a degenerating research program.”
For an excellent overview of the pervasive failure of “hard” AI, see:
http://www.skeptic.com/the_magazine/featured_articles/v12n02_AI_gone_awry.html
In his 1959 article “There’s Plenty Of Room at the Bottom,” Richard Feynman inaugurated the idea of nanotechnology, a subject expanded to science-fictional lengths in K. Eric Drexler’s 1987 magnum opus Engines of Creation. Despite 20 years of incessant hype, Drexler has produced not one single scientific advance which would bring his fantasies of molecule-sized computers and “assemblers” closer to reality. In fact, Drexler hasn’t even done any basic research — instead, he’s spent all his time doing PR.
Undeterred by these unpleasant realities, computer science professor Vernor Vinge has confidently predicted “The Singularity,” an alleged explosion of technology in which “hard” AI combined with Drexlerian nanotechnology and genetically engineered supermen would allow science-fictional scenarios like mind uploading into computers, immortality, people with IQs of 20,000, and nanomachines capable of tearing apart any object and rebuilding it atom by atom into any desired form.
Despite all this wild hype (including truly crazed predictions by spinmeisters like Ray Kurzweil and Hans Moravec that “hard” AI is not only possible, but imminent, and that most of the people in their audiences will live long enough to upload their minds into superhumanly intelligent computers), signs of actual progress in genetically engineering supermen, or producing human-level intelligent AI programs, or building Drexlerian nanomachines, have proven impossible to discern.
This does not seem to have daunted any of the spinmeisters, who continue to trumpet the alleged imminence of The Singularity “any day now” even while researchers in computer science and materials science and molecular biology have run into roadblock after roadblock and brick wall after brick wall. Indeed, the more research molecular biologists do into the genetic code, the more their results challenge previously accepted models of how DNA works:
http://www.bioresearchonline.com/content/news/article.asp?docid=e87b8231-8a15-43bc-8ba4-bb7b65b028ee
Rather than being on the brink of genetically engineering mythical supermen with IQs of 20,000, it seems more likely that researchers are now finding out that almost everything we thought we knew about the genetic code is wrong, and we must go back to square one.
Jaron Lanier has written about the outrageous overhyping and misleading being done by the bizarre quasi-religious “rapture of the nerds”-style cult that has developed around these fantasies of “hard” AI that’s supposedly “right around the corner” and purportedly imminent genetic engineering to enhance human traits like intelligence and allegedly soon-to-be-built nanomachines able to revive the dead and literally turn water into wine:
http://www.edge.org/3rd_culture/lanier/lanier_index.html
Lanier calls this crackpot cult “cybernetic totalism,” and he points out that it has taken over surprisingly large parts of previously respectable computer science and materials science and molecular biology.
I’m seeing a direct parallel here between the increasingly wild hype and ever more bizarrely hubristic predictions (“I expect the Riemann Conjecture will be proved within several years as a baby example of string theory”) of string theory and the increasingly wild hype (mind uploading! immortality! superhumanly smart AIs! genetically engineered uebermenschen!) and ever more bizarrely hubristic predictions (people in this audience will live long enough to live forever! — Ray Kurzweil) touted by the crackpot cult of the Singularitarians.
For that matter, I see a direct parallel twixt the radical disconnect from reality we observe in American foreign policy (“We have to fight them over there so we don’t have to fight them over here”), and these bizarre cults of string theory and “hard” AI and The Singularity in the sciences which have similarly cut themselves off from reality and now float into a fantasy realm of multiverses and branes, superhumanly smart AIs and nanoengineered diamond-based power plants the size of a matchbox allegedly able to power a city.
It’s as though — in a wide variety of different areas of American life — otherwise sensible rational professionals (computer science professors (Vernor Vinge’s and Hans Moravec’s delusion of The Singularity), theoretical HEP physicists (string theory), economists (The Laffer Curve), foreign policy analysts (Project for the New American Century), materials scientists (Drexlerian nanotechnology), molecular biologists (genetic engineering to enhance traits like intelligence which have never even been successfully defined or definitively measured)) have drifted off into la-la land and now live in a fantasy world of their own imagining, immune to inconvenient facts and inappropriate logic.
Far from being an isolated example, string theory seems to me paradigmatic of a much wider dysfunction and pathology in American intellectual life.
“Meanwhile, all the most respected AI researchers now admit AI has hit a ‘dead end’ and call AI ‘brain dead’ and ‘a degenerating research program.’” This is a difficult case because the scope of the field referred to as AI tends to be understood differently by different people: does, e.g., computer vision research fall within its purview? I think the hardcore AI people claim it as a branch of AI – partly so they can point to some of the deployed examples – whilst most people working in vision don’t consider it AI. So there are lots of vigorous and, I think, non-degenerate research programs in areas that some might claim as AI. In addition, lots of computer science research does get evaluated on (semi-)objective problems and is generally improving. Most is still a long way from being reliable and safe enough for real-world deployment, but there is meaningful, measured progress.
The key difference from Peter’s view of string theory is that AI people don’t claim that anything but hard-AI approaches to these problems is bound to fail, and there are lots of people working on problems hard AI “thinks it owns” using different techniques, e.g., simple formula learning in data-mining problems. (I’ve emphasised what I think is Peter’s view just because I don’t have the knowledge to know if he’s right.)
Dave Tweed makes some excellent points. However, the boundary lines twixt cosmology and HEP are fairly blurry too, so if we use his criteria a good argument could be made that “lots of people” in cosmology are actually working on string-related physics.
The problem with playing these kinds of word games is that they obscure everyday commonsense realities. Namely, sensible people know there’s a whopping big difference between “hard” AI and machine vision, just as sensible people know there’s a whopping big difference between string theory and cosmology.
For one thing, cosmologists talk about things that can be measured. And cosmologists construct models that can be falsified.
Moreover, few string “theorists” have claimed to my knowledge that it’s the only approach that can work – instead, they’re claiming it’s by far the most promising approach. “The only game in town” doesn’t mean the only possible approach, just by far the best one in the judgment of the string supporters.
LQG research continues, just as artificial vision research persists in computer science independent of GOFAI. So Dave Tweed’s remarks on that score constitute a straw man.
What Dave Tweed’s verbal calisthenics tend to obscure, I think, is the hard cold fact that for both “hard” AI and string theory, it’s clear to any reasonable person that these approaches have broken down. They’re just not working. There are no practical results in the real world, and no sign of any path that would lead to practical results in the real world.
This seems true of other fields as well. Merton & Scholes’ 1997 Nobel Prize followed by the bankruptcy of Long Term Capital Management, a hedge fund that used their economic theories to allegedly eliminate risk in investing but went broke instead, offers a prime example. By any definition Merton and Scholes qualify as the “absolute top of the academic hierarchy” in economics by reason of their Nobel Prize. The disastrous collapse of LTCM under the guidance of their economic theories surely fits the definition of “outrageous overhyping and misleading of the public.”
Merton and Scholes represent only the tip of the iceberg. We’ve been seeing an awful lot of “outrageous overhyping and misleading of the public by a rather large group of people at the absolute top of the academic hierarchy” over the last 15 to 20 years, and in academic fields far outside physics. Dave Tweed’s verbal gymnastics about the exact definition of “hard” AI tend to obscure this valid and significant point.
«This seems true of other fields as well. Merton & Scholes’ 1997 Nobel Prize followed by the bankruptcy of Long Term Capital Management,»
————–
Maybe I’m just nitpicking but Merton & Scholes didn’t win a Nobel prize in Economics, they couldn’t since that prize doesn’t exist.
They won the 1997 Swedish Bank prize in Economics in memory of Alfred Nobel. Regardless of that, it’s considered the pinnacle of an economist’s career. Nice info I didn’t know.
You’re quite right about the Swedish prize for economics — Alfred Nobel never stipulated an award for economics. Thanks for the correction.
Er … excuse me, but Merton and Scholes were most definitely not responsible for the LTCM disaster, directly or indirectly … they were merely roped in to provide some kind of academic imprimatur to a venture that was always going to be risky. They had virtually no part in the day-to-day decision-making, which was done by Meriwether and a handful of maths PhDs he brought with him from Salomon Brothers. The Nobel-equivalent prize they won was for developing an option pricing formula based on the notion that stock prices undergo a random walk — demonstrably not what happens, but their formula nonetheless proved useful as a kind of reference point.
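For readers unfamiliar with it, the option-pricing formula in question (Black-Scholes, for a European call under the random-walk/lognormal assumption Chris mentions) is compact enough to sketch. The numerical inputs below are my own illustrative choices, not figures from this thread:

```python
import math

def norm_cdf(x):
    """Standard normal cumulative distribution, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_scholes_call(S, K, r, sigma, T):
    """Black-Scholes price of a European call option.

    S: spot price, K: strike, r: risk-free rate,
    sigma: annualized volatility, T: time to expiry in years.
    """
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

# Illustrative example: at-the-money call, 5% rate, 20% volatility, 1 year.
print(round(black_scholes_call(S=100, K=100, r=0.05, sigma=0.2, T=1.0), 2))
```

The formula’s elegance (and its wide adoption as a reference point) is exactly why its lognormal random-walk assumption, which real markets violate in crashes, became such a contested issue after LTCM.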
In Mclaren’s original comment, the “hard” modifier to AI wasn’t consistently there so it wasn’t clear whether you were claiming “hard AI” or “AI” was dead. I don’t like the name AI because it’s not agreed whether it’s merely generating results which humans generate by intelligence, or the computer should be doing this “in an intelligent way”. However, there are lots of institutes around the world that have “AI” in their names that have people working on both “hard AI” and other problems so we’re stuck with it. I was more pointing out to non-computer science readers that the situation isn’t quite as black-and-white as you suggested, and certainly your viewpoint is not universally held.
Mclaren wrote: “I think, is the hard cold fact that for both ‘hard’ AI and string theory, it’s clear to any reasonable person that these approaches have broken down.” This may well be true. However, my understanding of the criticism of string theory is that string theorists totally dominate all theoretical particle physics and quantum gravity research. This isn’t the case for “areas of research that some people could call AI”: lots of people doing non-“hard AI” approaches are working on some of these problems. It’s certainly not more difficult to get an academic job working in these areas without being a “hard AI” person, something claimed to be difficult in theoretical physics for non-string theorists.
Basically, I understand Peter’s objections to ST to be based both on technical problems in their program and on their excessive sociological power and abuses of it. The first may be true of “hard AI” but the second definitely isn’t, because in academic circles they don’t have much power to abuse. Of course the general media is a different thing, because talking about immortality or the singularity is going to get journalists’ attention much more than talking about simple model learning for data mining against credit card fraud, etc. I think this says more about journalists than it does about academic AI though.
Dave, this does not bear on your argument concerning Artificial Intelligence, but is merely a point of information.
From the statements of European non-string quantum gravitists, and my own observation of current research, I gather that the string dominance and abuse of sociological power which you mention is less of a problem outside the US.
A German PhD student now in the UK recently referred to the situation in the US as “extreme”. I believe the string virtual monopoly is primarily a millstone around the neck of US theory research, and only secondarily worldwide. In his expressed view, funding for non-string QG study and research is satisfactory in Europe and the UK.
What we are seeing is a kind of “brain drain”, as new PhDs in quantum gravity migrate out of the US. A 2006 Penn State PhD won a European (Curie) fellowship and went to postdoc in the UK. Another recent Penn State PhD went to Canada. Penn State has the only non-string QG group in the US (more than one faculty) and is the only place in the country regularly turning out non-string QG physicists.
String theory has such a stranglehold on theory research support that these young people have limited options in the US. No matter how brilliant their track record, they have to leave the US to continue in their chosen line of research. But these are precisely the people one would want to keep if one has any hope of diversifying research in US departments!
The ESF (European Science Foundation) started a special QG arm in 2006 which is already well-funded and active.
http://www.maths.nottingham.ac.uk/qg/
US postdocs are currently working in Utrecht, Marseille, several places in Canada, and the UK.
So when one speaks of the crippling of theory research due to abuse of excess power, one should probably qualify that by saying that it is progress in the US that is retarded—and not necessarily elsewhere.
Er…excuse me, Chris Oakley, but your claim is not supported by the available evidence:
Scholes’s and Merton’s involvement in LTCM has led commentators to ask whether the company’s failure reveals basic errors in the assumptions underpinning their work with Black on option pricing. Dunbar, for example, says that these assumptions are ‘flawed’ – a common conclusion in discussions of LTCM. If Dunbar and the others are correct it would be a matter of real import, given how central these techniques have become to the global financial system. But the extent of the technical dependence of LTCM’s trading on Black-Scholes-Merton reasoning is unclear until there is more information in the public domain.
http://www.lrb.co.uk/v22/n08/mack01_.html
Merton himself was interviewed on camera in the PBS Frontline special “The Trillion Dollar Bet.” He states on camera that LTCM constituted a test of his options-pricing method of eliminating risk from investments. He went on to waffle and hedge about whether the theory was wrong, or whether the boundary conditions were violated and thus it wasn’t a valid test.
The assertion that Merton and Scholes had nothing to do with LTCM’s collapse is the standard rewriting of history we find with all these kinds of huge failures nowadays. Ronald Reagan wasn’t a senile crackpot who hired an astrologer to time his White House meetings, he was The Man Who Won the Cold War. Merton and Scholes weren’t actually involved with LTCM, they were just there for publicity purposes, as window dressing. Ten years or so down the road we’ll start hearing that the Iraq war wasn’t a giant debacle but a huge success, and papers will start appearing with graphs to prove how marvellous the Iraq mess has been for the Palestinian peace process, how much it has improved the Middle East, and so on. Standard stuff.
Merton was responsible not only for inventing the Black-Scholes formula, but also for founding LTCM:
http://www.portfolio.com/views/blogs/market-movers/2007/05/22/bob-merton-sees-no-more-market-panics
Black, Scholes and Merton’s solution to the problem was far-reaching, but their basic idea was simple and elegant. They showed how to construct a ‘replicating portfolio’: a continuously adjusted set of investments in both the underlying asset and government bonds or cash that would have exactly the same pattern of returns as an option. In an efficient market, the price of the option has to equal the cost of the replicating portfolio. If those prices diverge, there is risk-free profit to be made…
http://www.lrb.co.uk/v22/n08/mack01_.html
That’s the explicit model on which LTCM claimed to have based their investments. Draw your own conclusions.
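For concreteness, the replicating-portfolio argument quoted above leads to the closed-form Black-Scholes price for a European call option. Here is a minimal Python sketch of that formula; all parameter values are purely illustrative and have nothing to do with LTCM’s actual positions:

```python
# Black-Scholes price of a European call option. Per the quoted passage,
# in an efficient market this equals the cost of the continuously
# adjusted replicating portfolio of stock and risk-free borrowing.
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """S: spot price, K: strike, T: years to expiry,
    r: risk-free rate, sigma: volatility of the underlying."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    # Replicating portfolio: hold N(d1) units of stock,
    # borrow K * exp(-rT) * N(d2) in cash.
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Illustrative example: at-the-money call, one year to expiry,
# 5% risk-free rate, 20% volatility.
print(round(black_scholes_call(100, 100, 1.0, 0.05, 0.20), 4))
```

The formula assumes, among other things, continuous frictionless hedging and constant volatility; the debate above is precisely about what happens when those boundary conditions fail.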
mclaren,
This discussion of the problems of LTCM is both completely off-topic and, from the little I know based on reading a book on the subject and talking to people in that industry, pretty far off-base. I’d really like this blog to stick to topics in math and physics I’m competent to moderate; the kind of discussion you want to engage in belongs on a very different blog.
Can’t argue with that – but McLaren, you’ll have to give me an e-mail address if you want to continue the discussion. Your comments, BTW, suggest that you have never worked in the business. I have.
Physicists believe that if their theory disagrees with experiment they can just say it hasn’t been possible to test it yet. Yet string theory has made a testable prediction: the dimension of spacetime. And that is wildly wrong. Moreover, it has long been known that physics would be impossible in any dimension but 3+1 (which is totally uninteresting since it agrees with reality). For proof see my OAIU book in the booklist on my blog, which also has discussion of many other things wrong with string theory. Science blog
impunv.wordpress.com
or
impunv.blogspot.com
Political blog
randomabsurdities.wordpress.com
The politics should be ignored except by those who like nasty remarks about George Bush.