The Amplituhedron and Twistors

Yesterday Nima Arkani-Hamed was here at Columbia, giving a theory seminar on the topic of the Amplituhedron, which characterizes certain scattering-amplitude calculations as integrals over regions in the so-called positive Grassmannian. This is a modest advance in mathematical physics, one that for some reason a few months ago garnered a lot of hype (see here for more about this).

As seems to often be the case, the Arkani-Hamed talk was a bit bizarre as an event. Scheduled to start at noon, people soon settled in with the sandwiches provided by this seminar, and he started talking at about 12:15. About an hour and a half into the talk, people were reminded that he doesn’t mind if they leave while he’s speaking. Two hours into the talk, soon after he said he was only a quarter of the way through his material, I had to leave in order to do some other things. I don’t know how long the talk actually went on. It’s too bad I didn’t get a chance to stay until the end, since he promised to then explain what the current state of progress on these calculations is.

What is being calculated are scattering amplitudes in a conformally invariant theory, with the simplest example the planar limit of tree-level amplitudes of N=4 super Yang-Mills. One wants to extend these methods to loops, to higher order terms in 1/(number of colors), and to non-conformally invariant theories like ordinary Yang-Mills (at the tree level, ordinary and N=4 super YM give the same results).

As usual, Arkani-Hamed was a clear and very engaging speaker. Also as usual though, it’s unclear why he thinks it’s a good idea not to try to fit his talk into a conventional length rather than just keep talking. One reason for the length was the extensive motivation section at the beginning, which had basically no connection at all to the topic of the talk. There was a lot about quantum gravity of an extremely vague sort. In a recent talk I wrote about here, he explains one reason why he does this: he’s describing the motivation he needs to keep doing this kind of mathematical physics. I suspect another related reason is that this kind of vague argument about quantum gravity and getting rid of space-time is all the rage, so if you’re not working on the firewall paradox, you have to justify that somehow.

Once he got beyond the motivational stuff (and a complaint about BRST: “almost anytime you hear BRST, there is something formal and complicated going on”) the talk was worthwhile and I learned a fair amount. The main thing that struck me was just how much the whole story has to do with Penrose’s twistor program. Penrose developed twistors also with a quantum gravity motivation: they provide a very different set of basic variables, with usual space-time points not the fundamental objects. Of course I was aware of some of the twistor part of the amplitudes story (see for instance here), but I was unaware of the important role played by Andrew Hodges, of Penrose’s twistor group at Oxford, in these recent developments. Hodges, besides writing a fantastic biography of Alan Turing, has worked on twistor theory for about forty years, and some of his innovations have been crucial for the recent advances on gauge theory amplitudes. One example is his “twistor diagrams”, and for more about this and how other work of his has contributed to the emerging story, see his up-to-date Twistor Diagrams website. Hodges is a wonderful example of someone who didn’t follow fashion, but stuck to pursuing something truly worthwhile; it’s great that he has now been getting attention for this, as his work has become useful for more popular research programs.

For those who keep asking about interesting, promising ideas about fundamental physics to work on, twistors are something they definitely should look into. The recent amplitudes work is one specific application of thinking in twistor variables, but the whole question of how to do quantum field theory in twistor space seems to me to still be wide open. Twistor theory involves some wonderfully different ways of thinking about four-dimensional geometry, and these seem far more likely to play some role in future advances in the direction of unification than any of the tired ones (GUTs, SUSY, string theory) that have dominated the field for so long.

Posted in Uncategorized | 5 Comments

Topcites 2013

The people at SLAC have for a long time been compiling “Topcites” data, which includes various lists of the most heavily-cited papers in HEP. From 1997 to 2003, Mike Peskin would write something each year about the significance of the lists. I first wrote a blog post about one of these lists nearly 10 years ago, about the 2003 list (see here). The 2013 list has just appeared, with a blog entry here, and the lists available here.

I haven’t written much about these lists for the past few years, since there didn’t seem to be much new to say. By the 2005 list it was becoming clear that there was so little new happening in hep-th that the list was dominated by pre-2000 papers, specifically the early AdS/CFT papers, as well as papers about speculative large extra dimension scenarios. This pattern has continued to this day. If you think citations mean something, this data shows a collapse of HEP theory having taken place sometime around mid-1999. The last two theory papers that appear in the overall heavily-cited paper list are a Randall-Sundrum paper from June 1999 and Seiberg-Witten’s String theory and non-commutative geometry (which I suspect makes the cut because “non-commutative geometry” is in the title, so this gets referenced by lots of people doing something with non-commutative geometry, even if it has little to do with this paper).

The dominance of this list and of hep-th by AdS/CFT papers is hard to exaggerate, with Maldacena’s paper long ago leaving behind every other theory paper ever written, on track to hit 10,000 citations sometime later this year. Just as “string theory” has become ill-defined, now “AdS/CFT” is starting to become ill-defined, with these 10,000 papers covering a huge variety of different things. In addition there’s a great deal of hype and ideology surrounding this subject, with an ex-Harvard faculty member and now world’s most prominent string theory blogger yesterday calling for the murder of anyone caught “talking about the AdS/CFT correspondence’s not being dependent on string/M-theory”. More positively, Matt Strassler has been writing a very long series of blog posts that appear to be aimed at sooner or later getting to AdS/CFT. He’s at number 8, but I fear at least a hundred or so would be needed to cover the subject. By the way, people who like carrying on a tedious rear-guard action in the string wars by arguing about string theory and AdS/CFT should do it at another blog.

To get more fine-grained information about what recent work is getting cited, see listings here by arXiv category. The listing here of papers cited by hep-th papers during 2013 is dominated by the old AdS/CFT papers, but more recent things that occur in the top 10 are ABJM from 2008 (3d version of AdS/CFT), a 2006 paper on entanglement entropy from AdS/CFT (now a hot topic), and a review paper on AdS/CMT.

Posted in Uncategorized | 5 Comments

Not Giving Up

One question I’ve been wondering about for the last 20 years or so has been what SUSY proponents would do when the LHC finally gathered data and found no SUSY. Would they finally admit this was an idea that hadn’t worked out, or would they never give up, no matter what the data said? The answer is now in. John Ellis was a co-organizer of a Royal Society conference earlier this week, and a report from the conference has the following:

“I think that the physics case for supersymmetry has, if anything, improved with the LHC’s first run, in the sense that, for example, supersymmetry predicted that the Higgs [boson particle] should weigh less than 130 gigaelectronvolts, and it does,” Ellis said.

“Of course, we haven’t seen any direct signs of supersymmetric particles, which is disappointing, but it’s not tragic,” Ellis added. “The LHC will shortly almost double its energy — we’re expecting eventually to get maybe a thousand times more collisions than have been recorded so far. So we should wait and see what happens at least with the next run of the LHC.”

And if the LHC’s next run does fail to reveal any sparticles, there is still no reason to give up on looking for them, he said. In that case, new colliders with even higher energies should be built, for collisions at energies as high as 100 TeV.

“I’m not giving up on supersymmetry,” Ellis told LiveScience. “Individual physicists have to make their own choices, but I am not giving up.”

So, Ellis has made his position clear: no giving up, no matter what the LHC data from the next run says.

Posted in Uncategorized | 36 Comments

Our Mathematical Universe

Max Tegmark has a new book out, entitled Our Mathematical Universe, which is getting a lot of attention. I’ve written a review of the book for the Wall Street Journal, which is now available (although it’s now behind a paywall; if you’re not a subscriber, you can try here). There’s also an old blog posting here about the same ideas.

Tegmark’s career is a rather unusual story, mixing reputable science with an increasingly strong taste for grandiose nonsense. In this book he indulges his inner crank, describing in detail an utterly empty vision of the “ultimate nature of reality.” What’s perhaps most remarkable about the book is the respectful reception it seems to be getting, see reviews here, here, here and here. The Financial Times review credits Tegmark as the “academic celebrity” behind the turn of physics to the multiverse:

As recently as the 1990s, most scientists regarded the idea of multiple universes as wild speculation too far out on the fringe to be worth serious discussion. Indeed, in 1998, Max Tegmark, then an up-and-coming young cosmologist at Princeton, received an email from a senior colleague warning him off multiverse research: “Your crackpot papers are not helping you,” it said.

Needless to say, Tegmark persisted in exploring the multiverse as a window on “the ultimate nature of reality”, while making sure also to work on subjects in mainstream cosmology as camouflage for his real enthusiasm. Today multiple universes are scientifically respectable, thanks to the work of Tegmark as much as anyone. Now a physics professor at Massachusetts Institute of Technology, he presents his multiverse work to the public in Our Mathematical Universe.

The New Scientist is the comparative voice of reason, with the review there noting that “there does seem to be something a little questionable with this vast multiplication of multiverses”.

The book explains Tegmark’s categorization of multiverse scenarios in terms of “Level”, with Level I just lots of unobservable extensions of what we see, with the same physics, an uncontroversial notion. Level III is the “many-worlds” interpretation of quantum mechanics, which again sticks to our known laws of physics. Level II is where conventional notions of science get left behind, with different physics in other unobservable parts of the universe. This is what has become quite popular the past dozen years, as an excuse for the failure of string theory unification, and it’s what I rant about all too often here.

Tegmark’s innovation is to postulate a new, even more extravagant, “Level IV” multiverse. With the string landscape, you explain any observed physical law as a random solution of the equations of M-theory (whatever they might be…). Tegmark’s idea is to take the same non-explanation explanation, and apply it to explain the equations of M-theory. According to him, all mathematical structures exist, and the equations of M-theory or whatever else governs Level II are just some random mathematical structure, complicated enough to provide something for us to live in. Yes, this really is as spectacularly empty an idea as it seems. Tegmark likes to claim that it has the virtue of no free parameters.

In any multiverse-promoting book, one should look for the part where the author explains what their scenario implies about physics. At Level II, Susskind’s book The Cosmic Landscape could come up with only one bit of information in terms of predictions (the sign of the spatial curvature), and Steve Hsu soon argued that even that one bit isn’t there.

There’s only a small part of Tegmark’s book that deals with the testability issue, the end of Chapter 12. His summary of Chapter 12 claims that he has shown:

The Mathematical Universe Hypothesis is in principle testable and falsifiable.

His claim about falsifiability seems to be based on the last page of the chapter, about “The Mathematical Regularity Prediction”, which is that:

physics research will uncover further mathematical regularities in nature.

This is a prediction not of the Level IV multiverse, but a “prediction” of the idea that our physical laws are based on mathematics. I suppose it’s conceivable that the LHC will discover that at scales above 1 TeV, the only way to understand what we find is not through laws described by mathematics, but, say, by the emotional states of the experimenters. In any case, this isn’t a prediction of Level IV.

On page 354 there is a paragraph explaining not a Level IV prediction, but the possibility of a Level IV prediction. The idea seems to be that if your Level II theory turns out to have the right properties, you might be able to claim that what you see is not just fine-tuned in the parameters of the Level II theory, but also fine-tuned in the space of all mathematical structures. I think an accurate way of characterizing this is that Tegmark is assuming something that has no reason to be true, then invoking something nonsensical (a measure on the space of all mathematical structures). He ends the argument and the paragraph though with:

In other words, while we currently lack direct observational support for the Level IV multiverse, it’s possible that we may get some in the future.

This is pretty much absurd, but in any case, note the standard linguistic trick here: what we’re missing is only “direct” observational support, implying that there’s plenty of “indirect” observational support for the Level IV multiverse.

The interesting question is why anyone would possibly take this seriously. Tegmark first came up with this in 1997, putting on the arXiv this preprint. In this interview, Tegmark explains how three journals rejected the paper, but with John Wheeler’s intervention he managed to get it published in a fourth (Annals of Physics, just before the period it published the (in)famous Bogdanov paper). He also explains that he was careful to do this just after he got a new postdoc (at the IAS), figuring that by the time he had to apply for another job, it would not be in prominent position on his CV.

One answer to the question is Tegmark’s talent as an impresario of physics and devotion to making a splash. Before publishing his first paper, he changed his name from Shapiro to Tegmark (his mother’s name), figuring that there were too many Shapiros in physics for him to get attention with that name, whereas “Tegmark” was much more unusual. In his book he describes his method for posting preprints on the arXiv, before he has finished writing them, with the timing set to get pole position on the day’s listing. Unfortunately there’s very little in the book about his biggest success in this area, getting the Templeton Foundation to give him and Anthony Aguirre nearly $9 million for a “Foundational Questions Institute” (FQXi). Having cash to distribute on this scale has something to do with why Tegmark’s multiverse ideas have gotten so much attention, and why some physicists are respectfully reviewing the book.

A very odd aspect of this whole story is that while Tegmark’s big claim is that Math=Physics, he seems to have little actual interest in mathematics and what it really is as an intellectual subject. There are no mathematicians among those thanked in the acknowledgements, and while “mathematical structures” are invoked in the book as the basis of everything, there’s little to no discussion of the mathematical structures that modern mathematicians find interesting (although the idea of “symmetries” gets a mention). A figure on page 320 gives a graph of mathematical structures which a commenter on mathoverflow calls “truly bizarre” (see here). Perhaps the explanation of all this is somehow Freudian, since Tegmark’s father is the mathematician Harold Shapiro.

The book ends with a plea for scientists to get organized to fight things like

fringe religious groups concerned that questioning their pseudo-scientific claims would erode their power.

and his proposal is that

To teach people what a scientific concept is and how a scientific lifestyle will improve their lives, we need to go about it scientifically: we need new science-advocacy organizations that use all the same scientific marketing and fund-raising tools as the anti-scientific coalition employ. We’ll need to use many of the tools that make scientists cringe, from ads and lobbying to focus groups that identify the most effective sound bites.

There’s an obvious problem here, since Tegmark’s idea of “what a scientific concept is” appears to be rather different than the one I think most scientists have, but he’s going to be the one leading the media campaign. As for the “scientific lifestyle”, this may be unfair, but while I was reading this section of the book my twitter feed was full of pictures from an FQXi-sponsored conference discussing Boltzmann brains and the like on a private resort beach on an island off Puerto Rico. Is that the “scientific lifestyle” Tegmark is referring to? Who really is the fringe group making pseudo-scientific claims here?

Multiverse mania goes way back, with Barrow and Tipler writing The Anthropic Cosmological Principle nearly 30 years ago. The string theory landscape has led to an explosion of promotional multiverse books over the past decade, for instance

  • Parallel Worlds, Kaku 2004
  • The cosmic landscape, Susskind, 2005
  • Many worlds in one, Vilenkin, 2006
  • The Goldilocks enigma, Davies, 2006
  • In search of the Multiverse, Gribbin, 2009
  • From eternity to here, Carroll, 2010
  • The grand design, Hawking, 2010
  • The hidden reality, Greene, 2011
  • Edge of the universe, Halpern, 2012

Watching these come out, I’ve always wondered: where do they go from here? Tegmark is one sort of answer to that. Later this month, Columbia University Press will publish Worlds Without End: The Many Lives of the Multiverse, which at least is written by someone with the proper training for this (a theologian, Mary-Jane Rubenstein).

I’m still though left without an answer to the question of why the scientific community tolerates if not encourages all this. Why does Nature review this kind of thing favorably? Why does this book come with a blurb from Edward Witten? I’m mystified. One ray of hope is philosopher Massimo Pigliucci, whose blog entry about this is Mathematical Universe? I Ain’t Convinced.

For more from Tegmark, see this excerpt at Scientific American, an excerpt at Discover, and this video, this article and interview at Nautilus. There’s also this at Huffington Post, and a Facebook page.

After the Level IV multiverse, it’s hard to see where Tegmark can go next. Maybe the answer is his very new Consciousness as a State of Matter, discussed here. Taking a quick look at it, the math looks quite straightforward, his claims that it has something to do with consciousness much less so. Based on my time spent with “Our Mathematical Universe”, I’ll leave this to others to look into…

Update: Scott Aaronson has a short comment here.

Posted in Book Reviews, Multiverse Mania | 125 Comments

What Scientific Idea is Ready for Retirement?

Every year John Brockman’s Edge web-site hosts responses to a different question. This year the question was What scientific idea is ready for retirement?. It shouldn’t be too hard to guess what I chose to write about, with results available here.

Every year Brockman manages to attract more responses, so this is now getting to be a statistically significant sampling drawn from the population of people who write about science for the general public. Before trying to divine some general trends among the physics responses, I’ll first mention a few of them that stand out as unusual.

First, there’s one from Paul Steinhardt that I very much agree with. He’s had it with the multiverse and thinks it needs to go. I’m very glad to see someone else making many of the points that I endlessly repeat on this blog in a tiresome way. So, go read what he has to say, which ends with this challenge to the theoretical physics community:

I think a priority for theorists today is to determine if inflation and string theory can be saved from devolving into a Theory of Anything and, if not, seek new ideas to replace them. Because an unfalsifiable Theory of Anything creates unfair competition for real scientific theories, leaders in the field can play an important role by speaking out—making it clear that Anything is not acceptable—to encourage talented young scientists to rise up and meet the challenge.

It would be great to see someone other than him and David Gross start publicly speaking out.

A second outlier is Gordon Kane, who uses this as an opportunity to claim that he had predicted the Higgs mass using string theory. I don’t know of anyone other than him who takes this seriously. He doesn’t mention his other string theory based predictions, which include the prediction that the LHC should already have seen gluinos.

Another odd one is from Max Tegmark, who argues that we have to get rid of equations in physics that aren’t just based on finite and discrete quantities. The only positive argument I can see from him for this is that it would help get rid of the “measure problem” of the multiverse, but listening to Steinhardt and dumping the multiverse itself seems to me a much better idea. Tegmark has a new book out, I’ll write more about this here in a few days.

Maria Spiropulu is with me on the need to retire naturalness, and also wants space-time to go. Getting rid of space-time has multiple proponents, including also Steve Giddings and Carlo Rovelli.

Another theme is people starting to sound like John Horgan, announcing we’re reaching the limits of science. Martin Rees thinks that some scientific problems may never yield to our understanding: “The human intellect may hit the buffers”. Ed Regis thinks the cost of a next generation collider is just not worth it for what it is likely to tell us.

A variant of this is the argument that we’ve reached the end of the road for unification and simplicity in our basic physical laws. Here the argument often seems to be that since SUSY/GUTs/string theory were such beautiful elegant ideas, their failure means the whole elegance thing is misguided. Another point of view (which I think someone wrote a book about) would be that these always were heavily oversold as “elegant”, since if you looked into them they were rather complicated and didn’t explain much. Writing in the anti-elegance vein are experimentalist Sarah Demers:

It is time for us to admit that some of the models we have been chasing from our brilliant theory colleagues might actually be (gorgeous) Hail Mary passes to the universe.

along with Marcelo Gleiser and Gregory Benford. At this particular time in intellectual history, it seems that hardly anyone has anything good to say about mathematical elegance as a powerful principle behind deep ideas about physics.

Finally, the biggest contingent are the multiverse maniacs. There’s Andrei Linde, who deals with the problem of evidence for his ideas by:

A pessimist would argue that since we do not see other parts of the universe, we cannot prove that this picture is correct. An optimist, on the other hand, may counter that we can never disprove this picture either, because its main assumption is that other “universes” are far away from us.

He’s joined by Sean Carroll, who wants to do away with the Popperazi and their inconvenient demands for falsifiable predictions. Also writing in support of the idea of a multiverse of different physical laws, implying we’ll have to give up on the idea of understanding more about the laws we see, are Lawrence Krauss and Seth Lloyd.

Update: A couple more late additions that I missed. Eric Weinstein is with me in going after “string theory is the only game in town” as something that should have been retired long ago. Alan Guth uses this venue to promote some recent speculative work on the arrow of time with Sean Carroll (no paper yet, so hard to tell what it really is).

Update: Sean Carroll has a blog posting up about his argument for getting rid of falsifiability. He seems to not be getting a lot of support, either in his comment section (see for instance here), or places like here. I don’t think the skeptic community is ready to disarm itself intellectually in arguments against religious believers by ditching the conventional scientific method.

Update: Scott Aaronson writes here about the falsifiability issue, pointing out about string theory/multiverse research that

I wouldn’t know how to answer a layperson who asked why that wasn’t exactly the sort of thing Sir Karl was worried about, and for good reason.

Sean Carroll responds that the problem here is

somber pronouncements about non-falsifiability from fuddy-duddies.

Posted in Multiverse Mania | 70 Comments

Scientists Find a Practical Test for String Theory

This sort of thing seemed to be dying down (2013 required a record low number of “This Week’s Hype” postings), but 2014 is starting off with the usual promotion by physicists of nonsense about how they have “found a test for string theory”. This time the news that Scientists find a practical test for string theory comes from a group at Towson University, who are basing their claims on this paper, published here. I’m not sure where phys.org got this, but it reads like a university press release, and they credit “Provided by Towson University”.

What’s actually in the paper is a proposal for a test (and not a very good one, as far as I can tell…) of the equivalence principle. The claim is then that a violation of the equivalence principle would be evidence for string theory. I’ve written about this kind of claim before (see here), pointing out that string theorists sometimes argue that the equivalence principle is a prediction of string theory. So, string theory can be tested, and the test is even “practical”, but since the prediction is that either the equivalence principle will be violated or not, it’s pretty likely to pass the test.

Update: Another source for the press release is here.

Update: Matt Strassler weighs in, a week later:

Baloney. Hogwash. Garbage.

Posted in This Week's Hype | 46 Comments

Short Items

  • Harvard has announced that the Chinese firm Evergrande Group will be supporting various activities at Harvard, including a new Center for Mathematical Sciences and Applications, with S.-T. Yau as director. No details of what the center will do other than “serve as a fusion point for mathematics, statistics, physics, and related sciences.” The company has its own announcement here (they might want to check on the name of Harvard’s President…).
  • The new Physics Today has an article Paul Ehrenfest’s final years, a sad bit of physics history I’d never seen the details of.
  • Last month in Moscow there was a conference for Boris Feigin’s 60th birthday. Videos of the talks are now available here.
  • Dick Gross’s wonderful lecture series here at Columbia on Representation theory and number theory has been available on video since he gave the lectures. Now Chao Li at Harvard has produced a transcription of the talks, so a high-quality written version of the material of the lectures is now available. This is one of the best sources around to learn about the local Langlands conjectures. His website contains a lot of other interesting expository material.
  • Phenomenologist Jay Wacker has a blog at Quora, called Particle Physics Digressions. The latest entry is an odd tale of something I would have thought was rather unusual, but Wacker says it’s not exceptional and happens every day.
Posted in Uncategorized | 20 Comments

Acknowledgments

Two of the prominent string theorists working on ideas about holography and cosmology featured in Amanda Gefter’s new book are Tom Banks and Willy Fischler, who have a new paper out on the subject, entitled Holographic Space-time and Newton’s Law. Besides the usual sort of thing, this paper contains a rather unusual acknowledgments section (hat-tip, the Angry Physics blog):

The work of T.B. was supported in part by the Department of Energy. The work of W.F. was supported in part by the TCC and by the NSF under Grant PHY-0969020. However, the authors do not thank either of these agencies, nor their masters, for the caps placed on their summer salaries, nor for the lack of support of basic research in general.

It seems that while debating philosophical issues concerning holography and cosmology can put one at the upper end of the current academic star system pay scale, it doesn’t stop one from getting embittered that it’s not enough. The authors did revise this text a few days later to remove the complaints.

For those who don’t know what this is all about, prominent theoretical physicists (and mathematicians) in the US generally have research grants that pay them not only research expenses, but “summer salary”. Historically, the reasoning behind this was that academics needed to teach during the summer to make ends meet, so agencies like the NSF would get them more time to do research by paying them to not teach. That was long ago, in a distant era. At this point the typical sums universities pay for summer courses are so much smaller than the academic-year salaries of successful senior academics that few would consider dramatically increasing their teaching load this way to make a little extra money.

Taking the NSF as an example, the standard computation is that an academic’s salary is considered just pay for nine months, with the NSF allowing grants to pay for up to two months of summer salary. In other words, grant applications can include a request for 2/9ths of a person’s salary, to be paid as additional compensation in return for not teaching summer school. As the salaries of star academics (who are the ones most likely to get grants) have moved north of $200K/year, these additional salary amounts have gotten larger and larger, crowding out the other things grants pay for (post-doc salaries and grad-student support are the big items).

Several years ago the mathematics part of the NSF instituted a “salary cap” on these payments, limiting them to about $25K/year. This year, in response to declining budgets, such a cap was put on payments to theoretical physicists, at $15K/month. So, any theorist with an academic year salary of over $135K/year saw a reduction in their additional compensation (although as far as I know only two were so outraged by this that they complained in the acknowledgments sections of their papers). The report of this year’s panel on the future of particle theory in the US includes the language:

This past year, the DOE instituted caps on summer salaries, and the NSF is following suit. We agree that this is preferable to further cuts in student and postdoctoral support, but it should be noted that still lower caps will have implications for research productivity, particularly if they reach the level of junior faculty (assistant or associate professor salaries). Many researchers may have to supplement their income with further teaching or other responsibilities in the summers.

Since Banks and Fischler work at public universities, one can check for oneself that they are seriously impacted by the new caps. Fischler is at the University of Texas; Banks has positions at UC Santa Cruz and Rutgers (I have no idea how the two institutions split his salary). Some of the grant information is also publicly available, for instance the NSF grant referred to in the acknowledgment is this one. It expires soon, but was supposed to provide \$690K over three years, presumably including summer salary for Fischler, Weinberg and three others. One anomaly here is Weinberg, who at over \$500K/year is likely the highest-paid theorist in the US. The same people have a new grant recently awarded, for \$220K.

Posted in Uncategorized | 32 Comments

Trespassing on Einstein’s Lawn

Amanda Gefter, a science writer who has often covered theoretical physics topics for New Scientist, has a new book coming out soon, Trespassing on Einstein’s Lawn. On one level it’s a memoir, telling a story that begins with her father getting her interested in fundamental questions about physics. This led to a career interviewing well-known physicists and writing about these topics, and now, a book. Self-reflexivity is a major theme: the book tells in detail the story of its own genesis and creation.

In many ways, it’s comparable to last year’s book by Jim Holt, Why Does the World Exist?, with both books motivated by versions of the question “Why is there something rather than nothing?” In both books, there’s a memoir aspect, with the author front and center in a search for answers that involves meetings and discussions with great thinkers. For Holt, these were mostly philosophers with a few physicists thrown in, while for Gefter they’re mostly physicists, with a few philosophers making an appearance. These are lively, entertaining writers with wonderful material about deep questions, and I greatly enjoyed both books. Gefter is the funnier of the two, and I had trouble putting the book down after it arrived in my mail a couple days ago.

For those familiar with the topics she covers, the descriptions of her encounters with famous physicists are what will most likely provide something new. A few examples:

  • She somehow managed to get to moderate a private debate between Lenny Susskind and David Gross, mainly on the topic of the multiverse. Much of the result is familiar to anyone following the topic over the last ten years (Gross detests the multiverse, Susskind is madly in love with it), but one interesting aspect is Gross’s comparison of Susskind’s behavior to his own back in 1984-5:

    What I’m saying… is that some of the reaction is exactly like the reaction I got for exuberance in 1984, when we believed the answer was around the corner and we got carried away with that position. And, Lenny, you are carried away with this position. The stakes are damn big. So you are open to severe criticism.

    So, it seems that Gross is accusing Susskind of engaging in hype deleterious to physics, while acknowledging that he did much the same thing to get string theory unification off the ground and widely accepted.

  • Several string theorists pointed out that strings themselves have pretty much disappeared from the story. The emphasis is now on the holographic principle and the hope for some unknown M-theory that embodies it. About M-theory, Polchinski has this to say:

    It’s remarkable to know so much about many limits and yet have no good idea of what they are limits of! Holography is clearly part of the answer. The fundamental variables are probably very nonlocal, with local objects emerging dynamically.

    Witten tells Gefter that the “M” in “M-theory” really was intended to refer to membranes. He doesn’t see much happening though as far as new ideas about understanding it:

    …in the mid-eighties and mid-nineties, before the second revolution happened, there were kind of hints that something was going to happen – I didn’t know what, of course. I don’t have that feeling now, but perhaps other people do… If I had my druthers I’d like to go deeper into what’s behind the dualities, but that’s really hard.

  • John Wheeler plays a large role in Gefter’s story, which starts with her asking him a question at this conference in 2002. She has a fascinating description of Wheeler’s journals, which have been preserved in Philadelphia, where she and her father spent quite a lot of time looking through them.

The list of interviewees includes also Kip Thorne, Raphael Bousso, Tom Banks, and Carlo Rovelli.

Gefter makes it clear that she started out with essentially no background in physics or math, other than enthusiasm shared with her father for speculation about “nothingness” and the like. She studied not physics, but philosophy of science at the London School of Economics. Despite this lack of technical training, she does a good job of accurately characterizing what the physicists she talked to had to say. Towards the end the book does suffer a bit as she moves away from reporting what others are telling her to expounding her own interpretation of what it all means.

While I liked the book, at the same time I found the whole project deeply problematic, and would have reservations about recommending it to many people, especially to the impressionable young. The part of physics that fascinates Gefter is the part that has gone way beyond anything bound by the conventional understanding of science. This is really and truly “post-modern physics”, completely unmoored from any connection to experiment (the discovery of the Higgs in the middle of the period she is writing about just gets a short footnote). The questions being discussed and answers proposed are woolly in the extreme, focused on issues at the intersection of cosmology and quantum mechanics, suffering from among other things our lack of a convincing quantum theory of gravity. Gefter seems to be sure that the problem of quantum gravity is an interpretational one of how to talk about a quantum cosmology where observers are part of the system. The very different, much more technical issue of how to consistently quantize metric degrees of freedom in a unified way with the Standard Model fields is ignored, perhaps with the idea that this has been solved by string theory.

Not recognizing that this post-modern way of doing science is deeply problematic and leading the field into serious trouble isn’t so much Gefter’s fault as that of the experts she speaks to (David Gross is an exception). Those taking the field down this path are dominating public coverage of the subject, and often finding themselves richly rewarded for engaging not in sober science but in outrageous hype of dubious and poorly-understood ideas. Only the future will tell whether the significance of this book will end up being that of an entertaining tale of some excesses from a period when fundamental physics temporarily lost its way, or a sad document of how a great science came to an end.

Posted in Book Reviews | 11 Comments

Trust the math? An Update

Back in September, I wrote here about Snowden’s revelations confirming suspicions that back in 2005-6 NSA mathematicians had compromised a NIST standard for elliptic-curve cryptography. The new standard was promoted as an improvement using sophisticated mathematical techniques, when these had really just been used to introduce a backdoor allowing the NSA to break encryption based on this standard. There still does not seem to have been much discussion in the math community of the responsibility of mathematicians for this (although the AMS this month is running this opinion piece).

After my blog post, some nice detailed descriptions of how this was done and the mathematics involved appeared. See for instance The Many Flaws of Dual_EC_DRBG by Matthew Green, and The NSA back door to NIST by Thomas Hales. The Hales piece will appear soon in the AMS Notices. Hales also has a more recent piece, Formalizing NIST Standards, which argues for the use of formal verification methods to check such standards. Also appearing after my blog post was the news that RSA Security was now advising people not to use one of its products in default mode, the BSAFE toolkit.
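The mechanism those pieces describe can be illustrated with a toy sketch. Everything below is hypothetical for illustration: a tiny curve rather than the real NIST P-256 parameters, a simplified variant of the generator, and no output truncation (the actual standard truncates some output bits, which only adds a small brute-force step for the attacker). The idea is that the standard publishes two points G and Q; whoever secretly chose Q as a multiple d of G can recover the generator’s internal state from one output:

```python
import math

# Toy curve y^2 = x^3 + x + 1 over F_233 (hypothetical parameters).
p, a, b = 233, 1, 1

def inv(x, m):
    return pow(x, -1, m)

def ec_add(P1, P2):
    """Elliptic-curve point addition; None is the point at infinity."""
    if P1 is None: return P2
    if P2 is None: return P1
    (x1, y1), (x2, y2) = P1, P2
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P1 == P2:
        lam = (3 * x1 * x1 + a) * inv(2 * y1, p) % p
    else:
        lam = (y2 - y1) * inv(x2 - x1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def ec_mul(k, P):
    """Scalar multiplication by double-and-add."""
    R = None
    while k:
        if k & 1: R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

# Base point G found by brute force, and its order n (fine at this size).
G = next((x, y) for x in range(p) for y in range(p)
         if (y * y - (x**3 + a * x + b)) % p == 0)
n, R = 1, G
while R is not None:
    R = ec_add(R, G)
    n += 1

# The "designer" secretly picks d and publishes Q = d*G alongside G.
d = next(k for k in range(2, n) if math.gcd(k, n) == 1)
Q = ec_mul(d, G)

def drbg_step(s):
    """One simplified round: next state x(s*G), public output x(s*Q)."""
    return ec_mul(s, G)[0], ec_mul(s, Q)[0]

s0 = n - 2                   # secret seed
s1, out = drbg_step(s0)      # an eavesdropper sees only `out`

# Backdoor: `out` is the x-coordinate of s0*Q. Lifting it back to a curve
# point and multiplying by d^{-1} mod n gives s0*G, whose x-coordinate is
# the next internal state -- the generator is fully predictable from here.
d_inv = inv(d, n)
candidates = {ec_mul(d_inv, (out, y))[0]
              for y in range(p) if (y * y - (out**3 + a * out + b)) % p == 0}
assert s1 in candidates
```

An ordinary user sees only the published points G and Q and has no way to tell whether Q was generated with a known discrete logarithm d, which is exactly why the standard’s refusal to explain Q’s provenance drew suspicion.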

One mystery that remained was why NIST had promulgated a defective standard, knowing full well that experts were suspicious of it. Also unclear was why RSA Security would include a suspect standard in its products. Back in September, RSA told people (see here) that they had done this because:

The hope was that elliptic curve techniques—based as they are on number theory—would not suffer many of the same weaknesses as other techniques

and issued a statement saying:

RSA always acts in the best interest of its customers and under no circumstances does RSA design or enable any backdoors in our products. Decisions about the features and functionality of RSA products are our own.

Today there are new revelations about this (the source is unclear), which explain what helped RSA swallow the bogus mathematics: a payment from the NSA of \$10 million. I guess there’s a lesson in this: when you can’t figure out why someone went along with a bad mathematical argument, maybe it’s because someone else gave them \$10 million…

Update: For another explanation of the math behind this, see videos here and here featuring Edward Frenkel.

Update: There’s a response to the Reuters story from RSA here. As I read it, it says

  • They do have a secret contract with the NSA that they cannot discuss
  • They used the NSA back-doored algorithm in their product because they trust the NSA
  • They didn’t remove it when it became known because they really are incompetent, not because the NSA was paying them to act incompetent

It’s hard to see why anyone would now trust their products.

Posted in Uncategorized | 27 Comments