Bogdanov Update

Earlier this year the Bogdanov brothers published Le Visage de Dieu, yet another book dealing with their ideas about the Big Bang and pre-Big Bang physics. This follows their earlier Avant le Big Bang, and L’équation Bogdanov : Le secret de l’origine de l’Univers?, a book that they somehow got Lubos Motl to take credit for. I haven’t seen the new book, but it seems that it takes off from George Smoot’s comment that looking at the CMB was like “seeing the face of God.” Somehow the brothers have managed to get the well-known cosmologists Robert Wilson, Jim Peebles and John Mather to contribute pieces to the book.

In other Bogdanov news, I hear from a Paris correspondent that an interesting document has recently come to light: a report commissioned by the CNRS after the Bogdanov Affair made headlines back in 2002. For press coverage of this, see here, here and here. The CNRS critique, from November 2003, is detailed, harsh and devastating. For instance, about Igor Bogdanov’s thesis (which ended up being published in multiple versions in several respectable journals, including the highly thought-of Classical and Quantum Gravity), the conclusion is “la valeur de ce travail est nulle”: this work is of no value.

Update: For another informative article about the Bogdanovs, from this summer, see here.

Posted in Uncategorized | 22 Comments

The Gathering Storm: Category 5

Back in 2005 an illustrious group was organized to produce a report addressing the state of science and technology in the United States, resulting in what became known as the “Gathering Storm” report, after its title, Rising Above the Gathering Storm. This report recommended that various steps be taken to increase the number of science Ph.D.s produced in the US and going into the US labor market (while noting that there was no evidence of a shortage of such Ph.D.s).

Last month the group was back, now claiming that the gathering storm has become a hurricane of nearly category 5 intensity, with a new report entitled Rising Above the Gathering Storm, Revisited: Rapidly Approaching Category 5. Yesterday they appeared before the House Science and Technology Committee. In an analysis of what happened to their recommendations, they noted that the call for more Ph.D.s had been effective, with the NSF spending $475 million on graduate student funding during FY 2009-2010. As for the effect of this on their goal of more well-paid jobs for Americans, here’s what they had to say:

A paradox exists in the debate over whether there is a shortage of scientists and engineers or whether there are too many scientists and engineers for the jobs that are available. Most business leaders maintain the former; however, with regard to the more “conventional” functions of these fields it may well be that de facto there can no longer be domestic shortages of scientists and engineers. Firms facing this proposition are simply moving work elsewhere. Similarly, the observation that many scientists and engineers elect to pursue careers in other fields is in many instances simply reflective of the value placed on education in these disciplines by business, law, and medical schools and related employers and should not necessarily be decried. However, if the sole purpose of a PhD in science is considered to be to prepare future educators in science, then a surplus of scientists (often evidenced as a surplus of Post-Doctorate researchers) seems inevitable. The Gathering Storm recommendations are based upon the premise that federal investment in research must be doubled (the report’s second highest priority recommendation)—in which case there will be commensurate increases in demand for researchers . . . and not solely for the purpose of providing educators.

The idea seems to be that, while there’s no Ph.D. shortage at the moment, Congress will double funding for scientific research over the next few years, so just maybe there could be a shortage in the future, and this must be addressed right now.

As for the “paradox” that business leaders see a shortage of the kind of trained scientists and engineers they would like to hire at the wages they would like to pay, it appears to be the same paradoxical shortage I regularly encounter of first-class plane tickets to Paris available at the price I would like to pay for them.

In the real world, the latest Notices of the AMS has data showing the number of mathematics graduate students increasing from 10,883 in fall 2008 to 11,268 last fall. The situation graduating students face is described as:

The job market for doctoral mathematicians took a decided turn for the worse during the 2008-2009 hiring season. For all mathematics departments combined, the number of full-time positions under recruitment during 2008-2009 for employment beginning in fall 2009 decreased 27%, dropping to 1,464 from 2,012 reported last year. This is the smallest number of such positions reported since 1997, when it was 1,246. The number of tenured/tenure-track positions under recruitment during this period was 930, down 23% from the previous year’s figure of 1,213. The number of full-time positions filled was 1,274, with 710 of these tenured/tenure-track positions. These figures are down 30% and 27%, respectively, from the figures reported for the 2007-2008 hiring season.

For all mathematics departments combined, the number of new doctoral recipients hired for positions beginning in fall 2009 was down 13% from the previous year’s number, to 656. Likewise, there was a decrease in the number of new doctoral recipients obtaining tenure-track positions for fall 2009 with 301 such hirings reported compared to 378 reported for fall 2008.

Posted in Uncategorized | 21 Comments

Various and Sundry

Yet another random collection of topics of possible interest:

  • Things have been going well at the LHC recently, and there’s a new dashboard page at which progress can be followed. For the latest from the LHC, see the talks at last week’s LHC Days in Split. The LHC machine is discussed here, with the news that restart of proton-proton collisions (after a November 1 stop for a heavy-ion run and holiday shutdown) is tentatively scheduled for February 4. Long-term plans for the machine are covered here, including projected running at 6.5 TeV/beam in 2013, 7 TeV/beam in 2014, and a possible rebuilding of the machine with new magnets that would give 16.5 TeV/beam in 2031.

    Now that the experiments have 10 inverse picobarns of data, the search for supersymmetry can begin in earnest, and various talks cover this. According to Maria Spiropulu of CMS “the time between O(10) and O(100) inverse picobarns of well-understood data will be critical for the discovery and characterization of SUSY”.

    The other hot topic is how well the LHC will be able to compete with the Tevatron in the race to discover the Higgs. Tommaso Dorigo discusses this here and here, using LHC projections given here.

  • In news of non-scientific projects by mathematicians and physicists, Edward Frenkel has a screenplay out called The Two-Body Problem. Lisa Randall has curated an exhibition in LA entitled Measure for Measure.
  • Frank Wilczek is working on a murder mystery novel to be called The Attraction of Darkness, which will mix “science, music, sex, and murder.” There was a recent Bloggingheads conversation with him here. His response when asked about his take on string theory: “It needs work.”
  • For commentary from Charles Day, an editor at Physics Today, about why their coverage of string theory has been sparse, see here. For Clifford Johnson’s commentary on the commentary, see here.
  • Later this month the Princeton Center for Theoretical Science will have a workshop on Rare Events in Computational, Financial and Physical Sciences, co-sponsored by the hedge fund D.E. Shaw. D.E. Shaw has been a large employer of mathematicians and physicists, but recently hasn’t been doing so well, announcing the firing of about 10% of its staff.

    The new documentary about the financial crisis that just came out, Inside Job, is surprisingly good; I highly recommend it.

  • Past proceedings of the International Congress of Mathematicians are now available on-line here.
Posted in Uncategorized | 16 Comments

Grading String Theory

Commenter Shantanu points to this video of a recent colloquium by Andy Strominger at Harvard, which includes some extensive comments on the current state of string theory. Strominger is one of the most prominent string theorists in the business, and has been working in the field for more than a quarter-century, since the first “Superstring Revolution” of 1984. In the talk (at about the 52 minute point), Strominger gives a “report card” for string theory, assigning it 3 As, 2 Bs, 3 Ds and 2 Fs, for an average grade of about C. It gets an F for making no unambiguous testable predictions, a D for its prospects of saying anything about the LHC, an F for the cosmological constant (Strominger isn’t sold on the anthropic landscape) and a D for cosmology. Some of the high grades are debatable, with an audience member pointing out a tension between the A for “Not being ruled out as theory of nature” and the F for no testable predictions. Strominger repeatedly claimed that most string theorists would agree with him on these grades (except maybe the F for the cosmological constant).
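On the standard 4-point scale (A=4, B=3, C=2, D=1, F=0; my assumption, since the talk doesn’t specify weights), the arithmetic behind that “about C” works out as

\[
  \frac{3\cdot 4 + 2\cdot 3 + 3\cdot 1 + 2\cdot 0}{10} \;=\; \frac{21}{10} \;=\; 2.1 ,
\]

a bit above a straight C.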

As an overall evaluation, he said that it was debatable whether this was a passing or failing report card, but argued:

But this is the only student in class, so if you flunk her you have to shut the school down.

Along the same lines, a bit earlier in the talk one of his slides characterizes all that theorists can do as “go home and watch TV”, whether they believe in the landscape (“String theory is everything”) or think string theory is a failure (“String theory is nothing”). The positive argument he was trying to make is that there still is something for string theorists to do even after they are forced to give up on particle physics: they can try applying AdS/CFT and black holes to other areas of physics (nuclear physics, solid state physics, fluid mechanics).

I think Strominger is right that his grades and point of view about string theory are now conventional wisdom among leading theorists. What I find striking is the argument that if you are forced to give up on string theory, you have to “shut the school down” or “go home and watch TV”. More than 25 years of working on string theory has left Strominger and others somehow believing that there is no conceivable alternative. The failure of string theory as a theory of particle physics leads them to the conclusion that they must not abandon string theory, but instead must abandon particle physics and try to apply string theory to other fields. The obvious conclusion, that string theory is just one speculative idea whose failure just means you have to try others, is one they still do not seem willing to face up to.

Posted in Uncategorized | 42 Comments

NRC Rankings

One thing I’ve learned in life is that human beings are creatures very much obsessed with social hierarchy, and academics are even more obsessed with this than most people. So any public rankings that involve oneself and the institutions one is part of tend to have a lot of influence. In the US, US News and World Report ranks the “Best Colleges” each year; see the rankings for National Universities here. My university’s administration tended in the past to express skepticism about the value of this ranking, which typically put us in a tie at 8/9 or 8/9/10. This year, however, everyone here agrees that there has been a dramatic improvement in methodology, since we’re at number 4.

For most academics though, the ranking that really matters is not of how good a job one’s institution does in training undergraduates, but of the quality of research in one’s academic field. Where one’s department fits in this hierarchy is crucial, affecting one’s ability to get grants, how good one’s students are and whether they can get jobs, even one’s salary. The gold standard has been the National Research Council rankings, which were supposed to be revised about every ten years. It turns out though that the task of producing these rankings has somehow become far more complex and difficult, with more than fifteen years now elapsed since the last rankings in 1995. Since 2005 there has been a large and well-funded project to generate new rankings, with a release date that keeps getting pushed back. Finally, last year a 200-page book was released, entitled A Guide to the Methodology of the National Research Council Assessment of Doctorate Programs, but still no rankings.

Recently the announcement was made that all will be revealed tomorrow at a press conference to be held in Washington at 1pm EDT. I hear rumors that university administrations have been privately given some of the numbers in advance, to allow the preparation of appropriate press releases (see here for an example of a university web-site devoted to this issue).

The data being used was gathered back in 2005-2006, and the five intervening years of processing mean that it is rather stale, since many departments have gained or lost academic stars and changed a lot during these years. So, no matter what happens, a good excuse for ignoring the results will be at hand.

Update: University administrations have now had the data for a week or so and are providing it to people at their institutions. For an example of this, see the web-site Berkeley set up here. All you need is a login and password….

Update: (Via Dave Bacon) Based on the confidential data provided to them last week, the University of Washington Computer Science and Engineering department has released a statement characterizing this data as having “significant flaws”, and noting that:

The University of Washington reported these issues to NRC when the pre-release data was made available, and asked NRC to make corrections prior to public release. NRC declined to do so. We and others have detected and reported many other anomalies and inaccuracies in the data during the pre-release week.

The widespread availability of the badly flawed pre-release data within the academic community, and NRC’s apparent resolve to move forward with the public release of this badly flawed data, have caused us and others to urge caution – hence this statement. Garbage In, Garbage Out – this assessment is based on clearly erroneous data. For our program – and surely for many others – the results are meaningless.

The UW Dean of the College of Engineering has a statement here in which he claims that, despite five years of massaging, the NRC data contained obvious nonsense, such as the statistic that 0% of their graduating CS Ph.D. students had plans for academic employment during 2001-5.

Update: Boston University has broken the embargo with this press release. It gives a chart showing that almost all of BU’s graduate programs have dramatically improved their ranking since the 1995 rankings, while noting that the two rankings are based on different criteria (the NRC says you can’t compare them) and that the 2010 rankings in the chart are not NRC numbers but the result of BU’s own massaging of the data. I suspect that the NRC data will be used to show that, like the kids in Lake Wobegon, all programs are above average.

For more on this story, see coverage by Steinn Sigurdsson at Dynamics of Cats.

Update: The NRC data is out, and available from its web-site. But no one really cares about that; all they care about are the rankings, and the NRC is not directly putting those out. Instead, they’ve subcontracted the dirty work to phds.org, where you can get rankings here, using either the “regression-based” or the “survey-based” score. In mathematics and physics, the lists you get are about what one would expect, with perhaps somewhat of a tilt towards large state schools compared to the 1995 rankings (most dramatically, Penn State was number 36 in 1995, and number 8 or 9 this year).

Posted in Uncategorized | 19 Comments

The End of Time

I’ve been critical of multiverse pseudo-science because it doesn’t make any testable predictions, but it seems that tonight there really is one. According to this new preprint, multiverse arguments guarantee that time will end, with the expected amount of time left before the end about 5 billion years, and

There is a 50% chance that time will end within the next 3.3 billion years.

The argument seems to be that multiverse calculations require introducing an artificial cut-off to get finite numbers, so the cut-off must really be there, and we’re going to hit it relatively soon on cosmological time scales. The age of the universe is about 13.75 billion years, but we’re getting near the end, already entering the late-middle-age to senior-citizen time frame. One interpretation given of this result is that:

we are being simulated by an advanced civilization with a large but finite amount of resources, and at some point the simulation will stop.

It turns out that you don’t even need the whole apparatus of eternal inflation to see that time is going to end. All you need to do is think about sleeping and waking up, which, according to the paper, leads to the “Guth-Vanchurin” paradox:

Suppose that before you go to sleep someone flips a fair coin and, depending on the result, sets an alarm clock to awaken you after either a short time or a long time. Local physics dictates that there is a 50% probability to sleep for a short time since the coin is fair. Now suppose you have just woken up and have no information about how long you slept. It is natural to consider yourself a typical person waking up. But if we look at everyone who wakes up before the cutoff, we find that there are far more people who wake up after a short nap than a long one. Therefore, upon waking, it seems that there is no longer a 50% probability to have slept for a short time.

How can the probabilities have changed? If you accept that the end of time is a real event that could happen to you, the change in odds is not surprising: although the coin is fair, some people who are put to sleep for a long time never wake up because they run into the end of time first. So upon waking up and discovering that the world has not ended, it is more likely that you have slept for a short time. You have obtained additional information upon waking – the information that time has not stopped – and that changes the probabilities.

However, if you refuse to believe that time can end, there is a contradiction. The odds cannot change unless you obtain additional information. But if all sleepers wake, then the fact that you woke up does not supply you with new information.
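The bookkeeping here is easy to make concrete. Below is a minimal Monte Carlo sketch of the thought experiment, with toy numbers of my own choosing (nothing here is from the paper): sleepers go to bed at random times, a fair coin assigns each a short or long nap, and a sleeper wakes only if the nap finishes before the cutoff.

import random

# Toy model of the Guth-Vanchurin sleeping experiment quoted above.
# Bedtimes are uniform over the interval before the cutoff; the coin
# is fair; a sleeper wakes only if the nap ends before time does.

random.seed(0)
T_END = 12.0             # the moment time ends (an arbitrary cutoff)
SHORT, LONG = 1.0, 10.0  # the two possible nap lengths

woke_short = woke_total = 0
for _ in range(1_000_000):
    bedtime = random.uniform(0.0, T_END)
    nap = SHORT if random.random() < 0.5 else LONG
    if bedtime + nap < T_END:   # this sleeper wakes before the end
        woke_total += 1
        woke_short += (nap == SHORT)

# Conditioning on having woken up skews the fair coin's odds: with
# these numbers P(short | woke) = (11/12)/(11/12 + 2/12) = 11/13,
# about 0.85 rather than 0.5, just as the quoted passage argues.
print(f"P(short nap | woke up) = {woke_short / woke_total:.3f}")

If no nap can ever overrun the cutoff (that is, if time never ends), the printed probability goes back to 0.5, which is exactly the contradiction the paper draws for anyone who refuses to believe time can end.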

Update: Lubos doesn’t think much of the paper:

But holy crap, if physicists don’t lose all of their scientific credit by publishing this pure garbage and nothing else for years, can they lose their credibility at all? Does the institutionalized science have any checks and balances left? I think that all the people are being bullied into not criticizing the junk written by other people who are employees of the academic system, especially if the latter are politically correct activists. And be sure, some of the authors of this nonsense are at the top of it.

This is just bad. I urge all the sane people in Berkeley and other places to make it very clear to Bousso et al. – and to students and other colleagues – that they have gone completely crazy.

Update: In other pseudo-science news, the latest Scientific American features a piece by Hawking and Mlodinow based on their recent book.

Update: Not only does New Scientist think this nonsense deserves to be covered in a lead article, but they also have an editorial urging us not to “roll your eyes” about this.

Posted in Multiverse Mania | 49 Comments

LHC Update

There’s been great progress made recently at the LHC, with successful commissioning of “trains” of bunches allowing significantly higher collision rates. Last night’s fill produced an integrated luminosity of 0.684 inverse picobarns (or 684 inverse nanobarns), which can be compared to the total integrated luminosity up until this week of about 3.5 inverse picobarns. The highest instantaneous luminosity reached is now at about 1/5 the goal set for this year, with a further increase in the number of bunches planned for this weekend. For more details, there’s a message from the CERN DG here.
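As a rough consistency check (with assumptions of my own: the post above doesn’t give the fill length, and I’m taking this year’s instantaneous luminosity goal to be about 10^32 cm^-2 s^-1), a fill sustained at 1/5 of that goal for half a day delivers roughly what was reported:

# Back-of-envelope check; the capitalized values are my assumptions.
PEAK = 0.2 * 1e32   # cm^-2 s^-1: 1/5 of an assumed 1e32 yearly goal
HOURS = 12          # assumed fill length (real luminosity also decays)
INV_PB = 1e36       # 1 pb^-1 = 1e36 cm^-2, since 1 barn = 1e-24 cm^2

print(f"{PEAK * HOURS * 3600 / INV_PB:.2f} pb^-1")
# Prints ~0.86 pb^-1, the same ballpark as the 0.684 pb^-1 fill above.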

Update: The latest fill, with 104 bunches, started recently, with an initial luminosity at about 1/3 of the goal for this year.

Update: For more information about the latest events at the LHC and upcoming plans, see here. The latest luminosity plots are here, now including the highest instantaneous luminosity.

Posted in Experimental HEP News | 8 Comments

Tevatron Funding

The Fermilab web-site today has a message from Director Oddone about prospects for funding an extension of the Tevatron run past FY2011, as recommended by the Physics Advisory Committee. He has asked the DOE for additional funding of $35 million/year to pay part of the cost of an extended run, with the rest to come from slowing down other planned experiments. If the DOE turns this down, it seems the plan is to shut down the Tevatron next year.

This leaves prospects for the Tevatron’s future very much up in the air, especially given the dysfunctional nature of the US federal budget process. With FY2011 about to begin, Congress has yet to pass a budget, shows no signs of doing so anytime soon, and is in the middle of an election campaign dominated by calls for cutting federal spending. The general assumption is that they’ll deal with this by continuing funding at FY2010 levels, until finally getting around to passing the appropriations bills all together as part of an “omnibus” bill sometime deep into the fiscal year. The process of dealing with the FY2012 budget starts next February with the President’s budget request, but again there’s no reason to believe there will actually be a budget until long after they’ve already started spending the money. Luckily, the Fermilab people by now have many years of experience dealing with this system.

Posted in Experimental HEP News | 9 Comments

The Shape of Inner Space

Besides the Hawking book, which was a disappointment in many ways, I recently also finished reading a much better and more interesting book, one that deals with some of the same topics in a dramatically more substantive and intelligent manner. The Shape of Inner Space is a joint effort of geometer Shing-Tung Yau and science writer Steve Nadis. Yau is one of the great figures in modern geometry, a Fields medalist and current chair of the Harvard math department. He has been responsible for training many of the best young geometers working today, as well as for encouraging a wide range of joint efforts between mathematicians and physicists in various areas of string theory and geometry.

Yau begins with his remarkable personal story, starting out with a childhood of difficult circumstances in Hong Kong. He gives a wonderful description of the new world that opened up to him when he came to the US as a graduate student at Berkeley, where he joyfully immersed himself in the library and a wide range of courses. Particularly influential for his later career was a course by Charles Morrey on non-linear PDEs, which he describes as having lost all of its students except him, many of them off protesting the bombing of Cambodia.

He then goes on to tell some of the story of his early career, culminating in his proof of the Calabi conjecture. This conjecture basically says that if a compact Kahler manifold has vanishing first Chern class (a topological condition), then it carries, within each Kahler class, a unique Kahler metric with vanishing Ricci curvature. It’s a kind of uniformization theorem, saying that these manifolds come with a “best” metric. Such manifolds are now called “Calabi-Yau manifolds”, and while the ones of interest in string theory unification have six dimensions, they exist in all even dimensions, in some sense generalizing the special case of an elliptic curve (torus) among two-dimensional surfaces.
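In symbols (a schematic statement of my own, not the book’s notation): for a compact Kahler manifold M with Kahler form ω,

\[
  c_1(M) = 0 \quad\Longrightarrow\quad \text{there exists a unique Kahler metric } \tilde\omega \in [\omega] \ \text{with}\ \mathrm{Ric}(\tilde\omega) = 0 .
\]

Calabi conjectured this; Yau proved it.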

Much of the early part of the book is concerned not directly with physics, but with explaining the significance of the mathematical subject known as “geometric analysis”. Besides the Calabi conjecture, Yau also explains some of the other highlights of the subject, which include the positive-mass conjecture in general relativity, Donaldson and Seiberg-Witten theory, and the relatively recent proof of the Poincare conjecture. Some readers may find parts of this heavy going, since Yau is ambitiously trying to explain some quite difficult mathematics (for instance, trying to explain what a Kahler manifold is). Having tried to do some of this kind of thing in my own book, I’m very sympathetic to how difficult it is, but also very much in favor of authors giving it a try. One may end up with a few sections of a book that only a small fraction of its intended audience can really appreciate, but that’s not necessarily a bad thing, and arguably much better than having content-free books that don’t even try to explain to a non-expert audience what a subject is really about.

A lot of the book is oriented towards explaining a speculative idea that I’m on record as describing as a failure. This is the idea that string theory in ten dimensions can give one a viable unified theory, by compactification of six of its dimensions. When you do this and look for a compact six-dimensional manifold that will preserve N=1 supersymmetry, what you find yourself looking for is a Calabi-Yau manifold. Undoubtedly one reason for Yau’s enthusiasm for this idea is his personal history and having his name attached to these manifolds. Unlike other authors though, Yau goes into the question in depth, explaining many of the subtleties of the subject, as well as outlining some of the serious problems with the idea.

I’ve written elsewhere that string theory has had a huge positive effect on mathematics, and one source of this is the array of questions and new ideas about Calabi-Yau manifolds that it has led to. Yau describes a lot of this in detail, including the beginnings of what has become an important new idea in mathematics, that of mirror symmetry, as well as speculation (“Reid’s fantasy”) connecting the all-too-large number of classes of Calabi-Yaus. He also explains something he has been working on recently, pursuing an idea that goes back to Strominger in the eighties: looking at an even larger class of possible compactifications, ones involving non-Kahler manifolds. One fundamental problem for string theorists is that there are already too many Calabi-Yaus, so they’re not necessarily enthusiastic about hearing about more possibilities:

University of Pennsylvania physicist Burt Ovrut, who’s trying to realize the Standard Model through Calabi-Yau compactifications, has said he’s not ready to take the “radical step” of working on non-Kahler manifolds, about which our mathematical knowledge is presently quite thin: “That will entail a gigantic leap into the unknown, because we don’t understand what these alternative configurations really are.”

Even in the simpler case of Calabi-Yaus, a fundamental problem is that these manifolds don’t have a lot of symmetry that can be exploited. As a result, while Yau’s theorem says a Ricci-flat metric exists, one doesn’t have an explicit description of that metric. If one wants to get beyond calculations of crude features of the physics coming out of such compactifications (such as the number of generations), one needs to be able to do things like calculate integrals over the Calabi-Yau, and this requires knowing the metric. Yau explains this problem, and how it has hung up any hopes of calculating things like fermion masses in these models. He gives a general summary of the low level of success that this program has so far achieved, and quotes various string theorists on the subject:

But there is considerable debate regarding how close these groups have actually come to the Standard Model… Physicists I’ve heard from are of mixed opinion on this subject, and I’m not yet sold on this work or, frankly, on any of the attempts to realize the Standard Model to date. Michael Douglas… agrees: “All of these models are kind of first cuts; no one has yet satisfied all the consistency checks of the real world”…

So far, no one has been able to work out the coupling constants or mass…. Not every physicist considers that goal achievable, and Ovrut admits that “the devil is in the details. We have to compute the Yukawa couplings and the masses, and that could turn out completely wrong.”
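Schematically, the obstruction Yau describes can be put as follows (my gloss, not a formula from the book): the physical quantities one wants, couplings and masses included, involve integrals over the compact space X weighted by the Ricci-flat metric g, of the general form

\[
  \int_X f(y)\,\sqrt{\det g}\; d^6 y ,
\]

and since Yau’s theorem guarantees that g exists without providing any closed form for it, such integrals can at best be approximated numerically.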

Yau explains the whole landscape story and the heated debate about it, for instance quoting Burt Richter on landscape-ologists (he says they have “given up… Since that is what they believe, I can’t understand why they don’t take up something else – macrame for example.”). He describes the landscape as a “speculative subject” about which, as a mathematician, he’s glad not to have to take a position:

It’s fair to say that things have gotten a little heated. I haven’t really participated in this debate, which may be one of the luxuries of being a mathematician. I don’t have to get torn up about the stuff that threatens to tear up the physics community. Instead, I get to sit on the sidelines and ask my usual sorts of questions – how can mathematicians shed light on this situation?

So, while I’m still of the opinion that much of this book describes a failed project, on the whole it does so in an intellectually serious and honest way, so that anyone who reads it is likely to learn something and to get a reasonable, if perhaps overly optimistic, summary of what is going on in the subject. Only at a few points do I think the book goes a bit too far, largely in two chapters near the end. One of these purports to cover the possible fate of the universe (“the fate of the false vacuum”), and the book wouldn’t lose anything by dropping it. The next chapter deals with string cosmology, a subject that’s hard to say much positive about without going over the edge into hype.

Towards the end of the book, Yau makes a point that I very much agree with: fundamental physics may get (or may already have gotten…) to the point where it can no longer rely upon frequent inspiration from unexpected experimental results, and when that happens one avenue left to try is to get inspiration from mathematics:

So that’s where we stand today, with various leads being chased down – only a handful of which have been discussed here – and no sensational results yet. Looking ahead, Shamit Kachru, for one, is hopeful that the range of experiments under way, planned, or yet to be devised will afford many opportunities to see new things. Nevertheless, he admits that a less rosy scenario is always possible, in the event that we live in a frustrating universe that affords little, if anything, in the way of empirical clues…

What we do next, after coming up empty-handed in every avenue we set out, will be an even bigger test than looking for gravitational waves in the CMB or infinitesimal twists in torsion-balance measurements. For that would be a test of our intellectual mettle. When that happens, when every idea goes south and every road leads to a dead end, you either give up or try to think of another question you can ask – questions for which there might be some answers.

Edward Witten, who, if anything, tends to be conservative in his pronouncements, is optimistic in the long run, feeling that string theory is too good not to be true. Though, in the short run, he admits, it’s going to be difficult to know exactly where we stand. “To test string theory, we will probably have to be lucky,” he says. That might sound like a slender thread upon which to pin one’s dreams for a theory of everything – almost as slender as a cosmic string itself. But fortunately, says Witten, “in physics there are many ways of being lucky.”

I have no quarrel with that statement and, more often than not, tend to agree with Witten, as I’ve generally found this to be a wise policy. But if the physicists find their luck running dry, they might want to turn to their mathematical colleagues, who have enjoyed their fair share of that commodity as well.

Update: I should have mentioned that the book has a web-site here, and there’s a very good interview with Yau at Discover that covers many of the topics of the book.

Update: There’s more about the book and an interview with Yau here.

Posted in Book Reviews | 16 Comments

Short Items

  • The Fermilab Physics Advisory Committee recently recommended that the Tevatron be kept running for an additional three years (until 2014). By the end of that time it should be able to accumulate a total of 20 fb-1 of data, which would give sensitivity to a standard model Higgs at the 3-sigma level over the entire interesting mass range. The cost would be a total of about $150 million (roughly $50 million/year), which would likely have to come out of Fermilab’s $810 million/year budget. While the idea of continuing to do physics at the high-energy frontier, and possibly beating the LHC to the Higgs, for less than 10% of the lab budget per year seems to be a no-brainer, director Oddone still may not be completely sold on it. Keeping the Tevatron going would set back some of the projects the lab has planned for its future in a post-Tevatron world. There’s also significant concern about the future federal budget situation, and about how to make the best possible case for the future of Fermilab in an environment where people may be looking for large, expensive programs to cut. For more about this, see Adrian Cho’s article Higgs or Bust? in Science.

    One huge consideration in this decision is what will happen at the LHC. CERN is facing its own budgetary problems, and has just decided to shut down during 2012 not just the LHC (for repair of magnet interconnections), but the entire accelerator complex. Work continues this year on trying to raise the luminosity of the machine, but progress is slow. They are still an order of magnitude below where they want to be by the end of the year, with only a few more weeks left before the machine is shut down as a proton-proton collider and reconfigured for a heavy-ion run. If all goes according to plan, by late 2011 the LHC would have 1 fb-1 of data, enough to compete with the Tevatron in the Higgs search. But, so far, plans like this have turned out to be overly optimistic, with things taking longer than expected.

    In today’s CERN Bulletin and Fermilab Today, Oddone and CERN DG Heuer issued a joint statement downplaying the competition between their labs:

    The press makes much of the competition between CERN’s LHC and Fermilab’s Tevatron in the search for the Higgs boson. This competitive aspect is real, and probably adds spice to the scientific exploration, but for us such reporting often feels like spilling the entire pepper shaker over a fine meal.

  • CheapUniverses.com is now selling a Universe Splitter iPhone app for $1.99, complementing its other products. At $3.95, the Basic Universe:

    Using quantum physics, we split your universe into two branches, then we send you an email to inform you which branch you’re in.

    As celebrity endorser, they have Garrett Lisi explaining:

    The functioning of this app is in complete agreement with the many-worlds interpretation of quantum mechanics.

  • The author is always the last to know such things, but I’ve heard rumors that someone intends to bring out a Czech edition of Not Even Wrong.
  • High quality videos of talks from the Princeton IAS summer school on supersymmetry are available here.
  • In Langlands-related news, there’s an excellent new preprint by David Nadler about the fundamental lemma and Ngo’s proof. This is one of the most ferociously difficult topics to understand in current math research, and Nadler’s article is about the best expository piece on the subject that I’ve seen.

    This semester there’s a program on Langlands Duality in Representation Theory and Gauge Theory at Hebrew University.

    There’s a fascinating recent preprint by Kevin Buzzard and Toby Gee on The conjectural connections between automorphic representations and Galois representations. They conjecture a reciprocity-type relation between algebraic automorphic representations and Galois representations, not just for GL(n) but for arbitrary reductive groups. This involves invoking a twist by “half the sum of the positive roots” (the definition is recalled after this list), a phenomenon that arises in various places in representation theory, often indicating that spinors are involved (“half the sum of the positive roots” is the highest weight of the spinor representation).
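For reference (a standard definition, recalled here by me rather than taken from the preprint): for a reductive group with set of positive roots Δ⁺, the weight in question is

\[
  \rho \;=\; \frac{1}{2} \sum_{\alpha \in \Delta^{+}} \alpha ,
\]

which also equals the sum of the fundamental weights.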

Posted in Experimental HEP News, Langlands, Multiverse Mania, Not Even Wrong: The Book | 16 Comments