Various and Sundry

A few unrelated items:

  • Maybe Multiverse Mania in the popular press is fading. The Atlantic has The Multiverse Idea is Rotting Culture, where the author points out

    In 2014, the New Scientist published an article called “Multiverse Me,” revealing that various lonely boffins take succor from the fact that alternate versions of themselves are leading fun lives full of emotional and sexual fulfillment, instead of solitudinous slogs through the stupid infinity of high-level algebra.

    They’re not jealous; they want the best for their alternate selves, they want them to be happy. How can you help? The answers given are all cop-outs; the scientists have decided to keep on living as if the multiverse didn’t exist (“The multiverse,” one says, “tells us that we should behave as if we were valuing the risks according to probabilities in a classical universe”), because if it does exist the implications are horrifying. Right now, infinite versions of yourself are dying in really horrible ways, not in spite of the fact that you’re lazily giving answers to a New Scientist reporter, but because of it. Every second you live, their suffering increases. If you stand on a cliff-edge and decide not to die, how many billions are smashed on the rocks? Jump now, and save them all.

  • Over at Scientific American, Lee Billings has a story about dark matter and the lack of evidence for WIMPs.
  • At CASW Showcase, an interesting interview with Natalie Wolchover.
  • Carlo Rovelli has posted on the arXiv The dangers of non-empirical confirmation, his contribution to the Why Trust a Theory? meeting discussed here and here.
  • Next week there will be a Natifest at the IAS, celebrating Nati Seiberg’s 60th birthday. I’ll try and get down there for the first day, leaving on a trip (more about that to come) later in the week.

    There’s a wonderful discussion with Seiberg arranged by Hirosi Ooguri that is well worth reading.

  • Denny Hill wrote recently to tell me about an interesting article on the history of the study of gravitational wave solutions, by him and Pawel Nurowski, now on the arXiv here.

Update: One more. Congratulations to Nigel Hitchin, whose 70th birthday celebrations are now ongoing. See here, here and here.

Posted in Uncategorized | 24 Comments

Weapons of Math Destruction

Cathy O’Neil’s important new book, Weapons of Math Destruction, is out today, and if you’re at all interested in the social significance of how mathematics is now being used, you should go out and get a copy. She has been blogging for quite a while at Mathbabe, which you should be following; it’s a good place to start if your attention span is too short for the book.

Cathy has had an interesting career path, including a period as my colleague here in the math department at Columbia. She left here to pursue excitement and fortune at a top hedge fund, D.E. Shaw, where she had a front-row seat at the 2008 financial industry collapse. A large factor in that collapse was the role played by mathematical models, and her book explains some of that story (for another take on this, there’s Models.Behaving.Badly from another Columbia colleague, Emanuel Derman). As far as I’ve ever been able to figure out, the role of mathematical modeling in the mortgage-backed securities debacle was as a straightforward accessory to fraud. Dubious and fraudulent lending was packaged using mathematics into something that could be marketed as a relatively safe investment, with one main role of the models being to make it hard for others to figure out what was going on. This worked quite well for those selling these things, with the models successfully doing their job of obscuring the fraud and keeping most everyone out of jail.

While this part of the story is now an old and well-worn one, what’s new and important about Weapons of Math Destruction is its examination of the much wider role that mathematical modeling now plays in our society. Cathy went on from the job at D.E. Shaw to work first in risk management and later as a data scientist at an internet media start-up. There she saw some of the same processes at work:

In fact, I saw all kinds of parallels between finance and Big Data. Both industries gobble up the same pool of talent, much of it from elite universities like MIT, Princeton and Stanford. These new hires are ravenous for success and have been focused on external metrics – like SAT scores and college admissions – their entire lives. Whether in finance or tech, the message they’ve received is that they will be rich, that they will run the world…

In both of these industries, the real world, with all its messiness, sits apart. The inclination is to replace people with data trails, turning them into more effective shoppers, voters, or workers to optimize some objective… More and more I worried about the separation between technical models and real people, and about the moral repercussions of that separation. In fact, I saw the same pattern emerging that I’d witnessed in finance: a false sense of security was leading to widespread use of imperfect models, self-serving definitions of success, and growing feedback loops. Those who objected were regarded as nostalgic Luddites.

I wondered what the analogue to the credit crisis might be in Big Data. Instead of a bust, I saw a growing dystopia, with inequality rising. The algorithms would make sure that those deemed losers would remain that way. A lucky minority would gain ever more control over the data economy, taking in outrageous fortunes and convincing themselves that they deserved it.

The book then goes on to examine various examples of how Big Data and complex algorithms are working out in practice. Some of these include:

  • The effect of the US News and World Report algorithm for college ranking, as colleges try and game the algorithm, while at the same time well-off families are at work gaming the complexities of elite college admissions systems.
  • The effects of targeted advertising, especially the way it allows predatory advertisers (some for-profit educational institutions, payday lenders, etc.) to very efficiently go after those most vulnerable to the scam.
  • The effects of predictive policing, with equality before the law replaced by an algorithm that sends different degrees of law enforcement into different communities.
  • The effects of automated algorithms sorting and rejecting job applications, with the indirect consequence of discrimination against whole classes of people.
  • The effects of poorly thought-out algorithms for evaluating teachers, sometimes driving excellent teachers from their jobs.
  • The effects of algorithms that score credit and determine access to mortgages and insurance, often with the effect of making sure that those deemed losers stay that way.

Finally, there’s a chapter on Facebook and the way political interests are taking advantage of the detailed information it provides to target their messages, to the detriment of democracy.

To me, Facebook is perhaps the most worrisome of all the Big Data concerns of the book. It now exercises an incredible amount of influence over what information people see, with this influence sometimes being sold to the highest bidder. Together with Amazon, Google and Apple, it is part of a set of monopolies that now control our economy and society to an unparalleled degree, monopolies that monitor our every move. In the context of government surveillance, Edward Snowden remarked that we are now “tagged animals, the primary difference being that we paid for the tags and they’re in our pockets.” A very small number of huge, extremely wealthy corporations have even greater access to those tags than the government does, recording every movement, every communication with others, and even every train of thought as we interact with the web.

These organizations are just starting to explore how to optimize their use of our tags, and thus of us. Many of the students starting classes here today in the math department will, with the training we provide, go on to careers working for these companies. As they go off to work on the algorithms that will govern the lives of all of us, I hope they’ll start by reading this book and thinking about the issues it raises.

Posted in Book Reviews | 30 Comments

Templeton News

Looking at my list of items to blog about, I see most of them have some relation to the Templeton Foundation, so this will be a blog post just about those. To get some idea of the scale of Templeton’s activities, at the end of 2014 they had about $3.2 billion in assets, and during 2014 had given away about $185 million. For comparison, the NSF budget for FY2014 was $267 million for physics and $225 million for mathematics.

One of the main goals of the foundation is to bring together science and religion. Among the many things they are funding to accomplish this is an $871,000 grant to Arizona State University to fund Think Write Publish Fellowships in Science and Religion. If you’re a hard-up writer, these people will give you the opportunity to get $10,000 to write “creative nonfiction stories about harmonies between science and religion” and help you get them published.

Over the next few years, as you see things like this make it into the media, realize that this is not evidence of an intellectual trend, but a reflection of Templeton money and their agenda. ASU’s Lawrence Krauss is, for good reason, not happy.

To give an idea of the range of Templeton’s influence, just at ASU they’re funding several other large grants, including $745,000 for Representations of God (this and this), and $544,000 for emergent gravity. When you notice conferences, seminars, public lectures, etc. about “emergent gravity” in coming years, realize that some of them are happening because of Templeton’s agenda (one of the PIs is a Templeton Prize winner).

One of Templeton’s largest recent grants has been $4.7 million to FQXI for research into “Physics of the Observer”. Among other things, this funded a recent conference at Banff.

A major interest of Templeton’s over the years has been “Genius”. Another of their large recent grants has been to the World Science Foundation for its Cultivating Genius Initiative.

Finally, there will be an interesting mathematics conference related to quantum field theory at Harvard October 8-10. I’ll likely be up in Boston visiting my brother and hope to attend some of the talks. Funding for this is coming partially from the “Templeton Charity Foundation Switzerland”. I guess this is these people, some offshoot of the Templeton Foundation with exactly the same interests. They say they have made $85.2 million in grants; a list is here.

Update: I was thinking of commenting that Templeton at least seemed to have slowed down its efforts to promote multiverse mania. But then I noticed this. If you want to know why Ira Flatow on NPR keeps bringing up the multiverse, $150,000 in Templeton money might have something to do with it…

Update: I keep on finding out about more of these Templeton-funded things; they are endless. Templeton is funding an Institute for Cross-Disciplinary Engagement at Dartmouth. Themes to be investigated are “Can science alone explain the nature of reality?”, “Is there free will?” and “Is there purpose in the universe?”. Among their many activities will be an event in San Francisco in February featuring a dialogue between Sean Carroll and a Buddhist scholar.

Posted in Uncategorized | 20 Comments

Not Ever Wrong

The “SUSY Bet” event in Copenhagen took place today, with video available for a while at this site. It appears to be gone for the moment; I’ll put up a better link if one becomes available. An expensive bottle of cognac was presented by Nima Arkani-Hamed to Poul Damgaard, conceding loss of the bet. On the larger question of the significance of the negative LHC results, a recorded statement by Gerard ‘t Hooft (who had bet against SUSY) and a statement by Stephen Hawking (not in on the bet, but in the audience) both argued that if the arguments for SUSY were correct, the LHC should have seen something, so they think nature has spoken and there’s something wrong with the idea.

The losers of the bet who spoke (Arkani-Hamed, David Gross and David Shih) demonstrated the lesson about science that supersymmetry and superstring theory have taught us: particle theorists backing these ideas won’t give up on them, no matter what. They all took the position that they still weren’t giving up on SUSY, despite losing the bet. In more detail:

  • Arkani-Hamed was not a signatory of the original bet in 2000, but signed on to the later 2011 version. He explained today that at the time he thought the chances of SUSY being visible early on at the LHC were just 50/50 (with his 2004 work on split SUSY motivated by realizing that, pre-LHC, the conventional picture of SUSY at the electroweak scale was already ruled out). He attributed his decision to take the pro-SUSY side of a 50/50 bet to “optimism”, implying that this took place at a conference dinner where there may have been too much to drink. In his split-SUSY scenario, SUSY may yet show up at the LHC, or it could even be invisible there, requiring a higher-energy accelerator. So, he’s not giving up on SUSY based on LHC results.
  • David Gross also is not giving up, arguing that fine-tuning by a factor of 100 or 1000 is not a problem (invoking the large ratios that appear in the fundamental Yukawa couplings). He did say that young people might want to take this as a reason to look for new ideas, but, for himself, felt “I’m too old for that”.
  • David Shih isn’t giving up either, arguing that there is still lots of data to come and plenty of room for SUSY to appear at the LHC; he still believes we’ll discover SUSY, at the LHC or elsewhere.

One piece of misinformation promoted by several of the speakers was the idea that “everyone” back around 2000 believed in SUSY as the next new physics to be found. In my book (written in 2002-3) I wrote a long section about the evidence against SUSY, and, of course, if you look at the bet under discussion, in 2000 many more people (16 vs. 7) were taking the anti-SUSY vs. pro-SUSY side (at least in Copenhagen, but I think this reflects the general range of opinions).

No one today asked the obvious question “Is there any foreseeable experimental data that would cause you to decide that SUSY was an idea that should be abandoned?”. I’m now not seeing any prospect in my lifetime of anything that would cause these or other SUSY proponents to give up (John Ellis has also announced that no matter what the LHC says, he’s not giving up). Unfortunately “Not Ever Wrong” is clearly the slogan of the (minority) segment of the particle theory community that long ago signed up for the vision of fundamental physics in which SUSY plays a critical part.

Update: There’s a blog entry from Natalie Wolchover about this. She has more detail about the final remarks from Gross that I mentioned:

“In the absence of any positive experimental evidence for supersymmetry,” Gross said, “it’s a good time to scare the hell out of the young people in the audience and tell them: ‘Don’t follow your elders. … Go out and look for something new and crazy and powerful and different. Different, especially.’ That’s definitely a good lesson. But I’m too old for that.”

Update: Video of the Copenhagen event is available here.

Update: I happened to be looking at Michael Dine’s 2007 Physics Today article on string theory and the LHC, and noticed the side remark that “The Large Hadron Collider will either make a spectacular discovery or rule out supersymmetry entirely.” I wonder if he still thinks this, and whether we’ll ever see Physics Today publishing something updating its readers.

Update: Yet another news story about this, from Science News.

Posted in Uncategorized | 106 Comments

Various News

Before turning to other topics, congratulations to my Columbia colleague Wei Zhang, who was awarded the Gold Medal at the recent ICCM in Beijing.

On the HEP physics front, some news is:

  • On Monday at 1:30pm Danish time, at the conference on Current Themes in High Energy Physics and Cosmology that is part of the Simons Program at the Niels Bohr International Academy, there will be an event adjudicating the bet on SUSY first made in 2000 (described here, the wager is here).

    Nima Arkani-Hamed will act as referee, a bit unconventional since he’s on one side of the bet. On the other hand, it’s clearly the losing one and I have no doubt he’ll concede graciously. Others who may be heard from as part of the event include David Gross and Gerard ‘t Hooft.

    Video streaming should be available at this site; I guess the time is 7:30am New York time, so maybe I’ll be up and watching during breakfast…

  • In related news, Frank Wilczek has conceded loss on a similar bet he made back in 2009 with Garrett Lisi, see here.
  • As for other similar bets I’m aware of, Lubos Motl is about to lose his bet with Adam Falkowski, and David Gross should be losing his with Ken Lane within the next year (perhaps he’ll comment on this on Monday). The interesting story to watch here will be whether this changes anyone’s behavior. Will Gross, Wilczek and others continue to point to SUSY extensions of the Standard Model as the promising future of the field, or will they acknowledge that this is an idea that hasn’t worked out?
  • In other HEP news, the “nightmare scenario” seems to have driven physicists at CERN to try human sacrifice to propitiate the angry Gods who are tormenting them in this manner.
  • There’s a conference going on in Banff organized by FQXI, and you can follow it to some extent on Twitter. I’m somewhat of a skeptic about claims to have deep new insights into physics based on what seem like simple, vague natural-language arguments. It’s not clear whether the 140-character limit of Twitter is a good or bad thing in trying to capture such arguments.
  • Another of the vague sort of claims I’m dubious about is the “ER=EPR” one described here. On the other hand, this at least appears to have important applications.
  • Also on the dubious HEP news front, there have been lots of news articles recently about a supposed new “Fifth Force”, generated by a press release from UC Irvine promoting a PRL paper by UCI theorists. This story had been debunked a couple months ago by Natalie Wolchover at Quanta (see here).

    The usual hit on science journalism is that the work of scientists is hyped and misrepresented by journalists, but I think this shows an all too common example of the real problem: hype from physicists and their institutions (egged on by PRL), with journalists trying to hold the line against it (and not always succeeding).

  • Finally, there’s a wonderful interview with Edward Witten (part of a TV program in Dutch) that I would guess was filmed in 1999-2000 [maybe 1997?] and is available here. Witten is there quite optimistic about the prospects of string theory, and of course I’m curious whether and how the intervening years have changed his point of view. On other topics the interview is quite fascinating, doing an unusually good job of getting the normally reticent Witten to talk a bit about life and wider issues (as well as demonstrating an admirable refusal to get provoked into pontificating about various questions he’s not expert in).

Update: For a detailed explanation from one of the theorists working on the supposed “fifth force”, see here.

Posted in Uncategorized | 27 Comments

The Nightmare Scenario

Now back from a short vacation, and there seems to have been a lot happening on the debate over fundamental physics front. From the experimentalists, news that the Standard Model continues to resist falsification:

  • At ICHEP, as expected, new data from ATLAS and CMS ruled out the supposed 750 GeV state that would have indicated new physics.
  • Also at ICHEP, significantly stronger bounds on SUSY: gluinos ruled out up to 1.9 TeV, stops up to 900 GeV.

Recall also the recent results from LUX and PandaX (discussed here) putting stronger bounds on the sort of WIMP dark matter supposedly a feature of SUSY models.

In addition, there was news from IceCube ruling out the possibility of certain models of light sterile neutrinos (paper here, a Nature news story here).

For what it all means, you should of course consult Resonaances (and read a profile of Jester, The Rogue Blogger Who Keeps Spoiling Physics’ Biggest News) as well as Tommaso Dorigo (and some coverage from Physics World featuring him).

From an even wider perspective, see Sabine Hossenfelder for The LHC “nightmare scenario” has come true and Natalie Wolchover’s piece on the “nightmare scenario”, What No New Particles Means for Physics (by the way, congratulations to Wolchover on some well-deserved awards, including this one). My own views on this are well known: this was in some sense a major theme of my book, and among other places I’ve written about this, see for example a 2013 Edge essay.

My perspective on this is in some ways similar to Hossenfelder’s, but I draw very different conclusions, strongly disagreeing with her criticism of “reliance on gauge symmetry”, and “trust in beauty and simplicity”. This hasn’t been what theorists have been doing for 30 years. The string theory unification ideology has led to an emphasis on extremely complex and ugly models, with gauge symmetry not a fundamental feature at all. Yes, she’s right to point to “A failure of particle physicists to uncover a more powerful mathematical framework to improve upon the theories we already have”, but I’d argue that that failure is due to an insistence on looking in the wrong place.

Wolchover’s piece captures some of the current angst well, for instance quoting Maria Spiropulu about SUSY as follows:

“We had figured it all out,” said Maria Spiropulu, a particle physicist at the California Institute of Technology and a member of CMS. “If you ask people of my generation, we were almost taught that supersymmetry is there even if we haven’t discovered it. We believed it.”

Arkani-Hamed is as quotable as ever, saying the lesson of all this failure is

“There are many theorists, myself included, who feel that we’re in a totally unique time, where the questions on the table are the really huge, structural ones, not the details of the next particle. We’re very lucky to get to live in a period like this — even if there may not be major, verified progress in our lifetimes.”

I suspect that there are quite a few physicists, of my generation and later, who don’t necessarily feel “very lucky” to get to live in a period of no significant progress in the past 40 years, with the prospect of none in our lifetimes. It would be great if the “huge, structural” questions really were on the table, but I’ve seen little interest among theorists in such questions, beyond whatever the fad of the day related to AdS/CFT might be.

Also finishing up while I was away was the Strings 2016 conference, and for the state of the subject you might want to watch David Gross’s “Vision” talk (video here; I had to download the whole thing to watch it, as streaming was unusable). I think Gross was right in pointing to work on the so-called “Sachdev-Ye-Kitaev” model as perhaps the most interesting thing being discussed at the conference. This is an exactly solvable large-N quantum mechanical model exhibiting features of holography. A good place to start learning about it is Kitaev’s KITP lectures here and here.
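For readers who want to see it written down, a commonly quoted schematic form of the SYK Hamiltonian (normalization conventions for the couplings vary from paper to paper) is

\[
H \;=\; \sum_{1\le i<j<k<l\le N} J_{ijkl}\,\chi_i\chi_j\chi_k\chi_l\,,
\qquad \{\chi_i,\chi_j\}=\delta_{ij}\,,
\]

where the χ_i are N Majorana fermions and the couplings J_{ijkl} are independent Gaussian random variables with variance scaling like 1/N^3; it is this random all-to-all four-fermion coupling that makes the large-N limit solvable.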

Gross also went over some history, noting that some current topics in string theory echo back to 1967 work of Mandelstam (and that Gross himself had written a paper at that time on this material, so next year will be the 50th anniversary of his engagement with the subject). As for the current state of the theory, his take is much the same as what he has presented at many earlier Strings conferences: string theory is now a “framework” encompassing QFT and most of the rest of fundamental physics, but we don’t really know “what string theory is”. To me it’s very unclear why one is supposed to believe so strongly in the overarching role of a set of ideas that aren’t well-defined and have been an utter failure at explaining anything about particle physics. Soon he’ll have to pay off his SUSY bets, but this doesn’t seem to have changed his conviction that superstring theory unification is on the right track. He ended with his usual sign-off “The best is yet to come”, but this time added a parenthetical “I hope it comes quick”.

Another new Wolchover piece well worth reading is about Miranda Cheng and her work on modular forms and string theory on K3 surfaces. I think this sort of work on the boundary of mathematics and physics makes clear the problem with string theory and mathematics: things are complicated and ugly if you try and make the theory look like the real world. You get beautiful ideas and great mathematics when you ignore the supposed connection to the real world and go in another direction. The problem is that often, instead of pursuing good mathematical ideas where they lead, people doing this feel the need to stick to some connection to the failed idea. Here’s what Cheng has to say:

I personally always have the real world at the back of my mind — but really, really, really back. I use it as sort of an inspiration for determining roughly the big directions I’m going in. But my day-to-day research is not aimed at solving the real world. I see it as differences in taste and style and personal capabilities. New ideas are needed in fundamental high-energy physics, and it’s hard to say where those new ideas will come from. Understanding the basic, fundamental structures of string theory is needed and helpful. You’ve got to start somewhere where you can compute things, and that leads, often, to very mathematical corners. The payoff to understanding the real world might be really long term, but that’s necessary at this stage.

For another physicist still enthusiastic about string theory, see this interview with Brian Greene, somehow motivated by a sci-fi horror series, Netflix’s “Stranger Things”.

Posted in Uncategorized | 42 Comments

HEP Physics News

  • ICHEP 2016 starts in Chicago this week. Talks about the new diphoton results are scheduled for 9am (Chicago time) Friday. There will also be talks later in the day at CERN (5pm Geneva time), scheduled as part of this summer’s TH institute. Consulting my prediction here (although I had the plenary vs. parallel wrong), I think it’s very clear that these will be negative results for the supposed 750 GeV bump. See Resonaances for discussion of the significance of this.
  • Last year I heard Nima Arkani-Hamed talk here about “Nnaturalness”. The paper is now out.
  • There’s a very interesting profile in Nautilus of Fotini Markopoulou, who left theoretical physics to work on a startup in England.
  • At Quanta magazine, Natalie Wolchover has a nice article about intriguing new neutrino results. With nothing unexpected showing up at the energy frontier being explored by the LHC, in coming years it may very well be the neutrino sector, which can be explored at much lower energies, where one should look for something new.
  • Strings 2016 is starting in Beijing in a few hours. Schedule here, talk titles here.

Update: Videos of the Strings 2016 talks are becoming available here. The talks of the first day featured little string theory (except for a historical talk by John Schwarz). A major theme was 3d quantum field theory, with Witten and Costello talking about the same new ideas starting with Chern-Simons theory, and Seiberg talking about dualities in 3d qft.

Update: Slides are here. Tuesday started off with entanglement entropy and tensor networks. I confess that not only do I not see what this has to do with string theory, but I have trouble seeing what it has to do with anything. The connection with quantum gravity using tensor networks seems vaguely to recall the much better motivated spin networks of LQG, which string theorists always denounced as not exhibiting a Lorentz-invariant ground state. Now it’s the string theorists promoting this kind of network structure, which has the same problem they used to denounce as convincing evidence that their competitors had it all wrong. The world is a strange place.

Update: New Scientist has a quote from me saying the obvious thing about Nnaturalness. Blogging will likely be light to nonexistent for the next week; I’ll be traveling, listening to music in Nashville, as well as up in the mountains of Virginia/Tennessee.

Posted in Strings 2XXX, Uncategorized | 19 Comments

Monumental Proof to Torment Mathematicians for Years to Come

Davide Castelvecchi at Nature has talked to some of the mathematicians at the recent Kyoto workshop on Mochizuki’s proposed proof of the abc conjecture, and written up a summary under the appropriate title Monumental proof to torment mathematicians for years to come. Here’s the part that summarizes the opinions of some of the experts there:

Mochizuki is “less isolated than he was before the process got started”, says Kiran Kedlaya, a number theorist at the University of California, San Diego. Although at first Mochizuki’s papers, which stretch over more than 500 pages, seemed like an impenetrable jungle of formulae, experts have slowly discerned a strategy in the proof that the papers describe, and have been able to zero in on particular passages that seem crucial, he says.

Jeffrey Lagarias, a number theorist at the University of Michigan in Ann Arbor, says that he got far enough to see that Mochizuki’s work is worth the effort. “It has some revolutionary new ideas,” he says.

Still, Kedlaya says that the more he delves into the proof, the longer he thinks it will take to reach a consensus on whether it is correct. He used to think that the issue would be resolved perhaps by 2017. “Now I’m thinking at least three years from now.”

Others are even less optimistic. “The constructions are generally clear, and many of the arguments could be followed to some extent, but the overarching strategy remains totally elusive for me,” says mathematician Vesselin Dimitrov of Yale University in New Haven, Connecticut. “Add to this the heavy, unprecedentedly indigestible notation: these papers are unlike anything that has ever appeared in the mathematical literature.”

Kedlaya’s opinion is the one likely to carry most weight in the math community, since he’s a prominent and well-respected expert in this field. Lagarias has a background in somewhat different areas, not in arithmetic algebraic geometry, and Dimitrov I believe is still a Ph.D. student (at Yale, with Goncharov as thesis advisor).

My impression based on this and from what I’ve heard elsewhere is that the Kyoto workshop was more successful than last year’s one at Oxford, perhaps largely because of Mochizuki’s direct participation. Unfortunately it seems that we’re still not at the point where others besides Mochizuki have enough understanding of his ideas to convincingly check them, with Kedlaya’s “at least three years” justifying well the title of the Nature piece.
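For reference, the statement whose proof is at stake is easy to give: writing rad(n) for the product of the distinct primes dividing n, the abc conjecture asserts that for every ε > 0 there are only finitely many triples of coprime positive integers with a + b = c satisfying

\[
c \;>\; \operatorname{rad}(abc)^{1+\varepsilon}.
\]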

Organizer Ivan Fesenko has a much more upbeat take here, although I wonder about the Vojta quote “now the theorem proved by someone in the audience” and whether that refers to Mochizuki’s IUT proof of the Vojta conjecture over number fields (which implies abc), or the Vojta conjecture over complex function fields (such as in Theorem 9 of the 2004 paper http://www.kurims.kyoto-u.ac.jp/preprint/file/RIMS1413.pdf), or something else. The reference to Dimitrov as discussing “applications of IUT” might be better worded as “would-be applications of IUT”.

There will be a conference at the University of Vermont in September, billed as “An introduction to concepts involved in Mochizuki’s work on the ABC conjecture, intended for non-experts.”

Update: Fesenko has updated his report on the conference (see here) to include a more accurate characterization of talks by Vojta and Dimitrov (you can see changes to that report here). Between this and the Nature quotes, there seems to be a consensus among the experts quoted (Kedlaya, Dimitrov, Vojta, Lagarias) that they still don’t understand the IUT material well enough to judge whether it will provide a proof of abc or not. Unfortunately it still seems that Mochizuki is the one person with a detailed grasp of the proof and how it works. I hope people will continue to encourage him to write this up in a way that will help these experts follow the details and see if they can come to a conclusion about the proof, in less than Kedlaya’s “at least three years”.

Update: New Scientist has a piece about this which, as in its typical physics coverage, distinguishes itself from Nature by throwing caution to the wind. It quotes Fesenko as follows:

I expect that at least 100 of the most important open problems in number theory will be solved using Mochizuki’s theory and further development.

Fesenko also claims that “At least 10 people now understand the theory in detail”, although there’s no word on who they are (besides Mochizuki), or why, if they understand the theory in detail, they are having such trouble explaining it to others, such as the experts quoted in the Nature article. He also claims that

the IUT papers have almost passed peer review so should be officially published in a journal in the next year or so. That will likely change the attitude of people who have previously been hostile towards Mochizuki’s work, says Fesenko. “Mathematicians are very conservative people, and they follow the traditions. When papers are published, that’s it.”

I think Fesenko here seriously misrepresents the way mathematics works. It’s not that mathematicians are very conservative and devoted to following tradition. The ethos of the field is that it’s not a proof until it’s written down (or presented in a talk or less formal discussion) in such a way that, if you have the proper background, you can read it for yourself, follow the argument, and understand why the claim is true. Unfortunately this is not yet the case, as experts have not been able to completely follow the argument.

If it is true that a Japanese journal will publish the IUT papers as is, with Mochizuki and Fesenko then demanding that the math community must accept that this is a correct argument, even though experts don’t understand it, that will create a truly unfortunate situation. Refereeing is usually conducted anonymously, shielding that process from any examination. Lagarias gives some indication of the problem:

It is likely that the IUT papers will be published in a Japanese journal, says Fesenko, as Mochizuki’s previous work has been. That may affect its reception by the wider community. “Certainly which journal they are published in will have something to do with how the math community reacts,” says Lagarias.

While refereeing of typical math papers can be rather slipshod, standards have traditionally been higher for results of great importance like this one. A good example is the Wiles proof of Fermat’s Last Theorem, which was submitted to Annals of Mathematics, after which a team of experts went to work on it. One of these experts, Nick Katz, finally identified a subtle flaw in the argument (the proof was later completed with the help of Richard Taylor). Is the refereeing by the Japanese journal being done at this level of competence, one that would identify the sort of flaw that Katz found? That’s the question people will be asking.

In some sense the refereeing process for these papers has already been problematic. A paper is supposed to be not just free of mistakes, but also written in a way that others can understand. Arguably any referee of these papers should have begun by insisting that the author rewrite them first to address the expository problems experts have identified.

Update: Fesenko is not happy with the Nature article, see his comment here.

Posted in abc Conjecture | 46 Comments

Quantum Theory and Representation Theory, the Book

For the last few years most of my time has been spent working on writing a textbook, with the current title Quantum Theory, Groups and Representations: An Introduction. The book is based on a year-long course that I’ve taught twice, based on the concept of starting out assuming little but calculus and linear algebra, and developing simultaneously basic ideas about quantum mechanics and representation theory. The first half of the course stuck to basic non-relativistic quantum mechanics, while the second introduced free quantum field theories and the relativistic case. By the end, the idea is to bring the reader to the point of having some appreciation of the main elements of the Standard Model, from a perspective emphasizing the representation theory structures that appear.

The discussion of quantum field theory is, I think, rather different from that of other textbooks, taking a Hamiltonian point of view, rather than the Lagrangian/path integral one in which most physicists are now trained (myself included). One basic idea was to try and work out very carefully the quantization of a finite-dimensional phase space in all its representation-theoretic glory, with the idea that free quantum field theories could then be developed as a straightforward extension to the case of taking solutions of a field equation as phase space. While this point of view on quantum field theory is fairly well-known, writing up the details turned out to be a lot more challenging than I expected.

As part of this, the book attempts to carefully distinguish mathematical objects that usually get identified by physicists’ calculational methods. In particular, phase space and its dual space are distinguished, and the role of complex numbers and complexification of real vector spaces receives a lot of attention.

At the same time, the book is based on a relatively simple philosophical take on what the fundamental structures are and how they are grounded in representation theory. From this point of view, free relativistic quantum field theories are based on starting with an identification of irreducible representations of the space-time symmetry group using the Casimir operator to get a wave-equation. This provides a single-particle theory, with the quantum field theory then appearing as its “second quantization”, which is a metaplectic (bosonic case) or spinor (fermionic case) representation. These are some of the specifics behind the grandiose point of view on how mathematics and physics are related that I described here. For some indications of further ideas needed to capture other aspects of the Standard Model, there’s this that I wrote long ago, but which now seems to me hopelessly naive, in need of a complete rethinking in light of much of what I’ve learned since then.
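As one standard illustration of what this means (the usual textbook example, not necessarily in the book’s exact conventions): for the Poincaré group the quadratic Casimir is P^μ P_μ, and requiring that single-particle states lie in an irreducible representation on which it acts as the scalar m² gives the Klein-Gordon equation as the wave equation,

\[
P^\mu P_\mu\,\psi = m^2\,\psi
\quad\Longleftrightarrow\quad
\left(\partial^\mu\partial_\mu + m^2\right)\psi = 0\,,
\]

using P_μ = i∂_μ and a mostly-minus metric signature.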

The manuscript is still not quite finished, and comments are extremely welcome. While several people have already been very helpful with this, few have been willing to face the latter chapters, which I fear are quite challenging and in need of advice about how to make them less so. The current state of things is that what remains to be done is

  • A bit more work on the last chapters.
  • A rereading from the start, bringing earlier chapters in line with choices that I made later, and addressing a long list of comments that a few people have given me.
  • An old list of problems (see here) needs to be edited, with more problems added.
  • I need to find someone to make professional drawings (there are some funds for this).
  • An index is needed.

Optimistically I’m hoping to have this mostly done by the end of the summer. Springer will be publishing the book (my contract with them specifies that I can make a draft version of the book freely available on my web-site), and I assume it will appear next year. To be honest, I’m getting very tired of this project, and looking forward to pursuing new ideas and thinking about something different this fall.

Posted in Quantum Theory: The Book | 13 Comments

WIMPs on Death Row

One of the main arguments given for the idea of supersymmetric extensions of the standard model has been what SUSY enthusiasts call the “WIMP Miracle” (WIMP=Weakly Interacting Massive Particle). This is the claim that such SUSY models include a stable very massive weakly interacting particle that could provide an explanation for dark matter.
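The “miracle” is the standard back-of-the-envelope thermal-relic estimate (rough numbers; the details are model-dependent): the relic abundance of a particle that annihilated in the early universe scales inversely with its annihilation cross-section,

\[
\Omega_\chi h^2 \;\approx\; \frac{3\times 10^{-27}\ \mathrm{cm^3\,s^{-1}}}{\langle\sigma v\rangle}\,,
\]

so a cross-section of typical weak-interaction size, ⟨σv⟩ ∼ 3 × 10⁻²⁶ cm³ s⁻¹, lands close to the observed dark matter density Ω_χ h² ≈ 0.1.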

According to the “WIMP Miracle”, evidence for such particles is supposed to show up as they are produced at the LHC, and at underground detectors designed to look for ones traveling through the earth. As with all other predicted SUSY particles, no evidence for such a thing has appeared at the LHC. A sequence of more and more sensitive underground experiments has also come up empty.

One of the latest of these, LUX, announced results today, see here, press release here. These are the results from the final 20-month run of the LUX detector and they are conclusively negative: no candidate events were seen, tightening the bounds on the cross-section for any such particle by a factor of four. New Scientist has it right I think, with a story headlined Dark matter no-show puts favoured particles on death row. Ethan Siegel has a very good article, Dark Matter May Be Completely Invisible, Concludes World’s Most Sensitive Search, which includes:

The null detection is incredible, with a fantastic slew of implications:

  1. Dark matter is most likely not made up, 100%, of the most commonly thought-of WIMP candidates.
  2. It is highly unlikely that whatever dark matter is, in light of the LUX results, will be produced at the LHC.
  3. And it is quite likely that dark matter lies outside of the standard mass range, either much lower (as with axions or sterile neutrinos) or much higher (as with WIMPzillas).

Enthusiasts are not likely to give up so easily though, with Sean Carroll tweeting that the news is only “we’re not seeing it yet, stay tuned.” Not sure what one is supposed to stay tuned to; this is pretty much a final result from LUX. There will be a next-generation experiment, LZ, but that’s for after 2020. There are other competing experiments now operating, including Xenon1T, now being commissioned, which will be somewhat more sensitive. There seems to be no serious reason though to expect WIMPs to appear at somewhat lower cross-sections if they haven’t appeared yet.

With SUSY and the “WIMP miracle” now dead ideas, perhaps attention will turn to more promising ones. There is still a great deal that we don’t understand about neutrinos. A few days ago I saw this intriguing news about the PTOLEMY project, which I hadn’t heard about before.

Posted in Experimental HEP News | 52 Comments