Just time at the moment for some quick links. I’ll start with some math news, since there hasn’t been much of that here recently:
- Matt Baker has the sad news here of the recent death of Berkeley mathematician Robert Coleman, at the age of 59. Coleman was a leader in the field of p-adic geometry, and continued to do important research despite a long struggle with MS. He was both highly influential and well-loved; be sure to read the comments, which contain appreciations from many different mathematicians.
- Also at Matt Baker's blog is a summary of recent work by Manjul Bhargava and collaborators on the average ranks of elliptic curves. This work shows that
at least 20.6% of elliptic curves over Q have rank 0, at least 83.75% have rank at most 1, and the average rank is at most 0.885…
and
at least 66.48% of elliptic curves over Q satisfy the (rank part of the) Birch and Swinnerton-Dyer (BSD) Conjecture (and have finite Shafarevich-Tate group)
Conjecturally
50% of elliptic curves have rank 0, 50% have rank 1, and 0% have rank bigger than 1, and thus the average rank should be 0.5. (And conjecturally, 100% of elliptic curves satisfy the BSD conjecture. :))
Until recently
the best known unconditional results in this direction were that at least 0% of elliptic curves have rank 0, at least 0% have rank 1, the average rank is at most infinity, and at least 0% of curves satisfy the BSD conjecture.
so this is dramatic progress.
- I've yet to hear any solid rumors about who will win the 2014 Fields Medals, to be announced at the ICM in August. To my mind, Bhargava is a leading candidate. Others one hears discussed include Jacob Lurie (a question is whether he has a big enough theorem; the one he's talking about here may not be finished). Often-mentioned names whose work I know nothing about are Artur Avila and Maryam Mirzakhani.
- Another area of huge progress in mathematics over the past year or so has been work of Peter Scholze, who is another excellent Fields Medal candidate, but one young enough that this might get put off until 2018. I’ve been hoping to understand his results on torsion classes in Langlands theory well enough to say something sensible here, but I’m definitely not there yet, maybe some day in the future. In the meantime, watch his extremely clear lectures at the IAS (here, here and here) as well as the talks at this recent MSRI workshop.
- The math community award structure is for some reason prejudiced against the middle-aged, with the high-profile prizes going to the young (Fields Medals) and the old (Abel Prize). This year's Abel Prize went to Yakov Sinai, and again, I'm in no position to explain his work. However, Jordan Ellenberg was, and there's video here of the prize announcement, including Ellenberg's talk about Sinai's work. In the past, Timothy Gowers gave such talks, with not everyone happy about this. No news yet on whether Sowa will change his blog name to Stop Jordan Ellenberg!!!
- Leila Schneps is trying to raise funds for an English translation of the 3rd volume of Winfried Scharlau's German-language biography of Grothendieck. Go here to contribute; I just did.
Turning to physics news:
- The recent BICEP2 data is still attracting a lot of attention. Initial news stories were often dominated by nonsense about the multiverse; more recent ones are more sensible, including Adrian Cho at Science Magazine, Clara Moskowitz at Scientific American, and a George Musser interview with Gabriele Veneziano. Yesterday Perimeter hosted a workshop on implications of BICEP2. The theory talks I looked at didn't seem to me to contain much that was convincing, except that Neil Turok acknowledges that this kills off the Ekpyrotic (bouncing brane) Universe and is paying off a $200 bet. For some reason, Nima Arkani-Hamed now seems to speak at every single fundamental physics meeting, so he was also at this one. More interesting were the experimental talks, with new data soon on its way, including Planck results planned for October, possibly measuring r to +/-.01 (BICEP2 says r=.20).
- For some perspective on inflationary theory, CU Phil in a recent comment section points to a new volume on the Philosophy of Cosmology. It includes some great articles by George Ellis and Helge Kragh putting multiverse mania in context, as well as an enlightening discussion from Chris Smeenk of the issues surrounding inflationary theory, especially "Eternal Inflation".
- For the latest news about LHC results coming in from the Run 1 dataset, see this report from the Moriond conference.
- Finally, physics continues to inspire frightening movie projects, see here.
"possibly measuring r to +/-.01 (BICEP2 says r=.20)"
Does this mean r might be (if this rumor is true) 0.00 +/- .01, or is this just a comment on Planck's sensitivity being +/- .01? Thanks for answering.
katzee,
I think one of the speakers from Planck showed a plot of projected sensitivity, showing the possibility of measuring r=0.05 to +/- .01, and I'm just assuming the accuracy number is similar for the possibly larger r claimed by BICEP2. This number does not claim to include systematics, so the likely Planck number will not be anything like this accurate. It sure sounded as though, if BICEP2 is right about r=.2, Planck should definitely see it.
Peter:
The official statement from the Planck team is that if the BICEP2 signal is due to inflationary gravity waves, Planck has the sensitivity to see it, but that it is not clear whether they will be able to treat the foregrounds well enough to disentangle the cosmological signal. So at the moment, Planck is not claiming that they will be able to either confirm or rule out the BICEP2 result.
Skysy
If they can't, then given the cost of the project it will be highly embarrassing not to deliver a better measurement than BICEP2. Sure, they're doing the whole sky and they have also focused on intensity measurements as well as polarization, but come on, their equipment is in space – at very high expense.
And why has the result been so delayed – is it to prevent a Nobel award this year to the rival team? (lol) Originally Planck said polarization results would be out in early 2014; now we're being quoted October and even November – will they actually publish by Christmas? I hope Planck won't be another Gravity Probe B, where publication of the results was delayed for ages and eventually not completely compelling.
JG: Gravity Probe B got delayed by 40 years. When it was launched, no one really cared one way or the other about the result.
JG:
The official date for releasing Planck polarisation results is October. The hard deadline is December, when there will be two conferences about the Planck results.
Planck was never optimised for polarisation, neither from the point of view of the detectors nor of the (obviously related) sky scanning strategy. Planck was designed to give the best possible measurement (within cosmic variance limits) of the temperature anisotropy of the CMB. They have delivered this, as planned.
I don’t know why Planck was originally not designed with more view towards polarisation (remember that the satellite was designed as a competitor, not as a successor to WMAP), but one reason may have been that it wasn’t expected that there would be anything interesting to see at this level. (Even on the website of the South Pole Telescope, which saw B-modes from lensing in July 2013, it is noted that there isn’t much reason to expect inflationary B-modes with an amplitude higher than the B-modes from lensing except “in the most optimistic inflationary models”.)
Also, treating the foregrounds for the full sky is a lot more difficult than for the relatively clean patch chosen by BICEP2, and this has been the main reason for the delay in the release of the polarisation data.
Note that confirming the BICEP2 data will require Planck measurements of the foregrounds in the sky patch BICEP2 looks at. The BICEP2 team notes in their paper that the main uncertainty in the foregrounds is the lack of a polarised dust map, to be delivered by Planck. (Though the foregrounds are expected to be clearly smaller than the signal they see.)
Thanks Syksy,
I was going by the Douglas Scott talk at Perimeter, see
http://pirsa.org/displayFlash.php?id=14040122
At around 21 min he shows a plot of simulated data for r=.05, from some sort of semi-public, not-well-known Planck document, and that's where the +/- .01 number comes from. He does explicitly say this is for no foreground. Maybe I'm over-interpreting, but it seems like he wouldn't be showing that plot and devoting a lot of time to it if it didn't have some relation to what they might have. If they were getting killed by the foreground, I'd expect more of a talk about how difficult the foreground problem is (that wasn't something he emphasized).
Thanks Syksy (sorry for misspelling your name above).
Yes, I understand that one main contribution of Planck's polarization data will be an accurate dust map to enable BICEP2 (and other experiments) to improve their data analysis.
I wonder, if Planck comes up with a much more accurate value of r than BICEP2's (which is eventually confirmed), who will get more credit historically? How far away from 0.2 will the final experimentally confirmed value have to be for BICEP2 to lose significant credit?
There's nothing embarrassing. The BICEP2 result actually requires Planck results in order to work: you can look up their probability contours, which say "Planck + BICEP2", not "BICEP2".
Aside from that, Planck has measured many other things. It’s lowered the value of Hubble’s constant and of the dark energy density, and it’s giving us more precise maps of interstellar emission.
Penrose expresses skepticism, not about the BICEP2 results, but about their interpretation as evidence for inflation. http://www.sciencefriday.com/segment/04/04/2014/sir-roger-penrose-cosmic-inflation-is-fantasy.html
My understanding is that the main problem for Planck is not foregrounds but the scanning strategy. To measure polarisation you need ideally to have two otherwise identical detectors sensitive to orthogonal polarisations pointed at the same part of the sky, or to rotate a single detector through 90 degrees and re-image. Planck attempts the second option by changing the detector angle for successive scans of the same sky region.
Unfortunately their scanning strategy is such that for some sections of the sky the scans are not performed with a full 90 degree rotation but something less than that (apparently this had something to do with this strategy optimising the rate of data transfer back to earth, which was necessary for the high-precision temperature measurements). So for those sections their ability to decompose the polarisation components is limited. Obviously this impacts their ability to extract the large-l polarisation signal, which is unfortunately where the gravitational wave signature will be.
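Schematically (idealizing away beams, gains and noise), a single polarization-sensitive detector oriented at angle $\psi$ on the sky measures roughly $d(\psi) = I + Q\cos 2\psi + U\sin 2\psi$. Revisiting the same pixel with the detector rotated by a full 90 degrees flips the sign of the Q and U terms, so differencing the two scans cancels the much larger temperature signal while isolating the polarization; with poorer angle coverage the linear system for Q and U is less well conditioned, and small mismatches between detectors can leak temperature into the polarization maps.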
In terms of foregrounds I would have thought Planck should actually be better placed than BICEP, because it observes the sky at so many different frequencies, and therefore is better able to determine what the actual foreground is (BICEP rely on assumptions about the foreground).
Peter:
I checked that part of the presentation, and I don't think there's anything there other than the official Planck position: they have the sensitivity. The Planck polarisation analysis is ongoing; showing a simulated plot like that doesn't imply that this is what they're actually seeing.
Sesh:
The scanning strategy affects the sensitivity, but according to Planck, sensitivity is not the problem. Or have I misunderstood something?
Seth, that explains it very clearly.
Does anyone know if the delay in publication of the polarization map has been influenced by BICEP2, or was the delay announced previously?
Note that the original release date was stated as "early 2014" in March 2013, after the release of the temperature map; see the Notes for Editors:
“The next set of cosmology data will be released in early 2014”
I mean Sesh 🙂
“Next set of cosmology data” does not necessarily imply results on all conceivable measurable parameters, does it?
It will be interesting to see whether their value of Hubble's constant remains 9% smaller than the value determined from standard candles.
Syksy:
I don't know for sure, obviously, but I was extrapolating from the fact that the reason Planck have not already released their polarization data is that they are having a hard time getting rid of systematic effects at large-l. The effect I mentioned above would affect large-l reconstructions, so I put two and two together and guessed this would be the main problem.
Since then I've looked again at this presentation, which suggests again that the problems are to do with systematics at large-l. Depending on how exactly the "gain variation", "bandpass mismatch" and "calibration mismatch" arise, perhaps my earlier simple explanation was wrong – but it does appear to be something to do with the scanning strategy that is causing leakage from temperature to polarization at large scales.
Note also that even BICEP have to beat down T->B leakage by a factor of ~100 before they can extract a BB signal. Since Planck have seven different frequency bands and can already remove foregrounds so well from the temperature maps, it just seems unlikely to me that foreground removal is the main issue.
Sesh,
I doubt that foregrounds due to astronomical dust are calibrated to the most ideal precision. If they were, the Planck maps would be able to use the entire sky in their analysis; instead they still need to mask parts of the Galactic plane, even with their seven bandpasses. Away from the plane, the error will still be present, though smaller by a significant factor.
Further, another astrophysical foreground (background?) to worry about is weak lensing distortion, which I know much less about.
Sure, I understand that. But Planck sees the part of the sky that BICEP2 saw, as well as the galactic plane and all other regions. So if the BICEP2 result is not due to foregrounds, foregrounds should not prevent Planck from confirming it using at least that patch of sky. Conversely, if Planck discovers foregrounds are unaccountably large even on that patch of the sky, then BICEP2 didn't really have a detection after all. So I still don't see how the foreground problem can be worse for Planck than for BICEP2.
A couple of comments.
As some members of the BICEP team are also on Planck, it seems highly likely that, whatever the official status of published foreground maps, the BICEP team have some idea of what the foregrounds are going to be. And no one I have heard from Planck has said that foregrounds are their principal worry about the BICEP result (leakage of E -> B is what I hear more commonly).
@David: Planck does not measure H_0. They measure the CMB, and by extrapolating the CMB forward using LambdaCDM they make a prediction for the value of the Hubble constant in the local universe. But this is a model-dependent prediction, not a measurement.
For example, with an extension of LambdaCDM+r to include Neff, both the Planck-BICEP tension on r and the Planck-H0 tension go away.
Sesh,
Planck observes the same patch as BICEP, but they do not observe that patch with the same sensitivity as BICEP. Planck effectively gains sensitivity by measuring many BICEP-sized patches across the sky – but many of those are much more heavily contaminated by foregrounds.
Also, removing foregrounds from temperature maps is much easier in many ways than in polarization – the signal/foreground ratio is higher, for one. Polarization is a (pseudo)vector field, so there's more information there than in temperature maps; thus the temperature maps tend not to make very good templates, either.
Having seen some of these guys in the flesh at MIT’s big evening public lecture last Thursday, I should briefly report on the event. Little mention of the “multiverse,” BTW. The evening’s MC, the MIT theorist Ed Farhi, specifically banned talk of multiverses until the question session after the last talk, which I couldn’t stay for. Here’s the flyer:
http://lsc.mit.edu/schedule/2014.2q/desc-physics.shtml
The speakers were Guth (inflation theory), Hughes (gravitational waves), Kovac (BICEP2 co-lead), and Tegmark (general implications). All the talks were excellent, especially Guth’s. I had forgotten what a great speaker he is. He used no equations and barely any jargon beyond what undergraduate science students could understand. His only real moment of weirdness was talking about that funny inflationary equation of state, p = -rho, leading to gravitational repulsion.
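(For anyone who wants the one equation Guth left out: in the standard FRW acceleration equation $\ddot{a}/a = -\frac{4\pi G}{3}(\rho + 3p)$, setting $p = -\rho$ turns the right-hand side into $+\frac{8\pi G}{3}\rho > 0$, so the expansion accelerates rather than decelerates; hence the "gravitational repulsion".)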
Kovac's summary of the BICEP2 instrument, methods, and results was impressive and should silence most of the skeptics and critics. While some questions remain open about the foreground corrections, the checks they put their experiment through were exceptional. More to come with BICEP3, all part of the Keck program.
(This is one of those experiments where it’s *foregrounds* you want to eliminate, not backgrounds.)
Tegmark gave a nice summary, including the retro-surprise of how the BICEP2 results, with the inflationary scale at 2*10^16 GeV, take us back to the original and simplest inflationary models of the early 1980s, based on GUTs. He showed in passing just how many inflation proposals are ruled out, including all known string versions. String-inspired inflation has a hard time getting the inflation/Planck hierarchy to be 1/600 or whatever and getting "r" (tensor/scalar) to be larger than 10^(-4). Only string-inspired axion monodromy is larger, but at 10^(-3), still too small. While Tegmark didn't dwell on it, things look bad indeed for strings.
Given the quality of BICEP2’s measurement, it’s unlikely that they’re simply wrong relative to Planck. The polarization limit inferred by the Planck team is an indirect upper limit, with some mild model dependency. For my money, I would bet that Planck’s interpretation is just off. As a number of commenters here have mentioned, it’s not an instrument designed for polarization measurements.
If Tegmark said that about string theory, he is incorrect.
He and others should be more careful in their claims.
Axion monodromy in string theory gives r in a range from order 10^{-2} to over .1. It does not give r of order 10^{-3}. Where do you get that?
There is on the other hand no direct connection to GUT physics, which has less evidence going for it than inflation at this point.
David, that’s a strange thing to say: “Next set of cosmology data” does not necessarily imply results on all conceivable measurable parameters, does it?
The statement is from Planck's notes for editors around the world, who may not all be scientifically literate enough to understand "polarization map" (or be concerned with the details) – of course they meant that the polarization data was due for release in "early 2014" – the interpretation of the statement is unambiguous.
Anyway I don’t want to make a big deal about it, just wondered if the BICEP2 announcement came as a bit of a shock to the Planck team and caused the delay to publication of their results to October.
As for Tegmark claiming r=0.2 is bad for String Theory – well, a bit strong perhaps (ask Eva Silverstein), but we all know what the String people would be saying if BICEP2 had ruled out a high value of r – they would no doubt be claiming another "prediction" of ST.
Tegmark didn't say it so strongly, just mentioning various theories in passing, noting that the simplest potentials, with the right hierarchy of scales, were favored and "others" were disfavored. He had a table up that I almost missed and didn't get a good look at.
Axion monodromy is the only string-based inflation model that comes close, at least that I'm aware of. The original model predicted "r" of about 10^(-3), but values as high as 10^(-2) to 10^(-1) are possible according to some more recent papers. It's a stretch to get it to match the BICEP2 result.
Generally, the BICEP2 result implies a large field excursion of a few × M_Planck and a vacuum energy scale of ~10^16 GeV. The latter scale is dead-on for GUTs.
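(For reference, these are the standard single-field slow-roll relations, quoted only approximately: the tensor-to-scalar ratio fixes the inflationary energy scale through $V^{1/4} \approx 1.06 \times 10^{16}\,\mathrm{GeV}\,(r/0.01)^{1/4}$, which for $r \approx 0.2$ gives roughly $2 \times 10^{16}$ GeV, while the Lyth bound $\Delta\phi/M_{\mathrm{Pl}} \gtrsim O(1) \times (r/0.01)^{1/2}$ then puts the field excursion at a few $M_{\mathrm{Pl}}$ for r this large.)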
Some more digging: watch Will Kinney’s Perimeter talk from March:
http://pirsa.org/displayFlash.php?id=14030116
At about 57 min, he mentions the more recent “large r” axion monodromy, but points out that it’s strongly disfavored by the combination of BICEP2 and other data. Other string-based models give much smaller “r” values, even less in agreement with BICEP2 and other data. Kinney stresses the critical empirical pressure that high “r” places on theories (and theorists). He likes the “shift symmetry” models, of which axion monodromy is one (which is why it comes close to working).
The really important thing for the next BICEP measurement is not agreement with Planck, but getting a broader range of multipoles.
On a slightly different topic in your post, Peter:
The volume you mention on the philosophy of cosmology looks very interesting, lots of no-nonsense physicky papers from physicists like George Ellis. The volume is actually Issue 44 of the journal ‘Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics’ . This journal generally features contributions from philosophers and historians more than physicists, as far as I’m aware, but there’s some serious reading in Issue 44, thanks for that…
Regarding an old post on this blog about the film “The Principle”
http://www.math.columbia.edu/~woit/wordpress/?p=6624
we now have news that many of the participants feel that they were duped into participating in it, including Star Trek's Kate Mulgrew.
http://thinkprogress.org/culture/2014/04/08/3424505/kate-mulgrew-duped/
Lawrence Krauss on Slate.com:
http://www.slate.com/blogs/future_tense/2014/04/08/lawrence_krauss_on_ending_up_in_the_geocentricism_documentary_the_principle.html
"These ______ arguments are about as unscientific as things get." Playing devil's advocate here, but as idiotic as geocentrism is, it's still more scientific than the MUH. Geocentrism has a clear hypothesis. It makes predictions. It's been tested innumerable times. It's been falsified innumerable times.
Please, enough about geocentrism. This was all discussed ad nauseam here
http://www.math.columbia.edu/~woit/wordpress/?p=6624
back in January.
The one mystery here is that of how physicists ended up agreeing to be filmed. This is definitely the most impressive story of physicists being taken in by filmmakers since string theorist Clifford Johnson was induced to appear on a program devoted to “How Big do Boobs Have to be to Crush a Beer Can?”, see
http://www.spike.com/video-clips/9z58hb/manswers-beer-crushing-boobs
In regards to the BICEP2 result, did anybody else catch the article published yesterday in New Scientist titled "Star dust casts doubt on recent big bang wave result"? The cited paper (see arXiv:1404.1899), upon which the article is based, was submitted to arXiv this past April 7th. It seems that the main cause for concern is that BICEP2's foreground analysis does not include the possible contamination of the polarization signal by this dust emission, which would then be incorrectly attributed to primordial gravitational waves from inflation; i.e., the patch of sky used by BICEP2's telescope isn't quite as clean as once thought. (See, for example, the "Conclusion" section of the above paper.) Anyway, the BICEP2 team has already admitted that the contribution by dust (a foreground component) to the polarization signal is one of the main caveats to their measurement. As always, time will tell.
As an addendum to my post above, I just noticed that “Physics World” did an April 10th article on this very same story as well, which included a lot more info; see “Have galactic ‘radio loops’ been mistaken for B-mode polarization?” The story was also talked about on the blog “In The Dark”; see “Galactic Loops as Sources of Polarized Emissions“. Even though it is hard to see how these “loops” wouldn’t have an impact on BICEP2’s result, it would probably be best to just keep an open mind, and see how this all shakes out after Planck’s release later this year.