The timing for the release of the long-awaited Planck polarization data keeps getting pushed back. At one point the release was supposed to happen earlier this year; most recently it was supposed to be this month, with that timing forced by a conference devoted to discussion of the results, planned for December 1-5. The website for that conference now says:
The 2014 Planck public release of data products and papers will actually take place a few weeks after this conference. This conference is therefore the first occasion to preview the Planck 2014 data products and discuss their scientific impact. The presentations will be videocast online. After the conference, the presentation slides will be made available.
Another conference, scheduled on the assumption that the data will have been released by then, is this one in Paris, December 15-19.
The Planck website now reports:
– The data products and scientific results will be presented at a public conference in Ferrara. The presentations will be videocast during the conference and slides will be made available after the end of the conference.
– It is planned to release all major data products and scientific papers to the public before the end of 2014. A few of the derived products (e.g. the Likelihood code) will need a little more time to be readied for release, but will be made public within the month of January 2015.
David Spergel on Twitter reports December 22 as the date for release of papers and data.
It will be interesting to see how the cosmology community deals with the situation from December 1 on: no papers or data, just a videocast and slides. Judging from similar situations in the past, some people have highly developed technology for scraping data off slides; presumably that will be in high demand.
Given that one informally hears from people in the collaboration that it is a desperate rush to get these papers written in time, the question arises as to how trustworthy and nailed-down the results will be. For example, even the WMAP-9 paper, with much more time spent on a well-understood instrument, had in its original version a significant error in the headline numbers of the abstract.
Oh dear, so it looks like the 700-million-euro Planck project won’t be able to rule out BICEP2’s conclusions after all. A Christmas-time release of disappointing news is an old trick.
Everyone expects there to be a mad dash for whatever data can be gleaned from presentation slides. I am more curious about what the Planck collaboration is doing to try to manage the free-for-all.
With seemingly reliable techniques to estimate the uncertainties inherent in screen-scraped maps, I hope we won’t see a replay of previous silliness. If Planck doesn’t come out with its own preliminary parameter estimates, someone else definitely will (poorly done or not) using their now public data. So I expect some actual numbers from the first December conference and not just a flashing of maps on the screen for a second or two.
Or the entire presentation could be devoid of images, which would be a hilarious tease for the model-building theorists.
I think that with Planck’s high resolution, screen-scraping is less valuable, and the current excitement is around B-modes, which you will never get in any believable way by screen-scraping a polarization map. I imagine the main reason for delaying the release of the maps is precisely that they know full well that the instant the maps are out, people will be running their own analysis code and publishing power spectra and cosmological parameter estimates (a rough sketch of how little that takes follows below).
In any case, it sounds like their plan is to release the maps and basic parameter results at the same time. The likelihood code gets released a little later, mostly because that’s a pretty big chunk of software and data that has to be cleaned up once the analysis chain has settled down.
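To give a sense of how little stands between a released map and a home-made power spectrum, here is a minimal sketch (my own illustration, not anything from the Planck pipeline). It assumes the Python healpy package; the input spectrum is a made-up placeholder, and a simulated map stands in for a downloaded one:

# Illustrative only: turning a sky map into an angular power spectrum
# with standard tools. Requires healpy; the "fiducial" spectrum below
# is a made-up placeholder, not Planck data.
import numpy as np
import healpy as hp

nside = 256                  # toy map resolution (real Planck maps are finer)
lmax = 3 * nside - 1

# Toy input spectrum, roughly 1/[l(l+1)]-shaped, purely for illustration.
ell = np.arange(lmax + 1)
cl_in = np.zeros(lmax + 1)
cl_in[2:] = 1.0 / (ell[2:] * (ell[2:] + 1.0))

# Simulate a map from that spectrum (stands in for a downloaded map).
m = hp.synfast(cl_in, nside, lmax=lmax)

# One line to get the angular power spectrum back out of the map.
cl_out = hp.anafast(m, lmax=lmax)

print("recovered C_ell at l=10:", cl_out[10], "input:", cl_in[10])

Replace the simulated map with one read in via hp.read_map and, roughly speaking, the anafast call is already a first-pass “analysis”; doing it properly (masks, noise, beams) is of course where all the real work is.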
Ben Gold: it doesn’t seem to make much sense to release parameter results and model constraints if the likelihood code is not settled. Probably only those non-controversial results will be released for which a later change in the likelihood code is unlikely to produce a significant change in the parameter fits, which means there won’t be much new to learn from Ferrara.
Sesh: I meant more that the likelihood code release usually happens after running -all- chains, including the weird stuff like simultaneously allowing tensor running, variable neutrino number, isocurvature modes, etc. Some of those chains can take a long time to converge, and sometimes involve changes to the code which are insignificant unless you’re one of the three people worldwide interested in that particular model. But they’re totally unnecessary for a “basic results” paper.
That doesn’t mean only non-controversial stuff can be released before then, since there could be something new that’s demonstrably robust against other weird parameters (say, n_S running) without requiring lengthy chains of every possible parameter combination.
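For what it’s worth, here is a toy illustration of the convergence point, using the emcee sampler as a stand-in (nothing to do with the actual Planck likelihood code): even for a trivially simple posterior, adding parameters lengthens the autocorrelation time, so chains need correspondingly more steps for the same quality of constraints.

# Toy illustration only: more parameters -> longer autocorrelation times,
# hence more steps needed for converged chains. Requires emcee; the
# Gaussian "posterior" is a stand-in, not a CMB likelihood.
import numpy as np
import emcee

def log_prob(theta):
    # Placeholder posterior: independent unit Gaussians in every parameter.
    return -0.5 * np.sum(theta ** 2)

for ndim in (6, 12):             # think: base model vs. model with extra parameters
    nwalkers = 4 * ndim
    p0 = np.random.randn(nwalkers, ndim)          # walkers start near the target
    sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
    sampler.run_mcmc(p0, 2000, progress=False)
    tau = sampler.get_autocorr_time(quiet=True)   # crude convergence diagnostic
    print(f"{ndim} parameters: mean autocorrelation time ~ {np.mean(tau):.1f} steps")

And of course each evaluation of a real CMB likelihood is vastly more expensive than this toy one, which is what makes the exotic-model chains so slow in practice.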
Peter, as always, thank you very much for the comprehensive coverage! Interestingly, the following statement is now missing from the conference page: “The presentations will be videocast online. After the conference, the presentation slides will be made available.”
Maybe they changed their minds to prevent the problems you alluded to (screen-scraping, etc.)?