Sunday, 25 May 2014

Weekend plot: BICEP limits on tensor modes

The insurgency gathers pace. This weekend we contemplate a plot from the recent paper by Michael Mortonson and Uroš Seljak:

It shows the parameter space of inflationary models in the plane of the spectral index ns vs. the tensor-to-scalar ratio r. The yellow region is derived from Planck CMB temperature and WMAP polarization data, while the purple regions combine those with the BICEP2 data. Including BICEP gives a stronger constraint on the tensor modes, rather than a detection of r≠0.

The limits on r from Planck temperature data are dominated by the large angular scale (low l) data, which themselves display an anomaly, so they should be taken with a grain of salt. The interesting claim here is that BICEP alone does not hint at r≠0, once the most up-to-date information on galactic foregrounds is used and the current uncertainties are marginalized over. In this respect, the paper by Michael and Uroš reaches conclusions similar to those of the analysis by Raphael Flauger and collaborators. The BICEP collaboration originally estimated that the galactic dust foreground could account for at most 25% of their signal. However, judging from scavenged Planck polarization data, it appears that BICEP underestimated the dust polarization fraction by roughly a factor of 2. Since this enters squared in the B-mode correlation spectrum, dust can easily account for all of the signal observed by BICEP2.

The new paper adds a few interesting details to the story. One is that not only the normalization but also the shape of the BICEP spectrum can be reasonably explained by dust, if it scales as l^-2.3 as suggested by Planck data. Another is the importance of gravitational lensing effects (neglected by BICEP) in extracting the signal of tensor modes. Although lensing dominates at high l, it also helps to fit the low-l BICEP2 data with r=0. Finally, the paper suggests that it is not at all certain that the forthcoming Planck data will clean up the situation: if the uncertainty on the dust foreground in the BICEP patch is of order 20%, which looks like a reasonable figure, the improvement over the current sensitivity to tensor modes may be marginal. So, BICEP may remain a Schrödinger's cat for a little while longer.
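To spell out the arithmetic behind the dust claim: schematically, the dust B-mode power scales as the square of the polarization fraction p, so doubling p quadruples the expected dust power,

$$C_\ell^{BB,\,\mathrm{dust}} \propto p^2 \quad\Rightarrow\quad \frac{C_\ell^{BB}(2p)}{C_\ell^{BB}(p)} = 4,$$

which turns "dust accounts for at most 25% of the signal" into "dust can account for up to 100% of it".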

6 comments:

  1. I think you are overstating the importance of this latest paper. Firstly, they assume a completely flat prior on the dust amplitude, which means they're (arbitrarily) allowing a dust model that can account for all of the BICEP signal. So it's no surprise they find their model can account for all of the BICEP signal. But this says nothing about what the real level of dust contamination is: that would require an actual measurement of dust foregrounds, which this paper does not provide. We already knew that Planck and BICEP favour different values of r.

    Also, there is a technical issue, which is that importance sampling of pre-produced Planck chains is a bad way of assessing likelihoods when you add new data that can potentially completely change the posterior. You can already see some of the artefacts this introduces in the jagged shape of those contours. It's not going to give sensible results for r=0.2. The proper approach is to re-run the chains from scratch.
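    To see what I mean in a toy example (made-up Gaussian numbers, not the real Planck chains or BICEP2 likelihood): if the new likelihood pulls the posterior into a region the old chain barely sampled, the effective sample size of the reweighted chain collapses, and that is exactly where jagged contours come from.

    ```python
    import numpy as np

    # Toy model: importance-reweight samples from an old posterior
    # (peaked near r = 0) under a hypothetical new likelihood that
    # prefers r ~ 0.2. All numbers here are illustrative only.
    rng = np.random.default_rng(0)
    old_chain = np.abs(rng.normal(0.0, 0.05, size=10_000))  # "old" r samples

    def new_loglike(r, r0=0.2, sigma=0.05):
        # Hypothetical new data preferring r ~ r0
        return -0.5 * ((r - r0) / sigma) ** 2

    # Importance weights = new likelihood evaluated on the old samples
    logw = new_loglike(old_chain)
    w = np.exp(logw - logw.max())
    w /= w.sum()

    # Effective sample size: close to N when the posteriors overlap,
    # tiny when the new data live in the old chain's tail
    ess = 1.0 / np.sum(w**2)
    print(f"effective sample size: {ess:.1f} out of {old_chain.size}")
    ```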

  2. The slope of the curve at low l is negative for dust and positive for gravitational waves.
    As such, it's hard to see how dust and gravitational lensing alone could explain the BICEP2+KECK results, as you've shown from the slides in your last post. A value of r between 0.1 and 0.2 seems like the best explanation so far.

    But since BICEP2+KECK has only presented data at one frequency (and at a frequency that picks up dust if it's there), I think that we need to wait for independent confirmation or disconfirmation before stating anything with certainty.

  3. Jester, it looks like this conundrum will not be solved without either a direct measurement of dust in the patch or an observation at another frequency. Which one can we expect first?

  4. Planck's dust maps should arrive first, in November they say, though it is not clear if they can clarify the situation completely. But in the coming years there will be several experiments observing this and other patches of the sky with better precision, so sooner or later we will know.

  5. I wonder why people focus so much on the dust foreground when, in addition, the high-l part of the spectrum is so badly reproduced by the "fitted" model. Taking all the measurements into account, one would like to upscale the lensing contribution by at least a factor of 2. This alone would bring the BICEP2 r below 0.1.

    Or, another (better, IMHO) way to put it: the uncertainty in the scale factor afflicting the lensing mock-up in the fit should contribute to the systematics on r.

    And that is without selecting the 5 "best" high-l bins; see their discussion of Fig. 12.

  6. Sesh, the flat prior on the dust amplitude is actually the most conservative prior they could take. Based on Aumont's presentation at ESLAB, and taking into account the amount of dust intensity in the BICEP2 patch (which is known), the actual prior on the dust polarization amplitude should peak at a level at or slightly above the maximal value allowed by the BICEP2 data. Aumont's presentation does not give enough information to deduce what the width of this prior should be; it could very well be very narrowly peaked at that value. So the most conservative approach is to assume the prior is flat.

    Also, when you add new data that do not favor one solution over the other (the BICEP2 data) to a data set that favors r=0 over r=0.2 (the Planck data), the combined posterior is dominated by the latter and also does not favor r=0.2: there is no need to re-run the chains, it won't change anything. This would only be a valid concern if the two data sets were in actual disagreement, and there is no evidence that they are.
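    Schematically, combining the data sets is just a product:

    $$P(r \mid \mathrm{Planck{+}BICEP2}) \propto P(r \mid \mathrm{Planck}) \times \mathcal{L}_{\mathrm{BICEP2}}(r),$$

    and if L_BICEP2(r) is roughly flat between r=0 and r=0.2, the combined posterior simply follows the Planck one, so reweighting the existing chains loses nothing.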

    Anonymous: the high-l part is actually well reproduced by dust+lensing; see figure 3 of that paper. The data and the model agree within the errors, once both noise and sampling variance errors are accounted for. No need to upscale the lensing.
