Friday, 20 March 2015
LHCb: B-meson anomaly persists
Today LHCb released a new analysis of the angular distribution in the B0 → K*0(892) (→K+π-) μ+ μ- decays. In this 4-body decay process, the angles between the directions of flight of all the different particles can be measured as a function of the invariant mass squared q^2 of the di-muon pair. The results are summarized in terms of several form factors with imaginative names like P5', FL, etc. The interest in this particular decay comes from the fact that 2 years ago LHCb reported a large deviation from the standard model prediction in one q^2 region of one form factor called P5'. That measurement was based on 1 inverse femtobarn of data; today it was updated to the full 3 fb-1 of run-1 data. The news is that the anomaly persists in the q^2 region 4-8 GeV^2, see the plot. The central value moved a bit toward the standard model, but the statistical errors have shrunk as well. All in all, the significance of the anomaly is quoted as 3.7 sigma, the same as in the previous LHCb analysis. New physics that effectively induces a new contribution to the 4-fermion operator (\bar b_L \gamma_\rho s_L) (\bar \mu \gamma^\rho \mu) can significantly improve agreement with the data, see the blue line in the plot. The preference for new physics remains high, at the 4 sigma level, when this measurement is combined with other B-meson observables.
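For concreteness, this operator is the one usually called O9. In one common normalization of the effective Hamiltonian (the one I'll use here; the talks may normalize things slightly differently) it enters as

H_eff ⊃ -(4 G_F/\sqrt{2}) V_{tb} V_{ts}^* (e^2/16\pi^2) C_9 (\bar s_L \gamma_\rho b_L)(\bar \mu \gamma^\rho \mu) + h.c.,

so the new-physics fits amount to shifting the Wilson coefficient C9 away from its standard model value.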
So how excited should we be? One thing we learned today is that the anomaly is unlikely to be a statistical fluctuation. However, the observable is not of the clean kind, as the measured angular distributions are susceptible to poorly known QCD effects. The significance depends a lot on what is assumed about these uncertainties, and experts wage ferocious battles about the numbers. See for example this paper where larger uncertainties are advocated, in which case the significance becomes negligible. Therefore, the deviation from the standard model is not yet convincing. Other observables may tip the scale. If a consistent pattern of deviations emerges in several B-physics observables, only then can we trumpet victory.
Plots borrowed from David Straub's talk in Moriond; see also the talk of Joaquim Matias with similar conclusions. David has a post with more details about the process and uncertainties. For a more popular write-up, see this article on Quanta Magazine.
11 comments:
Yes, while the blue curve fits the data better in the middle of the energy values, it's actually worse than the standard model at the low and high energy values.
My questions are: does adding the term(s) that cause the SM curve to move upwards have any effect on other data sets? If so, does the inclusion of the non-SM term(s) become more or less statistically significant when other data sets are included?
Yes, the global fit to a large number of B observables improves by ~4 sigma in the presence of the O9 four-fermion operator.
Jester,
What is your best bet at this point on this persisting anomaly:
1) leptoquarks?,
2) Z' bosons?,
3) SUSY partners?,
4) Higgs-like duplicates?
5) erroneous QCD background subtraction?
6) unknown non-perturbative effects?
7) none of the above?
Thanks,
Ervin Goldfain
As with any anomaly, the mundane explanations are always the most likely ones by far. So 5) or 6). If it's indeed new physics, then a new force is the most elegant explanation. So 2).
Hi Jester,
One of the comments on the article by Quanta Magazine mentions:
"They appear to have used a two-tailed test since 3.7σ would correspond to 0.01% using a one-tailed test".
Could you clarify?
Be gentle with me.
Michel
Well, the comment correctly says that if you integrate one tail of the Gaussian distribution from 3.7 sigma to +infinity, that corresponds to 0.01% of the total area under the curve. So 0.02% corresponds to integrating both tails, from -infinity to -3.7 sigma, and from 3.7 sigma to +infinity.
Now, what test statistic LHCb should be using is a bit above my head. I would think that using a two-tailed test is pretty standard, but Kyle is an expert and probably knows better.
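If you want to check the arithmetic yourself, here's a quick illustration (using scipy; purely to reproduce the numbers, not what LHCb actually runs):

from scipy.stats import norm

z = 3.7
one_tail = norm.sf(z)      # area of one Gaussian tail, from 3.7 sigma to +infinity
two_tail = 2 * norm.sf(z)  # both tails, |x| > 3.7 sigma

print(f"one-tail: {one_tail:.2%}")  # about 0.01%
print(f"two-tail: {two_tail:.2%}")  # about 0.02%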
WOW! I actually got that.
Thanks man.
Michel
That Quanta article sure finishes on a sad note, with Nima's fragile heart and Jester's desperation. Do we really think that there will never be a bigger machine if nothing more is discovered at the LHC? Hasn't China been making noise about a Higgs factory in 2028? Surely that has at least some chance of happening, perhaps with international support.
Jester, anonymous who mentioned two-tail:
That's rather strange - the convention in ATLAS and CMS is one-tail, e.g. from the Higgs discovery (http://arxiv.org/abs/1207.7235)
"Both the local and global p-values can be expressed as a corresponding number of standard deviations using the one-sided Gaussian tail convention"
FWIW, 3.7\sigma with a two-tail convention is about 3.5\sigma with a one-tail convention. This in itself is a minor mistake but it damages my faith in their rigour.
I'm also worried about the "naive" result that two 2.9\sigma anomalies result in a 3.7\sigma anomaly. How were they added? This is a relatively big anomaly - many theorists might spend time on it. IMHO they should make a comprehensive evaluation that they are fully confident about before presenting it. I don't think they are following best practice.
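For what it's worth, the convention conversion is easy to check numerically; this is just my own back-of-the-envelope check with scipy, not LHCb's procedure:

from scipy.stats import norm

p_two_tail = 2 * norm.sf(3.7)      # p-value of 3.7 sigma in the two-tail convention
z_one_tail = norm.isf(p_two_tail)  # the same p-value expressed as a one-tail significance
print(f"{z_one_tail:.2f} sigma")   # roughly 3.5 sigma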
Xezlec, a 100 TeV collider will happen one day, but it's far in the future, whereas we need something to do in the next 25 years. A Higgs factory will be essential for precision measurements of the Higgs couplings, but it will not help explore the high energy frontier. That's why there's this feeling that we badly need a discovery NOW.
The Higgs discovery is a one-sided thing - it is either there (increasing the number of events) or not, there is nothing that could produce a "negative peak".
The LHCb measurements are different, deviations in both directions are possible. It makes sense to use two-sided tests.