[...So, Resonaances is back with its trademark pessimism and frustration. But here is a post with a glimmer of hope and a bit more substance...]
LHCb recently reported an anomaly in the angular distribution of B0 → K*0 (→K+π-) μ+ μ- decays. The discreet charm of flavor physics is that even trying to understand which process is being studied may give you a serious migraine. So let's first translate to English. B0 is a pseudoscalar meson made of an anti-b and a d quark that is easily found in the junk produced by LHC collisions. K*0, actually K*0(892) because they come in a variety of masses, is a vector meson made of an anti-s and a d quark which promptly decays to an ordinary charged kaon and a pion. B0 → K*0(→K+π-) μ+ μ-, in the following simply referred to as B → K*μμ, is a rare decay occurring with a branching fraction of order 10^-7. Of course, there's also the conjugate process where each particle is replaced with its antiparticle, and the two are dumped together in the LHCb analysis. Some properties of this decay have been studied before at the B-factories, the Tevatron, and the LHC, without finding anything unexpected. The new thing about the latest LHCb analysis is that they study the full monty: the differential distribution with respect to the 4 variables characterizing this four-body decay process, namely the 3 angles θK, θl, and φ (see the picture) and the invariant mass q^2 of the di-muon pair. A parametrization of that differential distribution is
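(The equation here was an image in the original post. As a stand-in, this is the standard parametrization in the notation of the LHCb analysis, up to sign conventions that differ between papers; FL is the fraction of longitudinally polarized K*0, and S6 is related to the forward-backward asymmetry via AFB = (3/4)S6:)

$$\frac{1}{d\Gamma/dq^2}\frac{d^4\Gamma}{dq^2\, d\cos\theta_l\, d\cos\theta_K\, d\phi} = \frac{9}{32\pi}\Big[\tfrac{3}{4}(1-F_L)\sin^2\theta_K + F_L\cos^2\theta_K + \tfrac{1}{4}(1-F_L)\sin^2\theta_K\cos 2\theta_l - F_L\cos^2\theta_K\cos 2\theta_l$$
$$+\, S_3\sin^2\theta_K\sin^2\theta_l\cos 2\phi + S_4\sin 2\theta_K\sin 2\theta_l\cos\phi + S_5\sin 2\theta_K\sin\theta_l\cos\phi + S_6\sin^2\theta_K\cos\theta_l$$
$$+\, S_7\sin 2\theta_K\sin\theta_l\sin\phi + S_8\sin 2\theta_K\sin 2\theta_l\sin\phi + S_9\sin^2\theta_K\sin^2\theta_l\sin 2\phi\Big]$$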
Basically, LHCb measured all these Sn and FL coefficients as a function of q^2. The largest anomaly is observed at low q^2 in the parameter S5 (also presented as P5', which is S5 rescaled by a function of FL: P5' = S5/√(FL(1−FL))). LHCb quantifies it as a 3.7 sigma deviation from the Standard Model in the region 4.3 ≤ q^2 ≤ 8.68 GeV^2; this is downgraded to 2.5 sigma once the look-elsewhere effect is taken into account. Theorists fitting the data quote the deviation at anywhere between 1 and 4.5 sigma, depending on the theoretical assumptions and on how the data are sliced and cooked.
The interesting question is whether new physics could be responsible for the anomaly. To go beyond a yes/no answer one has to, unfortunately, wade through a few technicalities. At the parton level, the relevant process is the b → s μ+μ- decay. Theorists computing the B → K*μμ decay thus start from an effective interaction Lagrangian with 4-fermion and dipole operators involving the b- and s-quarks. The operators relevant for this process are
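(Again, the operator list was an image in the original post. Schematically, and up to normalization conventions that vary from paper to paper, the effective Lagrangian reads:)

$$\mathcal{L}_{\rm eff} \supset \frac{1}{\Lambda_{\rm ref}^2}\Big[ C_7\, m_b\, (\bar{s}\,\sigma_{\mu\nu} P_R\, b)\, F^{\mu\nu} + C_9\, (\bar{s}\,\gamma_\mu P_L\, b)(\bar{\mu}\,\gamma^\mu \mu) + C_{10}\, (\bar{s}\,\gamma_\mu P_L\, b)(\bar{\mu}\,\gamma^\mu\gamma_5\, \mu)\Big] + \big(C_i \to C_i',\ P_L \leftrightarrow P_R\big) + {\rm h.c.}$$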
where Λref ≈ 35 TeV. This set of operators allows one to describe the B → K*μμ decays in a completely model-independent way, whether within the Standard Model or in some new physics scenario. In the Standard Model a subset of these operators is generated (see the diagrams), with the coefficients C7, C9 and C10 of order 1 (the suppression scale of the effective operators is tens of TeV due to the loop suppression, and also due to the CKM suppression via the small Vts matrix element; this is why B → K*μμ is so sensitive to new physics). New physics could provide additional contributions to these 3 operators, or produce the C' operators that are not generated in the Standard Model at all. For example, the tree-level exchange of a Z' boson coupled to leptons and, in a flavor-violating way, to quarks could affect C9 and C9'; the dipole operators C7 and C7' could be generated e.g. by loop diagrams with a charged Higgs boson, and so on. Now, all of these operators affect the angular distribution of B → K*μμ decays; in particular, they can shift the anomalous observable S5/P5'. But one should be careful not to screw up the other observables that remain in good agreement with the Standard Model. Moreover, the same operators also affect countless other processes in the B-meson sector, including the well-measured branching fractions for the B → Xs γ and Bs → μμ decays. Thus, it is a non-trivial question whether a consistent solution to the anomaly can be found. The answer is that, indeed, there do exist regions in the parameter space where the fit to the data is much better than in the Standard Model. According to this paper, the best scenario is the one where new physics generates simultaneously C9 and C9', with the contribution to C9 similar in size but opposite in sign to the Standard Model effective contribution. Other combinations of the operators can also improve the fit, but the gain is less striking.
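To see what scale of new physics this points to, take the Z' example above and do the tree-level matching (a back-of-the-envelope sketch; gbs and gμ are placeholder couplings, a flavor-violating left-handed coupling to quarks and a vector coupling to muons). Integrating out a Z' of mass MZ' gives, up to the sign convention,

$$\frac{C_9}{\Lambda_{\rm ref}^2} \sim -\frac{g_{bs}\, g_\mu}{M_{Z'}^2},$$

so an order-1 shift of C9 requires MZ'/√(gbs gμ) ~ Λref ≈ 35 TeV. For a Z' at, say, 3 TeV this means gbs gμ ≈ (3/35)² ≈ 0.01, e.g. an O(1) coupling to muons times a percent-level flavor-violating coupling to quarks, which is the kind of suppression one would expect for flavor violation anyway.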
So, the verdict is... well, at this point the anomaly is not utterly solid yet. One warning flag is that it shows up in a complicated angular analysis rather than in a clean and simple observable, which gives theorists and experimenters alike more opportunities to commit a subtle error in the analysis. Moreover, in order to explain the anomaly, the new physics contributions to the B → K*μμ amplitude need to be of the same order of magnitude as the Standard Model ones, which requires a certain degree of conspiracy. Most likely, the experimental data and the Standard Model predictions will approach each other as more data are analyzed, as has happened countless times in the past. Nevertheless, we're looking forward to the future updates on B → K*μμ with a little more anticipation than usual. Note that the current LHCb analysis includes only the 7 TeV run data; the twice-as-large 8 TeV sample is still waiting to see the light...
[Most pictures stolen from Nicola Serra's talk at EPS]
8 comments:
It all boils down to the error bars on the SM predictions. Alex Kagan's talk at DPF2013 had some slides on the updated analysis from Jager et al., with significantly larger errors than Descotes-Genon et al.
( https://indico.bnl.gov/getFile.py/access?contribId=13&sessionId=0&resId=0&materialId=slides&confId=603 )
Thanks for this very interesting analysis. It's a bit hard for a non-particle physicist to understand the background of the deviation from the LHCb paper. This critical note helped :)
We can't seriously claim the low-energy hadronic contributions to the B->K^* form factors are fully under control. Just don't trust the SM error bars... that's another aspect of the discreet charm of flavour physics.
To understand better the difference between an analysis using the P_i^prime observables and one using the S_i observables, it is very instructive to look at Fig. 11 of http://arxiv.org/pdf/1207.2753.pdf
On the left, P1 is computed with two different form factor parametrizations (you cannot tell them apart); on the right, S3 is computed using those same two parametrizations (green versus gray band). Now you understand the difference between a clean observable (P_i or P_iprime) and a form-factor-dependent one (S_i), and the difference in sensitivity/robustness.
Dear Anonymous (not really),
it is true that the S_i are more dependent on the choice of form factors than the P_i', so I'm all for using the P_i'. Still, it is reassuring that an analysis using the S_i does not see any spurious tensions that are not seen in the P_i' and that would indicate an underestimation of form factor uncertainties.
In any case, if the tensions are due to underestimated SM errors, it has to be due to some non-factorizable effects and not due to form factors, as the analysis with P_i' shows.
Cheers,
David
I am the not so anonymous :). Indeed, what is really relevant, and on this we fully agree, is that both analyses point clearly in the same direction. It is important to have two analyses (or even more) arriving at the same conclusions, of course with differences (obviously), but not in the main direction, and this is something unique to B->K*mumu.
It is nice, at least, to have empirical data that can drive BSM theories, instead of entirely model-driven experiments that simply lead to exclusion ranges for the parameters of BSM theories.
"So let's first translate to English. B0 is a pseudoscalar meson made of an anti-b- and a d-quark that is easily found in the junk produced by LHC collisions."
So much clearer now! Elegant and jargon-free. Hmm...