The hashtag #CautiouslyExcited is trending on Twitter, in spite of the raging plague. The updated RK measurement from LHCb has made a big splash and has been covered by every news outlet. RK is the ratio of the B->Kμμ and B->Kee decay probabilities, which the Standard Model predicts to be very close to one. Using all the data collected so far, LHCb instead finds RK = 0.846 with an error of 0.044. This is the same central value as, and a 30% smaller error than, their 2019 result based on half of the data. Statistically speaking, the update does not much change the global picture of the B-meson anomalies. However, it has an important psychological impact, which goes beyond the PR story of crossing the 3 sigma threshold. Let me explain why.
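A quick back-of-envelope check of these numbers (illustrative only: the official significance comes from the full likelihood, not this naive Gaussian arithmetic):

```python
import math

# Naive Gaussian significance from the numbers quoted above.
rk, err, sm = 0.846, 0.044, 1.0
z = (sm - rk) / err
print(f"naive deviation: {z:.1f} sigma")  # 3.5 in this crude estimate

# Statistical errors shrink like 1/sqrt(N): doubling the dataset should
# cut the error by 1 - 1/sqrt(2) ~ 29%, consistent with the ~30% quoted.
shrink = 1 - 1 / math.sqrt(2)
print(f"expected error reduction from doubling the data: {shrink:.0%}")
```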
For the last few decades, every deviation from a Standard Model prediction in a particle collider experiment has meant one of three things:
1. Statistical fluctuation.
2. Flawed theory prediction.
3. Experimental screw-up.
In the case of RK, option 2 is not a worry. Yes, flavor physics is a swamp full of snake pits, but in the RK ratio the dangerous hadronic uncertainties cancel out to a large extent, so precise theoretical predictions are possible. Before March 23 the biggest worry was option 1. Indeed, 2-3 sigma fluctuations happen all the time at the LHC, simply because a huge number of measurements are being taken. However, you expect a statistical fluctuation to decrease in significance as more data is collected. This is what seems to be happening to the sister RD anomaly, and the earlier history of RK was not very encouraging either (in the 2019 update the significance neither increased nor decreased). The fact that, this time, the significance of the RK anomaly increased, more or less as you would expect if it were a genuine new physics signal, makes it unlikely that it is merely a statistical fluctuation. This is the main reason for the excitement you may perceive among particle physicists these days.
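A toy Monte Carlo version of this argument. Suppose the first half-dataset gave RK = 0.845 with an error of 0.062 (both numbers are illustrative assumptions, not LHCb inputs), a ~2.5 sigma dip. What combined significance do we expect after doubling the data if the dip was (a) a fluctuation around the SM value, or (b) a genuine signal?

```python
import random

random.seed(0)

SIGMA_HALF = 0.062   # assumed error of one half-dataset (illustrative)
FIRST_HALF = 0.845   # the "2019-like" central value (illustrative)

def expected_significance(true_rk, n=50_000):
    """Average significance of the combined dataset, given the first half."""
    err_full = SIGMA_HALF / 2**0.5   # error after doubling the data
    total = 0.0
    for _ in range(n):
        second_half = random.gauss(true_rk, SIGMA_HALF)
        combined = 0.5 * (FIRST_HALF + second_half)
        total += abs(1.0 - combined) / err_full
    return total / n

print(f"if a fluctuation (true RK = 1):     ~{expected_significance(1.0):.1f} sigma")
print(f"if a real signal (true RK = 0.845): ~{expected_significance(0.845):.1f} sigma")
```

With these assumed numbers, the fluctuation hypothesis predicts the combined significance to regress to roughly 1.8 sigma, while a genuine signal predicts growth to about 3.5 sigma, which is the pattern the data actually followed.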
On the other hand, option 3 remains a possibility. In their analysis, LHCb reconstructed 3850 B->Kμμ decays vs. 1640 B->Kee decays, yet from that they concluded that decays to muons are less probable than decays to electrons. This is because one has to take into account the different reconstruction efficiencies for muons and electrons. Estimating that efficiency is the most difficult ingredient of the measurement, and the LHCb folks have spent many nights of heavy drinking worrying about it. Of course, they have made multiple cross-checks and are quite confident that there is no mistake, but... there will always be a shadow of a doubt until RK is confirmed by an independent experiment. Fortunately for everyone, a verification will be provided by the Belle II experiment, probably 3-4 years from now. Only when Belle II sees the same thing will we breathe a sigh of relief and put all our money on option
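To see how more reconstructed muon decays can still mean RK < 1, here is a toy version of the efficiency correction. The efficiency ratio below is back-solved from the quoted numbers purely for illustration; it is not the LHCb number (LHCb constrains the efficiencies with resonant B->K J/ψ control samples):

```python
# Raw yields quoted above.
n_mumu, n_ee = 3850, 1640
raw_ratio = n_mumu / n_ee        # ~2.35: muon decays look more frequent...
eff_ratio = raw_ratio / 0.846    # ...mostly because muons are ~2.8x easier
                                 # to reconstruct than electrons (assumed,
                                 # back-solved for this illustration)
rk = (n_mumu / eff_ratio) / n_ee
print(f"raw ratio = {raw_ratio:.2f}, efficiency-corrected RK = {rk:.3f}")
```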
4. Physics beyond the Standard Model
From that point of view, explaining the RK measurement is trivial. All we need is to add to the Standard Model Lagrangian a new kind of interaction between b- and s-quarks and muons. For example, this 4-fermion contact term will do:

[Q̄3 γ_ρ Q2][L̄2 γ^ρ L2] / (40 TeV)^2
where Q3=(t,b), Q2=(c,s), L2=(νμ,μ). The Standard Model won't let you have this interaction because it violates one of its founding principles: renormalizability. But we know that the Standard Model is just an effective theory, and that non-renormalizable interactions must exist in nature, even if they are very suppressed so as to be unobservable most of the time. In particular, neutrino oscillations are best explained by certain dimension-5 non-renormalizable interactions. RK may be the first evidence that dimension-6 non-renormalizable interactions also exist in nature. The nice thing is that the interaction term above 1) does not violate any existing experimental constraints, 2) explains not only RK but also some other 2-3 sigma tensions in the data (RK*, P5'), and 3) fits well with some smaller 1-2 sigma effects (Bs->μμ, RpK, ...). The existence of a simple theoretical explanation and of a consistent pattern in the data is the other element that prompts cautious optimism.
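In effective-theory language, the dimension counting invoked here can be sketched as follows (Λ5 and Λ6 are heavy new-physics scales, H is the Higgs doublet, L a lepton doublet; O(1) coefficients are omitted):

```latex
\mathcal{L}_{\rm eff} \;\supset\;
  \frac{(H L)(H L)}{\Lambda_5}
  \;+\;
  \frac{[\bar Q_3 \gamma_\rho Q_2][\bar L_2 \gamma^\rho L_2]}{\Lambda_6^2}
  \;+\; \cdots
```

The first, dimension-5 term generates Majorana neutrino masses of order v²/Λ5 once the Higgs acquires its vacuum expectation value; the second is the RK-type dimension-6 term, here with Λ6 of order tens of TeV.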
The LHC run-3 is coming soon, and with it more data on RK. On a shorter timescale (less than a year?) there will be other important updates (RK*, RpK) and new observables (Rϕ, RK*+) probing the same physics. Finally, something to wait for.
Nice thoughts! About "some other 1-3 sigma tensions in the data": could some of them suffer from similar possible experimental screw-ups? In that case, would we expect to see a consistent pattern of deviations? Is there any possible experimental screw-up you would highlight?
Nice post.
May I ask on which side of the 'even money' bet you currently fall, Jester?
Bertie, I would still bet against new physics, but with much less conviction than, e.g., during the SUSY bets :) If I had to put a number on the odds, it would be 10-20% for new physics.
Anon, some of these observables in tension are not "clean", and the apparent deviations could be just due to wrong theoretical predictions. As for the clean ones, like RK* or RpK, I don't know the nuts and bolts of the analyses well enough, but naively it seems possible that an unknown "screw-up" in RK also feeds into those other analyses.
" But we know that the Standard Model is just an effective theory, and that non-renormalizable interactions must exist in nature"
You could even state that a non-renormalizable theory is the route to a deeper understanding; of course: Fermi -> QED.
Thanks for your thoughts!
Been a looooooong time, but isn't the coefficient of the dimension-five operator that gives rise to neutrino masses roughly the reciprocal of the GUT scale, which makes a fair bit of sense? What are you integrating out at 40 TeV?
That is the question. The two main proposals on the market are Z' bosons and leptoquarks, but all models so far are ugly to the point of being offensive.
Jester,
Is the LHCb anomaly in conflict with the previous report by the ATLAS collaboration?
http://resonaances.blogspot.com/2020/08/death-of-forgotten-anomaly.html
Ervin
The two are not directly related. LHCb sees lepton flavor universality violation in B-meson decays. LEP saw lepton flavor universality violation in W boson decays, but that is now ruled out by more precise measurements at the LHC and the Tevatron. In the past, some theorists tried to connect the two anomalies, and those models are now disfavored. But it's easy to cook up a model where the B-meson decays are affected while the W boson decays look SM-like.
"... it's easy to cook up a model where the B-meson decays are affected while the W boson decays look SM-like"
But isn't such a cooked-up model a contrived attempt to "force" BSM physics at the LHC?
No. Simplest models with a single Z' or a single leptoquark do not predict LFU violation in W decays. So there is no need for any conspiracy to avoid signals in W decays. More complicated models, where there is also a W' partner to Z', may affect W decays. But these are more contrived imo, although that is of course subjective.
I don't disagree; which BSM model appears more contrived at this point is clearly a matter of subjective bias.
With no clear direction in sight, we may soon see "ambulance chasing" turned on again, mirroring the case of the 750 GeV anomaly.
Nice to see new posts here. This is an excellent post.
I have a comment/question.
You write about neutrino oscillations being best described by a dimension-5 non-renormalizable interaction. My understanding is that when such an operator is used and non-zero Dirac terms are allowed, e.g. the type-1 see-saw mechanism, then the theory is renormalizable. I'm not a neutrino physicist, but I've found the see-saw mechanism to be the most plausible extension of the SM, not least since it seems to keep the renormalisability criterion used to build the SM in the first place.
Am I wrong about a type-1 see-saw theory being renormalisable?
You're totally right: the type-1 see-saw is a renormalizable UV completion of the SM. What I meant in this passage is an alternative viewpoint, which is however equivalent for all practical purposes. At the electroweak scale, 100-1000 GeV, the only available degrees of freedom are, most likely, those of the SM. The additional neutrinos that come with the see-saw are heavy (again, most likely), maybe even as heavy as 10^15 GeV. Therefore we cannot produce them in our colliders. All the effects of these heavy neutrinos that we can see are adequately described by the said dimension-5 operators. This is the effective-theory way of thinking, where we get rid of all heavy, unavailable degrees of freedom in the theory, at the price of losing renormalizability. This is practical and theoretically ok, until one day we reach the scale where we can produce the new particles from beyond the SM, be it the see-saw neutrinos or something else.
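To put numbers on it, a back-of-envelope see-saw estimate (the O(1) Yukawa and the 10^15 GeV heavy mass are the same illustrative assumptions as above): integrating out a heavy neutrino of mass M generates the dimension-5 operator, which after electroweak symmetry breaking gives m_ν ~ y²v²/M.

```python
v = 246.0    # GeV, Higgs vacuum expectation value
M = 1e15     # GeV, assumed heavy-neutrino (see-saw) mass
y = 1.0      # assumed O(1) Yukawa coupling
m_nu_ev = y**2 * v**2 / M * 1e9   # convert GeV -> eV
print(f"m_nu ~ {m_nu_ev:.2f} eV")  # ~0.06 eV, the ballpark suggested by oscillation data
```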
There are significant backgrounds in the nonresonant ee channel. I wonder if the 'experimental screwup' could be mismodelled backgrounds / signal shape rather than bad efficiencies, for which I find the cross-checks quite convincing.
"But we know that the Standard Model is just an effective theory, and that non-renormalizable interactions must exist in nature, even if they are very suppressed so as to be unobservable most of the time."
How can you be so sure? QCD seems to work on all scales, for example. Even if you find some perturbatively non-renormalizable interaction, you could still get away with it if there's a non-trivial fixed point of the RG flow. I'm not saying that the latter possibility is more plausible, and I'm an expert in neither. But after decades of not finding any clear signs of New Physics, maybe we should re-entertain the notion that the Standard Model is in fact fundamental, and that we just don't understand yet how it all comes about.
I actually agree that one should entertain the possibility that the SM is more fundamental than we think. However, neutrino oscillations strongly suggest that at least dimension-5 non-renormalizable operators should be added to the SM Lagrangian (the other option - Dirac neutrinos - is much uglier theoretically). My "we know" is mostly based on this insight, plus some circumstantial evidence from dark matter and baryogenesis. Note that I'm not saying that the true fundamental theory is non-renormalizable - my statement refers to the effective theory at the electroweak scale.
"the other option - Dirac neutrinos - is much uglier theoretically"
Uglier in which sense? To me, the SM looks prettier with Dirac neutrinos than without. Is this an eye-of-the-beholder thing?
Yes, this is subjective to a large extent. But there is one objective point here. Once you add right-handed neutrinos, the local symmetries of the SM allow you to write down the Majorana mass term for them. So pure Dirac neutrinos are not natural from the theoretical point of view, unless you do more gymnastics, for example declare that the global symmetry L or B-L is exact. In this sense Dirac neutrinos are uglier.
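Concretely, the objective point is that a right-handed neutrino N is a singlet under all the SM gauge groups, so nothing forbids the mass term

```latex
\mathcal{L} \;\supset\; -\tfrac{1}{2}\, M\, \overline{N^c} N \;+\; \text{h.c.}
```

which violates lepton number by two units; banishing it, to keep neutrinos purely Dirac, requires promoting L (or B-L) to an exact symmetry by hand.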