April 7, 2021 was like a good TV episode: high-speed action, plot twists, and a cliffhanger ending. We now know that the strength of the little magnet inside the muon is described by the g-factor:

g = 2.00233184122(82).

Any measurement of basic properties of matter is priceless, especially when it comes with such incredible precision. But for a particle physicist the main source of excitement is that this result could herald the breakdown of the Standard Model. The point is that the g-factor, or equivalently the magnetic moment, of an elementary particle can be calculated theoretically to very good accuracy. Last year, the white paper of the *Muon g−2 Theory Initiative* came up with a consensus value for the Standard Model prediction

g = 2.00233183620(86),

which is significantly smaller than the experimental value. The discrepancy is estimated at 4.2 sigma, assuming the theoretical error is Gaussian and combining the errors in quadrature.
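As a quick sanity check, the quoted significance follows directly from the two g values and their errors (the numbers below are taken from the text; the errors apply to the last digits, i.e. units of 10^-11):

```python
from math import sqrt

# g-factor values quoted above; errors are on the last digits (units of 1e-11)
g_exp, err_exp = 2.00233184122, 82e-11   # Fermilab + Brookhaven experimental value
g_sm,  err_sm  = 2.00233183620, 86e-11   # Theory Initiative white paper value

delta = g_exp - g_sm                     # absolute discrepancy
err   = sqrt(err_exp**2 + err_sm**2)     # errors combined in quadrature
sigma = delta / err
print(f"discrepancy = {delta:.2e}, significance = {sigma:.1f} sigma")  # ~4.2 sigma
```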

As usual, when we see an experiment and the Standard Model disagree, three things come to mind first:

- Statistical fluctuation.
- Flawed theory prediction.
- Experimental screw-up.

The dominant uncertainty in the Standard Model prediction comes from the *hadronic vacuum polarization*, which in the white paper is related by theoretical tricks to low-energy electron scattering and determined from experimental data. However, the currently most precise lattice evaluation of the same quantity gives a larger value that would bring the Standard Model prediction closer to the experiment. The lattice paper first appeared a year ago but was only now published in Nature, in a well-timed move that can be compared to an ex crashing a wedding party. The theory and experiment are now locked in a three-way duel, and we are waiting for the shootout to see which theoretical prediction survives. Until this controversy is resolved, a cloud of doubt will hang over every interpretation of the muon g-2 anomaly.

But let us assume for a moment that the white paper value is correct. This would be huge, as it would mean that the Standard Model does not fully capture how muons interact with light. The correct interaction Lagrangian would have to be (pardon my Greek)
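The Lagrangian itself was an image in the original post and does not survive in this text; a schematic reconstruction (my notation, normalization illustrative) is

$$\mathcal{L} \supset e\,\bar\mu\,\gamma^\nu \mu\, A_\nu \;+\; \frac{1}{\Lambda}\,\bar\mu\,\sigma^{\nu\rho}\mu\,F_{\nu\rho},$$

with the first term the minimal coupling and the second the magnetic dipole, suppressed by a large scale $\Lambda$.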

The first term is the renormalizable minimal coupling present in the Standard Model, which gives the Coulomb force and all the usual electromagnetic phenomena. The second term is called the magnetic dipole. It leads to a small shift of the muon g-factor, of the right size to explain the Brookhaven and Fermilab measurements. This is a non-renormalizable interaction, and so it must be an effective description of virtual effects of some new particle from beyond the Standard Model. Theorists have invented countless models for this particle in order to address the old Brookhaven measurement, and the Fermilab update changes little in this enterprise. I will write about that another time. For now, let us just crunch some numbers to highlight one general feature. Even though the scale suppressing the effective dipole operator is in the EeV range, there are indications that the culprit particle is much lighter than that. First, electroweak gauge invariance forces its mass to be less than ~100 TeV in a rather model-independent way. Next, in many models contributions to muon g-2 come with a *chiral suppression* proportional to the muon mass. Moreover, they typically appear at one loop, so the operator will pick up a loop suppression factor unless the new particle is strongly coupled. The same dipole operator as above can be more suggestively recast as
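The recast equation was also an image in the original post; schematically, pulling out the chiral and one-loop factors just discussed (the coupling $g_X$ of the hypothetical new particle is my notation):

$$\frac{1}{\Lambda}\,\bar\mu\,\sigma^{\nu\rho}\mu\,F_{\nu\rho} \;\sim\; \frac{g_X^2}{16\pi^2}\,\frac{m_\mu}{(300~\mathrm{GeV})^2}\,\bar\mu\,\sigma^{\nu\rho}\mu\,F_{\nu\rho},$$

which for $g_X \sim 1$ corresponds to an effective suppression scale in the $10^8$ GeV ballpark.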

The scale 300 GeV appearing in the denominator indicates that the new particle should be around the corner! Indeed, the discrepancy between the theory and experiment is *larger* than the contribution of the W and Z bosons to the muon g-2, so it seems logical to put the new particle near the electroweak scale. That's why the stakes of the April 7 Fermilab announcement are so enormous. If the gap between the Standard Model and experiment is real, the new particles and forces responsible for it should be within reach of the present or near-future colliders. This would open a new experimental era that is almost too beautiful to imagine. And for theorists, it would bring new pressing questions about who ordered it.

## 35 comments:

and yet... where do you put your money? ;p

I think it has a very good chance to be real. But it's hard to pick a model - nothing strikes me as attractive.

Good to see so much activity here! "It combines ... non-perturbative inputs from dispersion relations, phenomenological models ..." Which "phenomenological models"?

Chiral perturbation theory. In addition, some HLbL calculations use fancier inputs, like Regge models, but my understanding is that the reliance on these is diminishing, thanks to the lattice and more controllable dispersive techniques.

I think you have to cyclic shift the cast of characters to the left :-) What happened today is that the good experiment killed the bad theory prediction and splits the gold with the ugly one. It will probably take a few years to convince those who bet their career on it, but if you look at it without the hype-and-excitement goggles it's pretty clear already.

It's kind of sad to see what road particle physics is heading down these days. If there were two competing predictions a generation or so ago and the experiment vindicated one, the question was basically settled. These days we seem to make a living off holding on to the last straw as long as possible.

I disagree: the ugly is the one who digs, so he is clearly an experimentalist.

Why does the g-2 anomaly have to be BSM physics, and not a manifestation of non-equilibrium dynamics near or above the Fermi scale?

https://www.researchgate.net/publication/343426044_Derivation_of_the_Muon_g-2_Anomaly_from_Non-Equilibrium_Dynamics

"f there were two competing predictions [...]"

Exactly this! Imagine a world with no BNL experiment but two theoretical predictions for g-2 based on different approaches. Let Fermilab come in: how do you think the community would've reacted? I find it hard to imagine dispersive theory + data can do a better job than 'ab initio' QCD lattice calculations. At some point we should start trusting the computers ;)

@anonymous: you can also imagine how dispersive theory + data, reproduced independently by different groups, could be more trustworthy than an ab initio lattice calculation pushed to its limits with a variety of new methods by a single group who already had to revise their value following an error. Luckily the imagination of pundits is irrelevant, and we just have to wait for experts to sort through the technical details. In the meantime I would say neither extreme scepticism nor extreme optimism is justified.

Are you aware of another lattice work attracting so much attention or being so relevant in the short term (in the meantime, no other HVP lattice result verifies or falsifies it)? (Well, answering this question requires some subjectivity...)

NOTHING has ever attracted as much attention as g-2, but the phenomenon of important and sometimes controversial lattice papers is not new. One recent example is the CalLat calculation of gA, which was also published in Nature.

I think that we have to be Bayesians here and curb our enthusiasm either way. We have to test new theories with an increased number of free variables against all relevant data. As Jester and others have discussed previously, the electron's anomalous magnetic dipole moment is in the opposite direction, i.e. the experimental value is less than the theory. Plus, there are a number of other particle physics experimental values that should be included when testing new models.

Models that can fit all of the data will likely require a number of new free parameters.

In the Bayesian Information Criterion (BIC), you "punish" theories with more free variables, such that they have to significantly decrease the chi squared fit in order to make up for additional free variables and how they fit all relevant data.

The Standard Model should be the Standard Model until a new theory can achieve a 5-sigma lower BIC value... when including all relevant experimental measurements in the calculation of the chi-squared, and when including the term in the BIC equation that "punishes" theories for additional free parameters.

The experimental team at Fermilab pulled off an amazing experimental feat. But they should have done more to contain the media hype about breaking the Standard Model. Experimental measurements don't break a model... what breaks a model is a new model that fits all of the data better with as few free parameters as possible.
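The BIC comparison described in the comment above can be sketched as follows. For Gaussian errors, the BIC reduces (up to an additive constant) to the chi-squared of the fit plus a complexity penalty; all numbers below are placeholders, not real fit results:

```python
from math import log

def bic(chi2, n_params, n_data):
    """Bayesian Information Criterion for a Gaussian fit (up to an additive
    constant): chi-squared plus a penalty of ln(n_data) per free parameter."""
    return chi2 + n_params * log(n_data)

# Placeholder numbers: a 'new physics' model must improve chi2 enough
# to overcome the penalty for its extra free parameters.
bic_sm  = bic(chi2=120.0, n_params=19, n_data=100)
bic_new = bic(chi2=95.0,  n_params=22, n_data=100)
print("new model preferred" if bic_new < bic_sm else "SM preferred")
```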

Would that indicate that New Physics preferentially couples to light particles rather than heavy ones (top, bottom, W/Z..), or just that this is the first occurrence that we noticed historically?

My thoughts on yesterday's events are :

(1) The Fermilab management, g-2 experiment folk and members of the Theory Initiative will be suffering from quite a hangover today. The party didn't go according to plan. There will be discussions as to whether this was avoidable or not.

(2) To the outside world yesterday's announcement can look like g-2 and the Theory Initiative cherry-picking their favourite calculations. I know that plausible reasons were given for not including the lattice calculation (e.g. it needs to be reproduced). However, it would be good to be convinced that these were the real reasons for not including it, i.e. would the lattice calculation have been shown if it had agreed with the other estimates?

(3) A lot of progress has been made on the g-2 issue. We now have more confidence that the earlier experimental result was correct. The accuracy of the SM theory calculations remains the biggest obstacle in freely interpreting the anomaly as evidence for new physics. To an extent this was true before yesterday but only because we had (IMO) taken the accuracy of the earlier measurement for granted.

Regardless of the final outcome of the g-2 anomaly, the FNAL report is a "shot in the arm" for both experimental and theoretical particle physics. And it is a good reason to celebrate.

Riccardo, I think that the fantastic precision of some muon observables is the reason why we see new physics there first. One cannot make a case yet that new physics couples more strongly to muons than to the 3rd generation.

Definitely we need a new muon collider.

Concerning the reliability of the lattice results of Fodor et al (https://arxiv.org/pdf/2002.12347.pdf): their plot 3 shows that they agree reasonably well with OTHER lattice estimations (which have larger error bars). They do acknowledge that there's a tension with the R-ratios, but it does not appear to be that significant.

Where could I read about this type of reasoning? Should the minimal coupling and dipole moment just be added in the QED equation [1]? And why does recasting the dipole moment in the Coulomb-like form more suggestively indicate a new particle around the corner?

[1]:https://en.wikipedia.org/wiki/List_of_equations_in_nuclear_and_particle_physics#Fundamental_forces


Jester: "I disagree: the ugly is the one who digs, so he is clearly an experimentalist."

Angel Eyes: "Two can dig a lot quicker than one...."

Anonymous: "by a single group who already had to revise their value following an error."

You probably heard this argument made in the FNAL theory talk by El-Khadra. Go check the YouTube version

https://www.youtube.com/watch?v=81PfYnpuOPA

and you will see that they have edited that statement out. They probably did it because it is simply wrong.

The published value is off by 0.2 sigma from the preprint value. After one year of intense community feedback on the analysis in the preprint this is really not surprising.

Daniel, the important point is the scale 300 GeV appearing in the denominator. It means that the g-2 anomaly can be explained by a 300 GeV particle with order one coupling to muons, or a 3 TeV particle with order 10 coupling to muons, or a 30 GeV particle with order 0.1 coupling to muons.... The underlying concepts are shortly explained in https://en.wikipedia.org/wiki/Effective_field_theory but to go deeper one unfortunately has to refer to technical review papers.
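Jester's scaling above can be made explicit: in this EFT estimate the required coupling grows linearly with the new particle's mass. A rough sketch (`coupling_for_mass` is an illustrative helper, not from any paper):

```python
def coupling_for_mass(mass_gev):
    """Rough EFT scaling from the comment above: a 300 GeV particle needs an
    O(1) coupling to muons, and the required coupling grows linearly with mass."""
    return mass_gev / 300.0

for m in (30.0, 300.0, 3000.0):
    print(f"M = {m:6.0f} GeV  ->  coupling ~ {coupling_for_mass(m):.1f}")
    # prints couplings ~0.1, ~1.0, ~10.0, matching the comment's examples
```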

Yes, you should add the dipole to the QED equations. It's actually more subtle: you have to do that anyway, because heavier Standard Model particles effectively induce the dipole too. If the anomaly is real, we have to add *more* of a dipole than what the Standard Model prescribes.

Unknown: the BMW value has changed by one sigma after one year. More precisely, in units of 10^-11 they quote a_\mu^{LO-HVP} = 7124(45), 7087(53), 7075(55) in V1 (Feb'20), V2 (Aug'20), and Nature (Apr'21), respectively. Not that I think that there is anything wrong with that - it is completely healthy that preprint values evolve slightly after feedback and criticism from the community.
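A quick check of the numbers quoted in this exchange (shift between the first preprint and the published value, measured in units of the V1 error):

```python
# a_mu^{LO-HVP} values and errors quoted above, in units of 1e-11
versions = {
    "V1 (Feb'20)":     (7124, 45),
    "V2 (Aug'20)":     (7087, 53),
    "Nature (Apr'21)": (7075, 55),
}

v1, err1 = versions["V1 (Feb'20)"]
nat, _ = versions["Nature (Apr'21)"]
shift = (v1 - nat) / err1
print(f"V1 -> Nature shift: {shift:.1f} sigma (in units of the V1 error)")  # ~1.1 sigma
```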

Thanks, I had heard of effective field theory but not of the Standard Model inducing dipoles; googling, I found https://en.wikipedia.org/wiki/Electron_electric_dipole_moment which seems to contain terms like the ones you had written here.

I realize I forgot to ask about what you said before suggesting the 300 GeV scale: you say that, to start with, electroweak gauge invariance forces it to be less than ~100 TeV. Why is that? I googled it too but couldn't find the reason.

I didn't explain it, and it's not trivial. The point is that the dipole operator displayed in the blog post is not gauge invariant under the full SU(3)xSU(2)xU(1) symmetry of the Standard Model. To make it gauge invariant you need to include the Higgs field H, and the operator becomes ~ 1/(100 TeV)^2 H (\bar L \sigma_\mu\nu \mu_R) B_\mu\nu, where L is the lepton doublet containing the left-handed muon and the muon neutrino. If you replace the Higgs with its vacuum expectation value, <H> = (0,v), v ~ 200 GeV, you obtain the muon dipole operator from the blog post. Because the scale appearing in the denominator of the gauge invariant operator is 100 TeV, the maximum mass of the particle that generates it is 100 TeV. Sorry if it's too technical but I have no good idea how to explain it better.

I wouldn't be able to explain why the term looks like that, or to prove that replacing the Higgs with its vacuum expectation value recovers the muon dipole operator you had previously written, but overall I think I get your explanation, so I appreciate it.

I wouldn't like to spam you with questions, but your answer gave me another doubt I must ask. As I understand it, the FF term in the QED equation represents the electromagnetic fields which arise after imposing gauge invariance, so do you mean that the Higgs field can also be obtained in this way from the dipole term? I had the idea that the Higgs field was just artificially added to give mass to something called the Goldstone boson, which appears from symmetry breaking, or something like that (although I believe this simplified idea is wrong, because how could Weinberg have predicted the Z and W masses if he had no Higgs to give them mass when he did it?).

"... assume ... that the Standard Model does not fully capture how muons interact with light ..."

My guess is that the experimental result indicates that the theorists need to incorporate virtual axions into their computer calculations. Consider a crude argument based upon profound ignorance.

2.0023318412 / 2.0023318362 = 1.0000000025 approximately.

The conventional wisdom is that the axion rest mass is in the range 10^–5 to 10^–3 eV/c^2 .

The muon rest mass = 105.6583755(23) MeV/c^2 .

10^–4 / (105.6583755 * 10^6 ) = 9.464465 * 10^–13 approximately.

(9.464465 * 10^–13)^(2/3) = 9.8 * 10^–9 approximately.

9.8 * 10^–9 compared to 2.5 * 10^–9 indicates that some cross section of the muon is reacting to the axion field????

@ David Brown

Keep in mind that axions are pseudoscalar particles whose existence is far from being conclusively proven. Their interaction with electrons is extremely weak, with a coupling constant of order 10^(-13):

https://arxiv.org/pdf/1311.1669.pdf

https://arxiv.org/pdf/1605.07668.pdf

Although solar axions offer an interesting explanation for the XENON1T excess (if the background effects are excluded), their existence appears to conflict with astrophysical data, see e.g.

https://arxiv.org/pdf/2006.09721.pdf

https://journals.aps.org/prl/pdf/10.1103/PhysRevLett.125.131806

This paper discusses the difficulties for the axion explanation of muon g-2: http://arxiv.org/abs/2104.03267

Daniel: here comes another technical explanation :) In the Standard Model the role of the Higgs field is indeed to give masses, not only to W and Z, but also to fermions. In this latter case you can also think of the Higgs as the agent of chirality violation. Without the Higgs field chirality would be conserved, that is to say, left-handed polarized fermions would always stay left-handed, and idem for the right-handed ones. Why is this relevant? Because the dipole term I wrote violates chirality, flipping left-handed muons into right-handed ones, and vice-versa. This is a heuristic argument why the Higgs is involved in this story.

I'm grateful for your responses, I learned a lot, thank you very much!

In the theoretical prediction, I noticed, contributions from the individual gauge sectors are mentioned. I was just wondering what happened to the cross terms from gauge mixing.

They are included. If my understanding is correct, mixed EW/QED contributions are a part of a_\mu^{EW}, and mixed QCD/QED contributions are a part of a_\mu^{HVP,NLO}.

Thanks for another great HEP western story Jester! Could it help if the Good and the Bad sorted out their problem before taking it up with the Ugly?)
