Tuesday, 19 April 2022

How large is the W mass anomaly?

Everything is larger in the US: cars, homes, food portions, people. The CDF collaboration from the now defunct Tevatron collider argues that this phenomenon is rooted in fundamental physics: 

The plot shows the most precise measurements of the mass of the W boson - one of the fundamental particles of the Standard Model. The lone wolf is the new CDF result. It is clear that the W mass is larger around the CDF detector than in the canton of Geneva, and the effect is significant enough to be considered evidence. More quantitatively, the CDF result is

  • 3.0 sigma above the most precise LHC measurement by the ATLAS collaboration. 
  • 2.4 sigma above the more recent LHC measurement by the LHCb collaboration. 
  • 1.7 sigma above the combined measurements from the four collaborations of the LEP collider. 

All in all, the evidence that the W boson is heavier in the US than in Europe stands firm. (For the sake of the script I will not mention here that the CDF result is also 2.4 sigma larger than the other Tevatron measurement from the D0 collaboration, and 2.2 sigma larger than... the previous CDF measurement from 10 years before.) 

But jokes aside, what should we make of the current confusing situation? The tension between CDF and the combination of the remaining mW measurements is a whopping 4.1 sigma. What value of mW should we then use in Standard Model fits and new physics analyses? Certainly not the CDF one, some 6.5 sigma away from the Standard Model prediction, because that value does not take into account the input from the other experiments. At the same time we cannot just ignore CDF. In the end we do not know for sure who is right and who is wrong here. While most physicists tacitly assume that CDF has made a mistake, it is also conceivable that the other experiments have been suffering from confirmation bias. Finally, a naive combination of all the results is not a sensible option either. Indeed, at face value the Gaussian combination leads to mW = 80.410(7) GeV. This value, however, is not very meaningful from the statistical perspective: it is impossible to state, with 68 percent confidence, that the true value of the W mass is between 80.403 and 80.417 GeV. That range doesn't even overlap with either of the two most precise measurements, from CDF and ATLAS! (One should also be careful with Gaussian combinations because there can be subtle correlations between the different experimental results. Numerically, however, this should not be a big problem in the case at hand, as in the past the W mass results obtained via naive combinations were in fact very close to the more careful averages by the Particle Data Group.) Due to the disagreement between the experiments, our knowledge of the true value of mW is degraded, and the combination should somehow account for that.
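To see where the 80.410(7) GeV figure comes from, here is a minimal sketch of the naive inverse-variance-weighted (Gaussian) combination. The individual central values and errors are the published numbers as I recall them, so treat them as illustrative inputs rather than an authoritative compilation:

```python
import math

# W mass measurements in MeV: (central value, uncertainty).
# Inputs are the published results as I recall them -- illustrative only.
measurements = {
    "CDF 2022": (80433.5, 9.4),
    "ATLAS":    (80370.0, 19.0),
    "LHCb":     (80354.0, 32.0),
    "LEP":      (80376.0, 33.0),
    "D0":       (80375.0, 23.0),
}

# Inverse-variance weights for the naive Gaussian combination
weights = {name: 1.0 / err**2 for name, (_, err) in measurements.items()}
wsum = sum(weights.values())

# Weighted mean and its uncertainty
mean = sum(w * measurements[name][0] for name, w in weights.items()) / wsum
error = 1.0 / math.sqrt(wsum)

print(f"naive combination: mW = {mean:.1f} +/- {error:.1f} MeV")
# with these inputs: roughly mW = 80410 +/- 7 MeV
```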

The question of combining information from incompatible measurements is a delicate one, residing at the boundary between statistics, psychology, and art. Contradictory results are rare in collider physics, because of the small number of experiments and the high level of scrutiny. However, they are common in other branches of physics, to mention just the neutron lifetime or the electron g-2 as recent examples. To deal with such unpleasantness, the Particle Data Group developed a totally ad hoc but very useful procedure. The idea is to penalize everyone in a democratic way, assuming that all experimental errors have been underestimated. More quantitatively, one inflates the errors of all the involved results until the χ^2 per degree of freedom in the combination is equal to 1. Applying this procedure to the W mass measurements, it is necessary to inflate the errors by a factor of S=2.1, which leads to mW = 80.410(15) GeV.
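In formulas, the prescription boils down to a common scale factor (here N is the number of measurements entering the combination):

$$ S = \sqrt{\frac{\chi^2_{\rm min}}{N-1}}\,, \qquad \sigma_i \to S\,\sigma_i\,, $$

which by construction brings the χ^2 per degree of freedom down to 1; with the W mass inputs above this works out to S ≈ 2.1, inflating the naive 7 MeV error to roughly 15 MeV.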

The inflated result makes more intuitive sense, since the combined 1 sigma band overlaps with the most precise CDF measurement, and lies close enough to the error bars from the other experiments. If you accept that combination, the tension with the Standard Model stands at 3 sigma. This value represents the current situation fairly well: it is large enough to warrant further interest, but not large enough to claim a discovery of new physics beyond the Standard Model.

The confusion may stay with us for a long time. It will go away if CDF finds an error in their analysis, or if future ATLAS updates shift mW significantly upwards. But the most likely scenario, in my opinion, is that the Europe/US divide will only grow with time. The CDF result could be eliminated from the combination once other experiments reach a significantly better precision. Unfortunately, this is unlikely to happen in the foreseeable future; new colliders and better theory calculations may be necessary to shrink the error bars well below 10 MeV. The conclusion is that particle physicists should shake hands with their nuclear colleagues and start getting used to the S-factors.

Thursday, 8 April 2021

Why is it when something happens it is ALWAYS you, muons?

April 7, 2021 was like a good TV episode: high-speed action, plot twists, and a cliffhanger ending. We now know the strength of the little magnet inside the muon, as quantified by its g-factor:

g = 2.00233184122(82).

Any measurement of basic properties of matter is priceless, especially when it comes with such incredible precision. But for a particle physicist the main source of excitement is that this result could herald the breakdown of the Standard Model. The point is that the g-factor, or the magnetic moment, of an elementary particle can be calculated theoretically to very good accuracy. Last year, the white paper of the Muon g−2 Theory Initiative came up with the consensus value for the Standard Model prediction

g = 2.00233183620(86),

which is significantly smaller than the experimental value.  The discrepancy is estimated at 4.2 sigma, assuming the theoretical error is Gaussian and combining the errors in quadrature. 
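For the record, the arithmetic behind that number uses nothing beyond the two values quoted above:

$$ \frac{g_{\rm exp}-g_{\rm SM}}{\sqrt{\sigma_{\rm exp}^2+\sigma_{\rm SM}^2}} = \frac{(4122-3620)\times 10^{-11}}{\sqrt{82^2+86^2}\times 10^{-11}} \approx \frac{502}{119} \approx 4.2\,. $$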

As usual, when we see an experiment and the Standard Model disagree, these three things come to mind first:

  1.  Statistical fluctuation. 
  2.  Flawed theory prediction. 
  3.  Experimental screw-up.   

The odds for 1. are extremely low in this case. 3. is not impossible but unlikely as of April 7. Basically the same experiment was repeated twice, first at Brookhaven 20 years ago and now at Fermilab, yielding very consistent results. One day it would be nice to get an independent confirmation using alternative experimental techniques, but we are not losing any sleep over it. It is fair to say, however, that 2. has not yet been written off by most of the community. The process leading to the Standard Model prediction is of enormous complexity. It combines technically challenging perturbative calculations (5-loop QED!), data-driven methods, and non-perturbative inputs from dispersion relations, phenomenological models, and lattice QCD. One especially difficult contribution to evaluate is due to loops of light hadrons (pions etc.) affecting photon propagation. In the white paper, this hadronic vacuum polarization is related by theoretical tricks to low-energy electron scattering and determined from experimental data. However, the currently most precise lattice evaluation of the same quantity gives a larger value, which would take the Standard Model prediction closer to the experiment. The lattice paper first appeared a year ago but was only now published in Nature, in a well-timed move that can be compared to an ex crashing a wedding party. The experimental result and the two theoretical predictions are now locked in a three-way duel, and we are waiting for the shootout to see which prediction survives. Until this controversy is resolved, there will be a cloud of doubt hanging over every interpretation of the muon g-2 anomaly.

  But let us assume for a moment that the white paper value is correct. This would be huge, as it would mean that the Standard Model does not fully capture how muons interact with light. The correct interaction Lagrangian would have to be (pardon my Greek)
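(Schematically, and with the size of the new coefficient only indicative of what is needed to match the measurement:)

$$ \mathcal{L}_{\mu\gamma} \;=\; e\,A_\rho\,\bar\mu\,\gamma^\rho\,\mu \;+\; \frac{1}{\Lambda}\,\bar\mu\,\sigma_{\rho\sigma}\,\mu\,F^{\rho\sigma}\,, \qquad \Lambda \sim 10^{8}\!-\!10^{9}\;{\rm GeV}. $$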

The first term is the renormalizable minimal coupling present in the Standard Model, which gives the Coulomb force and all the usual electromagnetic phenomena. The second term is called the magnetic dipole. It leads to a small shift of the muon g-factor, which can explain the Brookhaven and Fermilab measurements. This is a non-renormalizable interaction, and so it must be an effective description of the virtual effects of some new particle from beyond the Standard Model. Theorists have invented countless models for this particle in order to address the old Brookhaven measurement, and the Fermilab update changes little in this enterprise. I will write about it another time. For now, let us just crunch some numbers to highlight one general feature. Even though the scale suppressing the effective dipole operator is in the EeV range, there are indications that the culprit particle is much lighter than that. First, electroweak gauge invariance forces it to be lighter than ~100 TeV in a rather model-independent way. Next, in many models contributions to the muon g-2 come with a chiral suppression proportional to the muon mass. Moreover, they typically appear at one loop, so the operator will pick up a loop suppression factor unless the new particle is strongly coupled. The same dipole operator as above can be more suggestively recast as
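(again schematically, with the chiral and loop suppression factors just mentioned pulled out, and up to order-one factors)

$$ \frac{1}{\Lambda}\,\bar\mu\,\sigma_{\rho\sigma}\,\mu\,F^{\rho\sigma} \;\sim\; \frac{m_\mu}{16\pi^2\,(300\;{\rm GeV})^2}\,\bar\mu\,\sigma_{\rho\sigma}\,\mu\,F^{\rho\sigma}\,. $$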

The scale 300 GeV appearing in the denominator indicates that the new particle should be around the corner!  Indeed, the discrepancy between the theory and experiment is larger than the contribution of the W and Z bosons to the muon g-2, so it seems logical to put the new particle near the electroweak scale. That's why the stakes of the April 7 Fermilab announcement are so enormous. If the gap between the Standard Model and experiment is real, the new particles and forces responsible for it should be within reach of the present or near-future colliders. This would open a new experimental era that is almost too beautiful to imagine. And for theorists, it would bring new pressing questions about who ordered it. 

Thursday, 1 April 2021

April Fools '21: Trouble with g-2

On April 7, the g-2 experiment at Fermilab was supposed to reveal their new measurement of the magnetic moment of the muon. *Was*, because the announcement may be delayed for the most bizarre reason. You may have heard that the data are blinded to avoid biasing the outcome. This is now standard practice, but the g-2 collaboration went further: they are unable to unblind the data by themselves, to make sure that there are no leaks or temptations. Instead, the unblinding procedure requires input from an external person, in this case one of the Fermilab theorists. How does this work? The experiment measures the frequency of precession of antimuons circulating in a ring. From that and the known magnetic field the sought fundamental quantity - the magnetic moment of the muon, or g-2 for short - can be read off. However, the whole analysis chain is performed using a randomly chosen number instead of the true clock frequency. Only at the very end, once all statistical and systematic errors are determined, is the true frequency inserted and the final result uncovered. For that last step they need to type the secret code into this machine, which looks like something from a 60s movie:

The code was picked by the Fermilab theorist, and he is the only person who knows it. There's the rub... this theorist now refuses to give away the code. It is not clear why. One time he said he had forgotten the envelope with the code on a train, another time he said the dog had eaten it. For the last few days he has locked himself in his home and completely stopped taking any calls.

The situation is critical. PhD students from the collaboration are working round the clock to crack the code. They are basically trying out all possible combinations, but the process is painfully slow and may take months, delaying the long-expected announcement. The collaboration even got permission from the Fermilab director to search the office of the said theorist. But all they found was this piece of paper behind the bookshelf:

It may be that the paper holds a clue about the code. If you have any idea what the code may be, please email fermilab@fnal.gov or just write it in the comments below.


Update: a part of this post (but strangely enough not all of it) is an April Fools joke. The new g-2 results are going to be presented on April 7, 2021, as planned. The code is OPE, which stands for "operator product expansion", an important technique used in the theoretical calculation of the hadronic corrections to the muon g-2.



Monday, 29 March 2021

Thoughts on RK

The hashtag #CautiouslyExcited is trending on Twitter, in spite of the raging plague. The updated RK measurement from LHCb has made a big splash and has been covered by every news outlet. RK measures the ratio of the B->Kμμ and B->Kee decay probabilities, which the Standard Model predicts to be very close to one. Using all the data collected so far, LHCb instead finds RK = 0.846 with an error of 0.044. This is the same central value and a 30% smaller error compared to their 2019 result, which was based on half of the data. Mathematically speaking, the update does not much change the global picture of the B-meson anomalies. However, it has an important psychological impact, which goes beyond the PR story of crossing the 3 sigma threshold. Let me explain why.

For the last few decades, every deviation from the Standard Model prediction in a particle collider experiment would mean one of these 3 things:    

  1. Statistical fluctuation. 
  2. Flawed theory prediction. 
  3. Experimental screw-up.   

In the case of RK, option 2. is not a worry. Yes, flavor physics is a swamp full of snake pits; however, in the RK ratio the dangerous hadronic uncertainties cancel out to a large extent, so that precise theoretical predictions are possible. Before March 23 the biggest worry was option 1. Indeed, 2-3 sigma fluctuations happen all the time at the LHC, due to the huge number of measurements being taken. However, you expect statistical fluctuations to decrease in significance as more data are collected. This is what seems to be happening to the sister RD anomaly, and the earlier history of RK was not very encouraging either (in the 2019 update the significance neither increased nor decreased). The fact that, this time, the significance of the RK anomaly increased, more or less as you would expect if it were a genuine new physics signal, makes it unlikely that it is merely a statistical fluctuation. This is the main reason for the excitement you may perceive among particle physicists these days.

On the other hand, option 3. remains a possibility. In their analysis, LHCb reconstructed 3850 B->Kμμ decays vs. 1640 B->Kee decays, yet from that they concluded that decays to muons are less probable than those to electrons. This is because one has to take into account the different reconstruction efficiencies for muons and electrons. An estimate of that efficiency is the most difficult ingredient of the measurement, and the LHCb folks have spent many nights of heavy drinking worrying about it. Of course, they have made multiple cross-checks and are quite confident that there is no mistake, but... there will always be a shadow of a doubt until RK is confirmed by an independent experiment. Fortunately for everyone, a verification will be provided by the Belle-II experiment, probably 3-4 years from now. Only when Belle-II sees the same thing will we breathe a sigh of relief and put all our money on option

4. Physics beyond the Standard Model 

From that point of view explaining the RK measurement is trivial.  All we need is to add a new kind of interaction between b- and s-quarks and muons to the Standard Model Lagrangian.  For example, this 4-fermion contact term will do: 
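(Written schematically in my own normalization, with the Wilson coefficient absorbed into the scale Λ, which the fits put at a few tens of TeV:)

$$ \mathcal{L} \;\supset\; \frac{1}{\Lambda^2}\,\big(\bar Q_3\,\gamma_\rho\,Q_2\big)\big(\bar L_2\,\gamma^\rho\,L_2\big) \;+\; {\rm h.c.}, $$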

where Q3=(t,b), Q2=(c,s), L2=(νμ,μ). The Standard Model won't let you have this interaction because it violates one of its founding principles: renormalizability.  But we know that the Standard Model is just an effective theory, and that non-renormalizable interactions must exist in nature, even if they are very suppressed so as to be unobservable most of the time.  In particular, neutrino oscillations are best explained by certain dimension-5 non-renormalizable interactions.  RK may be the first evidence that also dimension-6 non-renormalizable interactions exist in nature.  The nice thing is that the interaction term above 1) does not violate any existing experimental constraints,  2) explains not only RK but also some other 2-3 sigma tensions in the data (RK*, P5'),  and 3) fits well with some smaller 1-2 sigma effects (Bs->μμ, RpK,...). The existence of a simple theoretical explanation and a consistent pattern in the data is the other element that prompts cautious optimism.  

The LHC run-3 is coming soon, and with it more data on RK. In the shorter term (less than a year?) there will be other important updates (RK*, RpK) and new observables (Rϕ, RK*+) probing the same physics. Finally, something to wait for.

Saturday, 1 August 2020

Death of a forgotten anomaly

Anomalies come with a big splash, but often go down quietly. A recent ATLAS measurement, just posted on arXiv, killed a long-standing and by now almost forgotten anomaly from the LEP collider.  LEP was an electron-positron collider operating some time in the late Holocene. Its most important legacy is the very precise measurements of the interaction strength between the Z boson and matter, which to this day are unmatched in accuracy. In the second stage of the experiment, called LEP-2, the collision energy was gradually raised to about 200 GeV, so that pairs of W bosons could be produced. The experiment was able to measure the branching fractions for W decays into electrons, muons, and tau leptons.  These are precisely predicted by the Standard Model: they should be equal to 10.8%, independently of the flavor of the lepton (up to a very small correction due to the lepton masses).  However, LEP-2 found 

Br(W → τν)/Br(W → eν) = 1.070 ± 0.029,     Br(W → τν)/Br(W → μν) = 1.076 ± 0.028.

While the decays to electrons and muons conformed very well to the Standard Model predictions, there was a 2.8 sigma excess in the tau channel. The question was whether it was simply a statistical fluctuation or new physics violating the Standard Model's sacred principle of lepton flavor universality. The ratio Br(W → τν)/Br(W → eν) was later measured at the Tevatron, without finding any excess; however, the errors were larger. More recently, there have been hints of large lepton flavor universality violation in B-meson decays, so it was not completely crazy to think that the LEP-2 excess was part of the same story.

The solution came 20 years after LEP-2: there is no large violation of lepton flavor universality in W boson decays. The LHC has already produced hundreds of millions of top quarks, and each of them (as far as we know) creates a W boson in the process of its disintegration. ATLAS used this big sample to compare the W boson decay rates to taus and to muons. Their result:

Br(W → τν)/Br(W → μν) = 0.992 ± 0.013.

There is not the slightest hint of an excess here. But what is most impressive is that the error is smaller, by more than a factor of two, than at LEP-2! After the W boson mass, this is another precision measurement where a dirty hadron collider environment achieves better accuracy than an electron-positron machine.
Yes, more of that :)   

Thanks to the ATLAS measurement, our knowledge of the W boson couplings has increased substantially, as shown in the picture (errors are 1 sigma): 


The current uncertainty is a few per mille. This is still worse than for the Z boson couplings to leptons, where the accuracy is better than a per mille, but we're getting there... Within the present accuracy, the W boson couplings to all leptons are consistent with the Standard Model prediction, and with lepton flavor universality in particular. Some tensions appearing in earlier global fits are all gone. The Standard Model wins again, nothing to see here, we can move on to the next anomaly.

Wednesday, 17 June 2020

Hail the XENON excess

Where were we...  It's been years since particle physics last made an exciting headline. The result announced today by the XENON collaboration is a welcome breath of fresh air. It's too early to say whether it heralds a real breakthrough, or whether it's another bubble to be burst. But it certainly gives food for thought for particle theorists, enough to keep hep-ph going for the next few months.

The XENON collaboration was operating a 1-ton xenon detector in an underground lab in Italy. Originally, this line of experiments was devised to search for hypothetical heavy particles constituting dark matter, so-called WIMPs. For that they offer a basically background-free environment, where a signal of dark matter colliding with xenon nuclei would stand out like a lighthouse. However, all WIMP searches so far have returned zero, null, and nada. Partly out of boredom and despair, the xenon-based collaborations began thinking out of the box to find out what else their shiny instruments could be good for. One idea was to search for axions. These are hypothetical superlight and superweakly interacting particles, originally devised to plug a certain theoretical hole in the Standard Model of particle physics. If they exist, they should be copiously produced in the core of the Sun with energies of order a keV. This is too little to perceptibly knock an atomic nucleus, as xenon weighs over a hundred GeV. However, many variants of the axion scenario, in particular the popular DFSZ model, predict axions interacting with electrons. Then a keV axion may occasionally hit the cloud of electrons orbiting xenon atoms, sending one to an excited level or ionizing the atom. These electron-recoil events can be identified principally by the ratio of ionization and scintillation signals, which is totally different than for WIMP-like nuclear recoils. This is no longer a background-free search, as radioactive isotopes present inside the detector may lead to the same signal. Therefore the collaboration has to search for a peak of electron-recoil events at keV energies.

This is what they saw in the XENON1T data:
Energy spectrum of electron-recoil events measured by the XENON1T experiment. 
The expected background is approximately flat from 30 keV down to the detection threshold at 1 keV, below which it falls off abruptly. On the other hand, the data seem to show a signal component growing towards low energies, and possibly peaking at 1-2 keV. Concentrating on the 1-7 keV range (so with a bit of cherry-picking), 285 events are observed in the data compared to an expected 232 events from the background-only fit. In purely statistical terms, this is a 3.5 sigma excess.
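As a back-of-the-envelope sanity check, a naive Poisson estimate from the quoted counts lands at essentially the same significance:

$$ \frac{285-232}{\sqrt{232}} \;\approx\; \frac{53}{15.2} \;\approx\; 3.5\,. $$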

Assuming it's new physics, what does this mean? XENON shows that there is a flux of light relativistic particles arriving at their detector. The peak of the excess corresponds to the temperature in the core of the Sun (15 million kelvin = 1.3 keV), so our star is a natural source of these particles (though at this point XENON cannot prove they arrive from the Sun). Furthermore, the particles must couple to electrons, because they can knock xenon's electrons off their orbits. Several theoretical models contain particles matching that description. Axions are the primary suspects, because today they are arguably the best motivated extension of the Standard Model. They are naturally light, because their mass is protected by built-in symmetries, and for the same reason their coupling to matter must be extremely suppressed. For QCD axions the defining feature is their coupling to gluons, but in generic constructions one also finds a pseudoscalar-type interaction between the axion a and electrons e:
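(Schematically, with g the dimensionless coupling referred to below:)

$$ \mathcal{L} \;\supset\; i\,g\,a\,\bar{e}\,\gamma_5\,e\,. $$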

To explain the excess, one needs the coupling g to be of order 10^-12, which is totally natural in this context. But axions are by no means the only possibility. A related option is the dark photon, which differs from the axion by certain technicalities; in particular, it has spin 1 instead of spin 0. The palette of viable models is certainly much broader, with the details to be found soon on arXiv.

A distinct avenue to explain the XENON excess is neutrinos. Here, the advantage is that we already know that neutrinos exist, and that the Sun emits some 10^38 of them every second. In fact, the background model used by XENON includes 220 neutrino-induced events in the 1-210 keV range.
However, in the standard picture, the interactions of neutrinos with electrons are too weak to explain the excess. To that end one has to either increase their flux (that is, fiddle with the solar model) or increase their interaction strength with matter (that is, go beyond the Standard Model). For example, neutrinos could interact with electrons via a photon intermediary. While neutrinos do not have an electric charge, uncharged particles can still couple to photons via dipole or higher-multipole moments. It is possible that new physics (possibly the same that generates the neutrino masses) also pumps up the neutrino magnetic dipole moment. This can be described in a model-independent way by adding a non-renormalizable dimension-7 operator to the Standard Model, e.g.
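(one schematic possibility, with the flavor structure suppressed and the operator normalized to the electroweak scale v so that the coefficient d is dimensionless; the precise form is a matter of convention)

$$ \mathcal{L} \;\supset\; \frac{d}{v^3}\,\big(\overline{L^c}\,\epsilon\,H\big)\,\sigma_{\rho\sigma}\,\big(H^T\epsilon\,L\big)\,B^{\rho\sigma} \;+\; {\rm h.c.} $$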
   
To explain the XENON excess we need d of order 10^-6. That means the new physics responsible for the dipole moment must be just around the corner, below 100 TeV or so.

How confident should we be that it's new physics? Experience has shown again and again that anomalies in new physics searches have, with very high probability, a mundane origin that does not involve exotic particles or interactions. In this case, possible explanations are, in order of likelihood, 1) a small contamination of the detector, 2) some other instrumental effect that the collaboration hasn't thought of, 3) the ghost of Roberto Peccei, 4) a genuine signal of new physics. In fact, the collaboration itself is hedging for the first option, as they cannot exclude the presence of a small amount of tritium in the detector, which would produce a signal similar to the observed excess. Moreover, there are a few orange flags for the new physics interpretation:
  1.  The simplest models explaining the excess are excluded by astrophysical observations. If axions can be produced in the Sun at the rate suggested by the XENON result, they can be produced at even larger rates in hotter stars, e.g. in red giants or white dwarfs. This would lead to excessive cooling of these stars, in conflict with observations. The upper limit on the axion-electron coupling g from red giants is 3*10^-13, which is an order of magnitude less than what is needed to explain the XENON excess. The neutrino magnetic moment explanation faces a similar difficulty. Of course, astrophysical limits reside in a different epistemological reality; it is not unheard of that they are relaxed by an order of magnitude or disappear completely. But certainly this is something to worry about.
  2.  At a more psychological level, a small excess over a large background near a detection threshold... sounds familiar. We've seen that before in the case of the DAMA and CoGeNT dark matter experiments, and it didn't turn out well.
  3. The bump is at 1.5 keV, which is *twice* 750 eV.  
So, as usual, more data, time, and patience are needed to verify the new physics hypothesis. On the experimental side, the outlook for the near future is very optimistic, with the XENONnT, LUX-ZEPLIN, and PandaX-4T experiments all jostling for position to confirm the excess and earn eternal glory. On the theoretical side, the big question is whether the stellar cooling constraints can be avoided, without too many epicycles. It would also be good to know whether the particle responsible for the XENON excess could be related to dark matter and/or to other existing anomalies, in particular to the B-meson ones. For answers, tune in to arXiv, from tomorrow on.

Wednesday, 20 June 2018

Both g-2 anomalies

Two months ago an experiment in Berkeley announced a new ultra-precise measurement of the fine structure constant α using interferometry techniques. This wasn't much noticed because the paper is not on arXiv, and moreover this kind of research is filed under metrology, which is easily confused with meteorology. So it's worth commenting on why precision measurements of α could be interesting for particle physics. What the Berkeley group really did was to measure the mass of the cesium-133 atom, achieving a relative accuracy of 4*10^-10, that is 0.4 parts per billion (ppb). With that result in hand, α can be determined after a cavalier rewriting of the high-school formula for the Rydberg constant:
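(The rewriting simply splits the electron mass into a mass ratio times the cesium mass:)

$$ Ry = \frac{\alpha^2\, m_e c^2}{2} \quad\Longrightarrow\quad \alpha^2 = \frac{2\,Ry}{c^2}\cdot\frac{m_{\rm Cs}}{m_e}\cdot\frac{1}{m_{\rm Cs}}\,. $$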
Everybody knows the first 3 digits of the Rydberg constant, Ry≈13.6 eV, but actually it is experimentally known with the fantastic accuracy of 0.006 ppb, and the electron-to-atom mass ratio has also been determined precisely. Thus the measurement of the cesium mass can be translated into a 0.2 ppb measurement of the fine structure constant: 1/α=137.035999046(27).

You may think that this kind of result could appeal only to a Pythonesque chartered accountant. But you would be wrong. First of all, the new result excludes α = 1/137 at 1 million sigma, dealing a mortal blow to the field of epistemological numerology. Perhaps more importantly, the result is relevant for testing the Standard Model. One place where precise knowledge of α is essential is the calculation of the magnetic moment of the electron. Recall that the g-factor is defined as the proportionality constant between the magnetic moment and the angular momentum. For the electron we have
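(schematically, keeping only Schwinger's one-loop term explicit)

$$ g_e = 2\left(1 + \frac{\alpha}{2\pi} + \dots\right). $$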
Experimentally, ge is one of the most precisely determined quantities in physics, with the most recent measurement quoting ae = 0.00115965218073(28), that is 0.0001 ppb accuracy on ge, or 0.2 ppb accuracy on ae. In the Standard Model, ge is calculable as a function of α and other parameters. In the classical approximation ge=2, while the one-loop correction proportional to the first power of α was already known in prehistoric times thanks to Schwinger. The dots above summarize decades of subsequent calculations, which now include O(α^5) terms, that is 5-loop QED contributions! Thanks to these heroic efforts (depicted in the film For a Few Diagrams More - a sequel to Kurosawa's Seven Samurai), the main theoretical uncertainty in the Standard Model prediction of ge is due to the experimental error on the value of α. The Berkeley measurement allows one to reduce the relative theoretical error on ae down to 0.2 ppb: ae = 0.00115965218161(23), which matches in magnitude the experimental error and improves by a factor of 3 on the previous prediction based on the α measurement with rubidium atoms.

At the spiritual level, the comparison between theory and experiment provides an impressive validation of quantum field theory techniques up to the 13th significant digit - a theoretical accuracy unimaginable in other branches of science. More practically, it also provides a powerful test of the Standard Model. New particles coupled to the electron may contribute to the same loop diagrams from which ge is calculated, and could shift the observed value of ae away from the Standard Model prediction. In many models, corrections to the electron and muon magnetic moments are correlated. The latter famously deviates from the Standard Model prediction by 3.5 to 4 sigma, depending on who counts the uncertainties. Actually, if you bother to compare carefully the experimental and theoretical values of ae beyond the 10th significant digit you can see that they are also discrepant, this time at the 2.5 sigma level. So now we have two g-2 anomalies! In a picture, the situation can be summarized as follows:

If you're a member of the Holy Church of Five Sigma you can almost preach an unambiguous discovery of physics beyond the Standard Model. However, for most of us this is not the case yet. First, there is still some debate about the theoretical uncertainties entering the muon g-2 prediction. Second, while it is quite easy to fit each of the two anomalies separately, there seems to be no appealing model to fit both of them at the same time. Take for example the very popular toy model with a new massive spin-1 Z' boson (aka the dark photon) kinetically mixed with the ordinary photon. In this case Z' has, much like the ordinary photon, vector-like and universal couplings to electrons and muons. But this leads to a positive contribution to g-2, and it does not fit well the ae measurement, which favors a new negative contribution. In fact, the ae measurement provides the most stringent constraint in part of the parameter space of the dark photon model. Conversely, a Z' boson with purely axial couplings to matter does not fit the data either, as it gives a negative contribution to g-2, thus making the muon g-2 anomaly worse. What might work is a hybrid model with a light Z' boson having lepton-flavor violating interactions: a vector coupling to muons and a somewhat smaller axial coupling to electrons. But constructing a consistent and realistic model along these lines is a challenge because of other experimental constraints (e.g. from the lack of observation of μ→eγ decays). Some food for thought can be found in this paper, but I'm not sure if a sensible model exists at the moment. If you know one you are welcome to drop a comment here or a paper on arXiv.

More excitement on this front is in store. The muon g-2 experiment at Fermilab should soon deliver its first results, which may confirm or disprove the muon anomaly. Further progress with the electron g-2 and fine-structure constant measurements is also expected in the near future. The biggest worry is that, if the accuracy improves by another two orders of magnitude, we will need to calculate six-loop QED corrections...