Wednesday, 20 June 2018

Both g-2 anomalies

Two months ago an experiment in Berkeley announced a new ultra-precise measurement of the fine structure constant α using interferometry techniques. This wasn't much noticed, because the paper is not on arXiv, and moreover this kind of research is filed under metrology, which is easily confused with meteorology. So it's worth commenting on why precision measurements of α could be interesting for particle physics. What the Berkeley group really did was to measure the mass of the cesium-133 atom, achieving a relative accuracy of 4*10^-10, that is 0.4 parts per billion (ppb). With that result in hand, α can be determined after a cavalier rewriting of the high-school formula for the Rydberg constant:

Ry = (1/2) α^2 me c^2,   that is   α^2 = 2 Ry/(me c^2) = [2 Ry/(mCs c^2)] * (mCs/me).
Everybody knows the first 3 digits of the Rydberg constant, Ry ≈ 13.6 eV, but it is actually known experimentally with the fantastic accuracy of 0.006 ppb, and the electron-to-cesium-atom mass ratio has also been determined very precisely. Thus the measurement of the cesium mass can be translated into a 0.2 ppb measurement of the fine structure constant: 1/α = 137.035999046(27).
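As a quick sanity check, the chain from the cesium mass to α can be traced numerically. The inputs below are truncated, illustration-only values (in particular the cesium mass is quoted here to just a handful of digits, nowhere near the 0.4 ppb of the actual measurement):

```python
import math

# Truncated, illustration-only inputs (the real values carry far more digits)
Ry = 13.605693            # Rydberg constant in eV
mCs_c2 = 123.8006e9       # cesium-133 rest energy mCs*c^2 in eV (the measured mass)
mCs_over_me = 242271.85   # cesium-133 atom to electron mass ratio

# Ry = (1/2) alpha^2 me c^2  =>  alpha^2 = 2*Ry*(mCs/me)/(mCs*c^2)
alpha = math.sqrt(2 * Ry * mCs_over_me / mCs_c2)
print(f"1/alpha ~ {1/alpha:.3f}")   # ~ 137.036
```

Even with these rounded inputs the familiar 137.036 pops out; the quoted 0.2 ppb precision comes from feeding in the full-precision measurements.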

You may think that this kind of result could appeal only to a Pythonesque chartered accountant. But you would be wrong. First of all, the new result excludes α = 1/137 at 1 million sigma, dealing a mortal blow to the field of epistemological numerology. Perhaps more importantly, the result is relevant for testing the Standard Model. One place where precise knowledge of α is essential is in the calculation of the magnetic moment of the electron. Recall that the g-factor is defined as the proportionality constant between the magnetic moment and the angular momentum. For the electron we have

ge = 2 [ 1 + α/(2π) + ... ],

where ae ≡ (ge-2)/2 is known as the anomalous magnetic moment.
Experimentally, ge is one of the most precisely determined quantities in physics, with the most recent measurement quoting ae = 0.00115965218073(28), that is 0.0001 ppb accuracy on ge, or 0.2 ppb accuracy on ae. In the Standard Model, ge is calculable as a function of α and other parameters. In the classical approximation ge = 2, while the one-loop correction proportional to the first power of α was already known in prehistoric times thanks to Schwinger. The dots above summarize decades of subsequent calculations, which now include O(α^5) terms, that is 5-loop QED contributions! Thanks to these heroic efforts (depicted in the film For a Few Diagrams More - a sequel to Kurosawa's Seven Samurai), the main theoretical uncertainty in the Standard Model prediction of ge is due to the experimental error on the value of α. The Berkeley measurement allows one to reduce the relative theoretical error on ae down to 0.2 ppb: ae = 0.00115965218161(23), which matches in magnitude the experimental error and improves by a factor of 3 on the previous prediction, based on the α measurement with rubidium atoms.
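To get a feel for how the series converges, one can evaluate the first three universal (mass-independent) QED coefficients numerically. The full Standard Model prediction also contains mass-dependent, hadronic, and electroweak pieces, which this sketch ignores:

```python
import math

alpha = 1 / 137.035999046        # the Berkeley value
x = alpha / math.pi

# Universal QED coefficients of ae = sum_n C_n (alpha/pi)^n at 1, 2, 3 loops
C = [0.5, -0.328478965579, 1.181241456]

ae_qed = sum(c * x**(n + 1) for n, c in enumerate(C))
print(f"ae through 3 loops = {ae_qed:.11e}")
# The Schwinger term alone, alpha/(2 pi) ~ 0.00116, already reproduces the
# first three digits; three loops agree with the measured
# 0.00115965218073 to roughly 1e-10
```

The remaining gap is of the size of the 4-loop term, which is why the 5-loop calculations mentioned above actually matter at current precision.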

At the spiritual level, the comparison between theory and experiment provides an impressive validation of quantum field theory techniques up to the 13th significant digit - an unimaginable theoretical accuracy in other branches of science. More practically, it also provides a powerful test of the Standard Model. New particles coupled to the electron may contribute to the same loop diagrams from which ge is calculated, and could shift the observed value of ae away from the Standard Model prediction. In many models, corrections to the electron and muon magnetic moments are correlated. The latter famously deviates from the Standard Model prediction by 3.5 to 4 sigma, depending on who counts the uncertainties. And if you examine the experimental and theoretical values of ae carefully, beyond the 10th significant digit, you can see that they too are discrepant, this time at the 2.5 sigma level. So now we have two g-2 anomalies! In a picture, the situation can be summarized as follows:

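The 2.5 sigma figure quoted above is a one-line exercise: subtract the two central values of ae and divide by the errors added in quadrature:

```python
import math

ae_exp, err_exp = 0.00115965218073, 0.28e-12   # measurement
ae_th,  err_th  = 0.00115965218161, 0.23e-12   # SM prediction with Berkeley alpha

pull = (ae_exp - ae_th) / math.hypot(err_exp, err_th)
print(f"pull = {pull:.2f} sigma")   # about -2.4: experiment sits below the SM
```

Depending on rounding conventions this comes out between 2.4 and 2.5 sigma; the sign tells you the measurement sits below the prediction.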
If you're a member of the Holy Church of Five Sigma you can almost preach an unambiguous discovery of physics beyond the Standard Model. However, for most of us this is not the case yet. First, there is still some debate about the theoretical uncertainties entering the muon g-2 prediction. Second, while it is quite easy to fit each of the two anomalies separately, there seems to be no appealing model that fits both of them at the same time. Take for example the very popular toy model with a new massive spin-1 Z' boson (aka the dark photon) kinetically mixed with the ordinary photon. In this case the Z' has, much like the ordinary photon, vector-like and universal couplings to electrons and muons. But this leads to a positive contribution to g-2, and it does not fit the ae measurement well, which favors a new negative contribution. In fact, the ae measurement provides the most stringent constraint in part of the parameter space of the dark photon model. Conversely, a Z' boson with purely axial couplings to matter does not fit the data either, as it gives a negative contribution to g-2, thus making the muon g-2 anomaly worse. What might work is a hybrid model with a light Z' boson having lepton-flavor violating interactions: a vector coupling to muons and a somewhat smaller axial coupling to electrons. But constructing a consistent and realistic model along these lines is a challenge because of other experimental constraints (e.g. from the non-observation of μ→eγ decays). Some food for thought can be found in this paper, but I'm not sure a sensible model exists at the moment. If you know one, you are welcome to drop a comment here or a paper on arXiv.

More excitement on this front is in store. The muon g-2 experiment at Fermilab should soon deliver its first results, which may confirm or refute the muon anomaly. Further progress with the electron g-2 and fine structure constant measurements is also expected in the near future. The biggest worry is that, if the accuracy improves by another two orders of magnitude, we will need to calculate the six-loop QED corrections...

17 comments:

John Duffield said...

There's an issue Jasper, in that alpha is a running constant which varies with energy as per this NIST page. I'm confident that this means it varies with gravity (and that this in no way challenges GR). The Solar Probe Plus mission was going to test this, but like Clifford M Will says on page 14 of https://arxiv.org/abs/1403.7377, it's now going to be only a heliophysics mission. Shame. Another issue is that the mass of the electron (and the mass of the cesium atom) varies in line with gravitational potential, see the mass deficit. That's because the speed of light is not constant. I kid ye not. All the constants are sliding around, so if you can calculate something to huge precision, you have to be absolutely sure they all cancel one another. Gamma ray bursters say they aren't.

Clara, once known as Nemo said...

My prof at university used to say: never believe a measurement that gives a standard deviation with two significant digits. I tend to agree. One should be careful in drawing conclusions from any such value.

andrew said...

Could someone explain the final chart? I don't quite understand it.

Mad Hatter said...

The ellipses describe the combined likelihood for the muon and electron g-2. The best fit point (cross) is where the electron g-2 is smaller than the SM prediction and the muon g-2 is larger, with the SM point disfavored at 5 sigma. The dotted colored lines show the predictions for electron and muon g-2 in a toy model with a new 20 MeV Z' boson under different hypotheses for its vector and axial couplings to electrons and muons.

John Duffield said...

Aaargh! I said Jasper instead of Jester. And I should have said Mad Hatter! A thousand apologies. Re the gamma ray bursters, see the 2013 AMPS paper, An Apologia for Firewalls. Tucked away in the conclusion is footnote 31, containing reference 87 to Friedwardt Winterberg's 2001 paper Gamma Ray Bursters and Lorentzian Relativity. What he's saying is that if you dropped an electron or a cesium atom into a black hole, it would be ripped apart into photons and neutrinos. I think he's right. So if you calculate something with great precision at sea level, there may be an issue, because a measurement at the top of a mountain might yield a different result.

dlb said...

There is something in your post (thank you for it) I'm not sure I understand. You say:

"If you're a member of the Holy Church of Five Sigma you can now preach an unambiguous discovery of physics beyond the Standard Model. However, for most of us this is not the case yet. First, there is still some debate about the theoretical uncertainties entering the muon g-2 prediction. Second, while it is quite easy to fit each of the two anomalies separately, there seems to be no appealing model to fit both of them at the same time."

You seem to imply that, because we can't find "an appealing model to fit both of them at the same time" we should have doubts about these measurements. It seems to me that it should not matter, and that if the Standard Model predicts a combined result that is excluded at 5 sigma, the fact we can't find this appealing model makes no difference to what we should think of the Standard Model itself regarding this result. Or maybe I misinterpreted your argument?

Mad Hatter said...

That's right, I think the question of whether or not an appealing model exists is relevant for interpreting the anomalies. If it weren't, I could cherry-pick a number of anomalous results across particle physics (say, MiniBooNE νe appearance, LHCb B → K l+l- branching fractions, the 95 GeV diphoton resonance at CMS, etc.), combine their likelihoods, and then claim absurdly strong evidence for physics beyond the SM. We would certainly have done that if there had been a sensible model explaining all of them, but this is not the case. In more hipster wording, we internally apply a Bayesian prior to correct our level of confidence in the experimental result. In the case at hand, I tend to think that at least one of the anomalies is due to a statistical fluctuation or a systematic error. If someone writes down a good model explaining both g-2 anomalies at the same time, I may correct my level of confidence.

Loren said...

The FSC's effective value does vary with momentum transfer, but that variation is very well-understood, and the FSC's reference value is for the zero-momentum limit. There are similar conventions for other parameters, like the electron's mass, though it gets tricky for quarks and QCD. Perhaps Jester might blog on that issue some time.

That issue aside, we could estimate parameter values for BSM particles from these results, though doing so is rather model-dependent. For a Z' that connects the initial and final electrons, like a photon, we get ae = O(1)*g^2/(4 pi^2)*(me/mz)^2 for electron-Z' coupling g, electron mass me, and Z' mass mz. For g ~ e, this gives mz ~ 16 GeV for the electron, and ~100 GeV for the muon. The electron's value especially is rather uncomfortably small.
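One concrete choice for the O(1) factor above is the standard heavy-Z' one-loop result for a pure vector coupling, Δa = g^2/(12 π^2) * (ml/mz)^2. With that choice (an assumption, since only the order of magnitude is fixed) and rough central values for the anomalies, one lands in the same ballpark as the figures quoted above:

```python
import math

alpha = 1 / 137.036
g2 = 4 * math.pi * alpha   # g ~ e, i.e. g^2 = 4*pi*alpha ~ 0.09

def mz_needed(m_lepton_gev, delta_a):
    """Z' mass in GeV giving the shift delta_a for a vector coupling g ~ e,
    assuming delta_a = g^2/(12 pi^2) * (m_lepton/m_Z')^2."""
    return m_lepton_gev * math.sqrt(g2 / (12 * math.pi**2 * delta_a))

print(mz_needed(0.000511, 0.88e-12))   # electron anomaly: ~15 GeV
print(mz_needed(0.10566, 2.7e-9))      # muon anomaly: ~57 GeV
```

The residual mismatch with the 16 and 100 GeV quoted above reflects the O(1) ambiguity in the prefactor.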

Mad Hatter said...

Yes, if anything, g-2 hints at a new very light and very weakly coupled particle. For the ad-hoc model I discussed (the dotted green line in the plot) a good fit is obtained for the Z' coupling to electrons of order 10^-4.
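As a rough cross-check of that number: for a Z' heavier than the electron, the one-loop shift is often quoted as Δae = (gV^2 - 5 gA^2)/(12 π^2) * (me/mz)^2. Taking a purely axial coupling (which carries the negative sign the electron anomaly prefers) and inverting for gA, with the formula and inputs understood as back-of-the-envelope assumptions:

```python
import math

me = 0.511            # electron mass in MeV
mz = 20.0             # Z' mass in MeV, as in the toy model in the plot
delta_ae = 0.88e-12   # size of the negative shift preferred by the data

# delta_ae ~ -5*gA^2/(12 pi^2) * (me/mz)^2 for a purely axial heavy Z'
gA = (mz / me) * math.sqrt(12 * math.pi**2 * delta_ae / 5)
print(f"gA ~ {gA:.1e}")   # of order 10^-4
```

So a coupling of order 10^-4 indeed comes out for a 20 MeV Z', consistent with the green dotted line in the plot.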

Ervin Goldfain said...

"Yes, if anything, g-2 hints at a new very light and very weakly coupled particle."

Is it true, however, that there is no hint for Z' in all LHC data accumulated so far? Do you expect that Z' will show up at some point with additional fb^(-1)?

Loren said...

I stated "uncomfortably small" because both values are within accelerator energies, especially the electron's. This raises the question of how these particles escaped detection in other experiments. A possible answer is that their interactions mimic known particles' interactions too closely. A Z' could be produced as bremsstrahlung, and could then decay into a pair of electrons or muons. But that may be difficult to distinguish from a virtual photon doing all of that. Furthermore, an electron Z' must couple much more weakly to muons than to electrons, to avoid driving the muon anomaly up too much.

Turning to John Duffield's comments, I've found https://tf.nist.gov/general/pdf/2238.pdf -- Improved Limits on Variation of the Fine Structure Constant and Violation of Local Position Invariance. This and other experiments provide strong upper limits on anomalous behavior of space-time (Lorentz violation, etc.), anomalous matter-gravity couplings, long-range "fifth forces", and other departures from general relativity. So gravitational effects can be ignored here.

Mad Hatter said...

Loren: if the Z' has a small enough coupling to electrons, it can avoid detection even when it's light. Look e.g. at Fig. 1 of http://arxiv.org/abs/1609.09072 : an O(10 MeV) Z' with O(10^-4) couplings is allowed, which is the right ballpark to fit the electron g-2 in this mass range. The small mass is not a showstopper, although it does require some clever model building.
Ervin: the LHC is good at excluding weak-scale (m ~ 100 GeV), weakly coupled (g >~ 10^-2) particles, but it's less efficient for very light and very weakly coupled ones. For those, low energy facilities may have a better shot.

Ervin Goldfain said...

"For that, low energy facilities may have a better shot."

True in general, but not everyone seems to agree, see e.g.

https://arxiv.org/abs/1406.2980

https://physics.aps.org/synopsis-for/10.1103/PhysRevLett.119.131804

Mic F said...

The 3.5 sigma discrepancy in the muon g-2 became 4.5 sigma in your plot ... optimism? MF

Mad Hatter said...

My bad, in the plot I forgot to take into account the theoretical uncertainty on the muon g-2 prediction. It's fixed, thx.

Unknown said...

Jester, how should one read the last sentence? If, just for the argument's sake, (at least) one of the anomalies were confirmed at 5 sigma, would that mean ironclad proof of BSM physics, or would it still be conditional on further theoretical work, that is, on the error from higher-loop corrections?

Mad Hatter said...

For the muon g-2 the biggest issue seems to be not the higher-loop corrections but rather the non-perturbative QCD contributions. In this case the input from future lattice calculations may be more crucial than an eventual increase in statistical significance. The electron g-2 seems to be on firmer ground theoretically (though the 5-loop QED calculation should be verified independently at some point), so in this case a 5 sigma statistical significance would be more readily interpreted as new physics. All in all, the interpretation is a bit subjective, but we probably need some combination of lattice + good model + more sigmas to be totally convinced.