Thursday, 8 April 2021

Why is it when something happens it is ALWAYS you, muons?

April 7, 2021 was like a good TV episode: high-speed action, plot twists, and a cliffhanger ending. We now know that the strength of the little magnet inside the muon is described by the g-factor: 

g = 2.00233184122(82).

Any measurement of basic properties of matter is priceless, especially when it comes with this incredible precision.  But for a particle physicist the main source of excitement is that this result could herald the breakdown of the Standard Model. The point is that the g-factor or the magnetic moment of an elementary particle can be calculated theoretically to a very good accuracy. Last year, the white paper of the Muon g−2 Theory Initiative came up with a consensus value for the Standard Model prediction 

g = 2.00233183620(86), 

which is significantly smaller than the experimental value.  The discrepancy is estimated at 4.2 sigma, assuming the theoretical error is Gaussian and combining the errors in quadrature. 
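For the record, the 4.2 sigma quoted above is just arithmetic on the two g values; a quick Python check (back of the envelope, nothing official):

```python
import math

# g-factor: Fermilab/Brookhaven experiment vs the white-paper SM prediction
g_exp, err_exp = 2.00233184122, 0.00000000082
g_th,  err_th  = 2.00233183620, 0.00000000086

# combine the errors in quadrature, as stated in the text
delta = g_exp - g_th
sigma = delta / math.sqrt(err_exp**2 + err_th**2)
print(round(sigma, 1))  # → 4.2
```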

As usual, when we see an experiment and the Standard Model disagree, these 3 things come to mind first:

  1.  Statistical fluctuation. 
  2.  Flawed theory prediction. 
  3.  Experimental screw-up.   

The odds for 1. are extremely low in this case.  3. is not impossible but unlikely as of April 7. Basically the same experiment was repeated twice, first at Brookhaven 20 years ago, and now at Fermilab, yielding very consistent results. One day it would be nice to get an independent confirmation using alternative experimental techniques, but we are not losing any sleep over it. It is fair to say, however, that 2. is not yet written off by most of the community. The process leading to the Standard Model prediction is of enormous complexity. It combines technically challenging perturbative calculations (5-loop QED!), data-driven methods, and non-perturbative inputs from dispersion relations, phenomenological models, and lattice QCD. One especially difficult contribution to evaluate is due to loops of light hadrons (pions etc.) affecting photon propagation.  In the white paper, this hadronic vacuum polarization is related by theoretical tricks to low-energy electron scattering and determined from experimental data. However, the currently most precise lattice evaluation of the same quantity gives a larger value that would bring the Standard Model prediction closer to the experiment. The lattice paper first appeared a year ago but was only now published in Nature, in a well-timed move that can be compared to an ex crashing a wedding party. The theory and experiment are now locked in a three-way standoff, and we are waiting for the shootout to see which prediction survives. Until this controversy is resolved, a cloud of doubt will hang over every interpretation of the muon g-2 anomaly.   

  But let us assume for a moment that the white paper value is correct. This would be huge, as it would mean that the Standard Model does not fully capture how muons interact with light. The correct interaction Lagrangian would have to be, schematically (pardon my Greek),

L  ⊃  e (μ̄ γ_ν μ) A_ν  +  (1/Λ) (μ̄ σ_νρ μ) F_νρ,    Λ ~ 10^9 GeV.
The first term is the renormalizable minimal coupling present in the Standard Model, which gives the Coulomb force and all the usual electromagnetic phenomena. The second term is called the magnetic dipole. It leads to a small shift of the muon g-factor, so as to explain the Brookhaven and Fermilab measurements.  This is a non-renormalizable interaction, and so it must be an effective description of virtual effects of some new particle from beyond the Standard Model. Theorists have invented countless models for this particle in order to address the old Brookhaven measurement, and the Fermilab update changes little in this enterprise. I will write about it another time.  For now, let us just crunch some numbers to highlight one general feature. Even though the scale suppressing the effective dipole operator is in the EeV range, there are indications that the culprit particle is much lighter than that. First, electroweak gauge invariance forces its mass to be less than ~100 TeV in a rather model-independent way.  Next, in many models contributions to muon g-2 come with the chiral suppression proportional to the muon mass. Moreover, they typically appear at one loop, so the operator will pick up a loop suppression factor unless the new particle is strongly coupled.  The same dipole operator as above can be more suggestively recast as

(g^2/16π^2) × (m_μ/(300 GeV)^2) × (μ̄ σ_νρ μ) F_νρ.
The scale 300 GeV appearing in the denominator indicates that the new particle should be around the corner!  Indeed, the discrepancy between the theory and experiment is larger than the contribution of the W and Z bosons to the muon g-2, so it seems logical to put the new particle near the electroweak scale. That's why the stakes of the April 7 Fermilab announcement are so enormous. If the gap between the Standard Model and experiment is real, the new particles and forces responsible for it should be within reach of the present or near-future colliders. This would open a new experimental era that is almost too beautiful to imagine. And for theorists, it would bring new pressing questions about who ordered it. 
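To make these scales concrete, here is my own back-of-the-envelope estimate (O(1) factors dropped, and the unknown new-physics coupling set to 1):

```python
import math

m_mu    = 0.1057   # muon mass in GeV
e       = 0.303    # electromagnetic coupling, sqrt(4*pi*alpha)
delta_a = 2.51e-9  # measured minus predicted a_mu = (g-2)/2

# Bare dipole operator (1/Lambda) mu sigma mu F: delta_a ~ 4*m_mu/(e*Lambda)
lam_raw = 4 * m_mu / (e * delta_a)
print(f"{lam_raw:.1e} GeV")  # ~6e8 GeV, approaching the EeV range

# With chiral suppression (extra m_mu) and a one-loop factor 1/(16 pi^2):
# delta_a ~ (1/16pi^2) * m_mu^2 / M^2  =>  M near the electroweak scale
M = math.sqrt(m_mu**2 / (16 * math.pi**2 * delta_a))
print(f"{M:.0f} GeV")        # a couple hundred GeV
```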

Thursday, 1 April 2021

April Fools '21: Trouble with g-2

On April 7, the g-2 experiment at Fermilab was supposed to reveal their new measurement of the magnetic moment of the muon.  *Was*, because the announcement may be delayed for the most bizarre reason. You may have heard that the data are blinded to avoid biasing the outcome. This is now standard practice, but the g-2 collaboration went further: they are unable to unblind the data by themselves, to make sure that there are no leaks or temptations. Instead, the unblinding procedure requires input from an external person, who is one of the Fermilab theorists. How does this work? The experiment measures the frequency of precession of antimuons circulating in a ring. From that and the known magnetic field the sought fundamental quantity - the magnetic moment of the muon, or g-2 for short - can be read off.  However, the whole analysis chain is performed using a randomly chosen number instead of the true clock frequency. Only at the very end, once all statistical and systematic errors are determined, is the true frequency inserted and the final result uncovered. For that last step they need to type the secret code into this machine, which looks like something from a 60s movie: 
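The logic of the blinding can be caricatured in a few lines of code (a toy model of my own, not the actual g-2 software; the frequency value is a placeholder):

```python
import random

def blind(true_freq_khz, secret_offset_khz):
    # the hardware adds a secret offset known only to the external theorist
    return true_freq_khz + secret_offset_khz

def analyze(blinded_freq_khz):
    # all cuts, fits, and error estimates are done on the blinded number
    stat_err_khz = 0.001  # placeholder uncertainty
    return blinded_freq_khz, stat_err_khz

def unblind(result_khz, secret_offset_khz):
    # only at the very end is the offset removed, revealing the true result
    return result_khz - secret_offset_khz

secret = random.uniform(-25, 25)            # the code locked in the envelope
measured, err = analyze(blind(229.0, secret))
print(round(unblind(measured, secret), 6))  # → 229.0, whatever the secret was
```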

The code was picked by the Fermilab theorist, and he is the only person who knows it.  Therein lies the rub... this theorist now refuses to give away the code.  It is not clear why. One time he said he had forgotten the envelope with the code on a train; another time he said the dog had eaten it. For the last few days he has locked himself in his home and stopped taking any calls. 

The situation is critical. PhD students from the collaboration are working round the clock to crack the code. They are basically trying out all possible combinations, but the process is painfully slow and may take months, delaying the long-expected announcement.  The collaboration even got permission from the Fermilab director to search the office of said theorist.  But they only found this piece of paper behind the bookshelf: 

It may be that the paper holds a clue about the code. If you have any idea what the code may be please email fermilab@fnal.gov or just write it in the comments below. 


Update: a part of this post (but strangely enough not all of it) is an April Fools' joke. The new g-2 results are going to be presented on April 7, 2021, as planned.  The code is OPE, which stands for "operator product expansion", an important technique used in the theoretical calculation of hadronic corrections to the muon g-2. 



Monday, 29 March 2021

Thoughts on RK

The hashtag #CautiouslyExcited is trending on Twitter, in spite of the raging plague. The updated RK measurement in LHCb has made a big splash and has been covered by every news outlet.  RK measures the ratio of the B->Kμμ and B->Kee decay probabilities, which the Standard Model predicts to be very close to one. Using all the data collected so far, LHCb instead finds RK = 0.846 with an error of 0.044. This is the same central value as their 2019 result based on half of the data, but with a 30% smaller error.  Mathematically speaking, the update does not change the global picture of the B-meson anomalies much. However, it has an important psychological impact, which goes beyond the PR story of crossing the 3 sigma threshold. Let me explain why. 

For the last few decades, every deviation from the Standard Model prediction in a particle collider experiment would mean one of these 3 things:    

  1. Statistical fluctuation. 
  2. Flawed theory prediction. 
  3. Experimental screw-up.   

In the case of RK, option 2 is not a worry.  Yes, flavor physics is a swamp full of snake pits, but in the RK ratio the dangerous hadronic uncertainties cancel out to a large extent, so that precise theoretical predictions are possible.  Before March 23 the biggest worry was option 1.  Indeed, 2-3 sigma fluctuations happen all the time at the LHC, due to the huge number of measurements being taken.  However, you expect statistical fluctuations to decrease in significance as more data are collected.  This is what seems to be happening to the sister RD anomaly, and the earlier history of RK was not very encouraging either (in the 2019 update the significance neither increased nor decreased).  The fact that, this time, the significance of the RK anomaly increased, more or less as you would expect assuming it is a genuine new physics signal, makes it unlikely that it is merely a statistical fluctuation.  This is the main reason for the excitement you may perceive among particle physicists these days. 
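The arithmetic behind these statements is simple (naive Gaussian counting; the LHCb likelihood analysis is of course more subtle):

```python
import math

rk, err = 0.846, 0.044            # LHCb result and its uncertainty
print(round((1 - rk) / err, 1))   # → 3.5 sigma away from the SM value RK=1

# If the central value is genuine, doubling the dataset shrinks the error
# by sqrt(2), and the significance should keep growing:
print(round((1 - rk) / (err / math.sqrt(2)), 1))  # → 4.9
```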

On the other hand,  option 3. remains a possibility.  In their analysis,  LHCb reconstructed 3850 B->Kμμ decays vs. 1640 B->Kee decays, yet from that they concluded that decays to muons are less probable than those to electrons. This is because one has to take into account the different reconstruction efficiencies for muons and electrons. The estimate of that efficiency is the most difficult ingredient of the measurement,  and the LHCb folks have spent many nights of heavy drinking worrying about it. Of course, they have made multiple cross-checks and are quite confident that there is no mistake but... there will always be a shadow of a doubt until RK is confirmed by an independent experiment. Fortunately for everyone, verification will be provided by the Belle-II experiment, probably 3-4 years from now. Only when Belle-II sees the same thing will we breathe a sigh of relief and put all our money on option

4. Physics beyond the Standard Model 

From that point of view, explaining the RK measurement is trivial.  All we need is to add a new kind of interaction between b- and s-quarks and muons to the Standard Model Lagrangian.  For example, this 4-fermion contact term will do: 

(1/Λ^2) (Q̄3 γ_ρ Q2) (L̄2 γ^ρ L2)  +  h.c.,
where Q3=(t,b), Q2=(c,s), L2=(νμ,μ). The Standard Model won't let you have this interaction because it violates one of its founding principles: renormalizability.  But we know that the Standard Model is just an effective theory, and that non-renormalizable interactions must exist in nature, even if they are very suppressed so as to be unobservable most of the time.  In particular, neutrino oscillations are best explained by certain dimension-5 non-renormalizable interactions.  RK may be the first evidence that also dimension-6 non-renormalizable interactions exist in nature.  The nice thing is that the interaction term above 1) does not violate any existing experimental constraints,  2) explains not only RK but also some other 2-3 sigma tensions in the data (RK*, P5'),  and 3) fits well with some smaller 1-2 sigma effects (Bs->μμ, RpK,...). The existence of a simple theoretical explanation and a consistent pattern in the data is the other element that prompts cautious optimism.  
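For orientation, the coefficient needed to fit the anomaly can be translated into a mass scale Λ. Matching to the usual normalization of the effective b→sμμ Hamiltonian, where global fits prefer shifting the Wilson coefficient C9 by about -1, gives (my rough estimate; exact numbers depend on conventions):

```python
import math

# C9 ~ -1 in units of (alpha/4pi) * (4 G_F/sqrt(2)) * |Vtb Vts*|
GF, alpha, vtb_vts = 1.166e-5, 1 / 137.0, 0.04   # G_F in GeV^-2
coeff = (alpha / (4 * math.pi)) * (4 * GF / math.sqrt(2)) * vtb_vts  # GeV^-2

lam = 1 / math.sqrt(coeff)     # scale suppressing the contact term
print(f"{lam / 1e3:.0f} TeV")  # → 36 TeV
```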

The LHC run-3 is coming soon, and with it more data on RK.  In the shorter term (less than a year?) there will be other important updates (RK*, RpK) and new observables (Rϕ, RK*+) probing the same physics. Finally, something to wait for.   

Saturday, 1 August 2020

Death of a forgotten anomaly

Anomalies come with a big splash, but often go down quietly. A recent ATLAS measurement, just posted on arXiv, killed a long-standing and by now almost forgotten anomaly from the LEP collider.  LEP was an electron-positron collider operating some time in the late Holocene. Its most important legacy is the very precise measurements of the interaction strength between the Z boson and matter, which to this day are unmatched in accuracy. In the second stage of the experiment, called LEP-2, the collision energy was gradually raised to about 200 GeV, so that pairs of W bosons could be produced. The experiment was able to measure the branching fractions for W decays into electrons, muons, and tau leptons.  These are precisely predicted by the Standard Model: they should be equal to 10.8%, independently of the flavor of the lepton (up to a very small correction due to the lepton masses).  However, LEP-2 found 

Br(W → τν)/Br(W → eν) = 1.070 ± 0.029,     Br(W → τν)/Br(W → μν) = 1.076 ± 0.028.

While the decays to electrons and muons conformed very well to the Standard Model predictions, there was a 2.8 sigma excess in the tau channel. The question was whether it was simply a statistical fluctuation or new physics violating the Standard Model's sacred principle of lepton flavor universality. The ratio Br(W → τν)/Br(W → eν) was later measured at the Tevatron, without finding any excess, though with larger errors. More recently, there have been hints of large lepton flavor universality violation in B-meson decays, so it was not completely crazy to think that the LEP-2 excess was part of the same story.  
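The numbers are easy to check; naively, each ratio on its own gives (ignoring the correlation from the shared τν numerator, which the properly combined 2.8 sigma accounts for):

```python
# LEP-2 ratios of W branching fractions quoted above: (value, error)
ratios = {"tau/e": (1.070, 0.029), "tau/mu": (1.076, 0.028)}
for name, (r, err) in ratios.items():
    print(name, round((r - 1) / err, 1))  # → 2.4 and 2.7 sigma
```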

The solution came 20 years after LEP-2: there is no large violation of lepton flavor universality in W boson decays. The LHC has already produced hundreds of millions of top quarks, and each of them (as far as we know) creates a W boson in the process of its disintegration. ATLAS used this big sample to compare the W boson decay rates to taus and to muons. Their result: 

Br(W → τν)/Br(W → μν) = 0.992 ± 0.013.

There is not the slightest hint of an excess here. But what is most impressive is that the error is smaller, by more than a factor of two, than that of LEP-2! After the W boson mass, this is another precision measurement where the dirty hadron collider environment achieves better accuracy than an electron-positron machine. 
Yes, more of that :)   
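In numbers, the improvement reads (simple ratios of the quoted values):

```python
lep_err, atlas_err = 0.028, 0.013         # errors on Br(W→τν)/Br(W→μν)
print(round(lep_err / atlas_err, 1))      # → 2.2, "more than a factor of two"
print(round((0.992 - 1) / atlas_err, 1))  # → -0.6 sigma, fully SM-compatible
```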

Thanks to the ATLAS measurement, our knowledge of the W boson couplings has increased substantially, as shown in the picture (errors are 1 sigma): 


The current uncertainty is a few per mille. This is still worse than for the Z boson couplings to leptons, where the accuracy is better than a per mille, but we're getting there... Within the present accuracy, the W boson couplings to all leptons are consistent with the Standard Model prediction, and with lepton flavor universality in particular. Some tensions appearing in earlier global fits are all gone. The Standard Model wins again, nothing to see here, we can move on to the next anomaly. 

Wednesday, 17 June 2020

Hail the XENON excess

Where were we...  It's been years since particle physics last made an exciting headline. The result announced today by the XENON collaboration is a welcome breath of fresh air. It's too early to say whether it heralds a real breakthrough, or whether it's another bubble to be burst. But it certainly gives food for thought for particle theorists, enough to keep hep-ph going for the next few months.

The XENON collaboration was operating a 1-ton xenon detector in an underground lab in Italy. Originally, this line of experiments was devised to search for hypothetical heavy particles constituting dark matter, so-called WIMPs. For those they offer a basically background-free environment, where a signal of dark matter colliding with xenon nuclei would stand out like a lighthouse. However, all WIMP searches so far have returned zero, null, and nada. Partly out of boredom and despair, the xenon-based collaborations began thinking out of the box to find out what else their shiny instruments could be good for. One idea was to search for axions. These are hypothetical superlight and superweakly interacting particles, originally devised to plug a certain theoretical hole in the Standard Model of particle physics. If they exist, they should be copiously produced in the core of the Sun with energies of order a keV. This is too little to perceptibly knock an atomic nucleus, as a xenon nucleus weighs over a hundred GeV. However, many variants of the axion scenario, in particular the popular DFSZ model, predict axions interacting with electrons. Then a keV axion may occasionally hit the cloud of electrons orbiting xenon atoms, sending one to an excited level or ionizing the atom. These electron-recoil events can be identified principally by the ratio of ionization and scintillation signals, which is totally different than for WIMP-like nuclear recoils. This is no longer a background-free search, as radioactive isotopes present inside the detector may lead to the same signal. Therefore the collaboration has to search for a peak of electron-recoil events at keV energies.     

This is what they saw in the XENON1T data:
Energy spectrum of electron-recoil events measured by the XENON1T experiment. 
The expected background is approximately flat from 30 keV down to the detection threshold at 1 keV, below which it falls off abruptly. On the other hand, the data seem to show a signal component growing towards low energies, and possibly peaking at 1-2 keV. Concentrating on the 1-7 keV range (so with a bit of cherry-picking), 285 events are observed in the data compared to the 232 expected from the background-only fit. In purely statistical terms, this is a 3.5 sigma excess.
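The quoted significance can be roughly reproduced with Poisson counting (the collaboration's fit is more sophisticated; this is just the back of an envelope):

```python
import math

observed, expected = 285, 232   # events in the 1-7 keV window
z = (observed - expected) / math.sqrt(expected)
print(round(z, 1))  # → 3.5
```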

Assuming it's new physics, what does this mean? XENON shows that there is a flux of light relativistic particles arriving in their detector.  The peak of the excess corresponds to the temperature in the core of the Sun (15 million kelvin = 1.3 keV), so our star is a natural source of these particles (though at this point XENON cannot prove they arrive from the Sun). Furthermore, the particles must couple to electrons, because they can knock xenon's electrons off their orbits. Several theoretical models contain particles matching that description. Axions are the primary suspects, because today they are arguably the best motivated extension of the Standard Model. They are naturally light, because their mass is protected by built-in symmetries, and for the same reason their coupling to matter must be extremely suppressed.  For QCD axions the defining feature is their coupling to gluons, but in generic constructions one also finds the pseudoscalar-type interaction between the axion a and electrons e:
L ⊃ i g a (ē γ_5 e).
To explain the excess, one needs the coupling g to be of order 10^-12, which is totally natural in this context. But axions are by no means the only possibility. A related option is the dark photon, which differs from the axion by certain technicalities; in particular, it has spin 1 instead of spin 0. The palette of viable models is certainly much broader, with the details to be found soon on arXiv.           

A distinct avenue to explain the XENON excess is neutrinos. Here, the advantage is that we already know that neutrinos exist, and that the Sun emits some 10^38 of them every second. In fact, the background model used by XENON includes 220 neutrino-induced events in the 1-210 keV range.
However, in the standard picture, the interactions of neutrinos with electrons are too weak to explain the excess. To that end one has to either increase their flux (so fiddle with the solar model), or increase their interaction strength with matter (so go beyond the Standard Model). For example, neutrinos could interact with electrons via a photon intermediary. While neutrinos do not have an electric charge, uncharged particles can still couple to photons via dipole or higher-multipole moments. It is possible that new physics (possibly the same that generates the neutrino masses) also pumps up the neutrino magnetic dipole moment. This can be described in a model-independent way by adding a non-renormalizable dimension-7 operator to the Standard Model, e.g. (schematically)

L ⊃ (d/v^3) (L̄ H) σ_ρσ (H L) F^ρσ,

which after electroweak symmetry breaking reduces to a magnetic moment term (d/2v) ν̄ σ_ρσ ν F^ρσ.
To explain the XENON excess we need d of order 10^-6. That means the new physics responsible for the dipole moment must be just around the corner, below 100 TeV or so.
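Plugging in numbers, with the dimension-7 operator normalized as d/v^3 times a dipole structure (my normalization assumption; treat this as an order-of-magnitude sketch):

```python
# GeV units throughout; e = sqrt(4*pi*alpha); normalization is my assumption
d, v, m_e, e = 1e-6, 246.0, 0.511e-3, 0.303

# After EWSB the operator gives mu_nu ~ d/(2v); in Bohr magnetons e/(2 m_e):
mu_nu_over_mu_B = d * m_e / (v * e)
print(f"{mu_nu_over_mu_B:.0e}")  # → 7e-12, the ~1e-11 ballpark XENON needs

# If d ~ (v/Lambda)^3, the implied new-physics scale is
lam = v / d ** (1 / 3)
print(f"{lam / 1e3:.0f} TeV")    # → 25 TeV, indeed below 100 TeV
```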

How confident should we be that it's new physics? Experience has shown again and again that anomalies in new physics searches almost always have a mundane origin that does not involve exotic particles or interactions.  In this case, possible explanations are, in order of likelihood,  1) a small contamination of the detector, 2) some other instrumental effect that the collaboration hasn't thought of, 3) the ghost of Roberto Peccei, 4) a genuine signal of new physics. In fact, the collaboration itself is hedging its bets on the first option, as they cannot exclude the presence of a small amount of tritium in the detector, which would produce a signal similar to the observed excess. Moreover, there are a few orange flags for the new physics interpretation:
  1.  The simplest models explaining the excess are excluded by astrophysical observations. If axions can be produced in the Sun at the rate suggested by the XENON result, they can be produced at even larger rates in hotter stars, e.g. in red giants or white dwarfs. This would lead to excessive cooling of these stars, in conflict with observations. The upper limit on the axion-electron coupling g from red giants is 3*10^-13, which is an order of magnitude less than what is needed for the XENON excess.  The neutrino magnetic moment explanation faces a similar difficulty. Of course, astrophysical limits reside in a different epistemological reality; it is not unheard of for them to be relaxed by an order of magnitude or to disappear completely. But certainly this is something to worry about.  
  2.  At a more psychological level, a small excess over a large background near a detection threshold... sounds familiar. We've seen that before in the case of the DAMA and CoGeNT dark matter experiments, and it didn't end well.     
  3. The bump is at 1.5 keV, which is *twice* 750 eV.  
So, as usual, more data, time, and patience are needed to verify the new physics hypothesis. On the experimental side, the near future looks bright, with the XENONnT, LUX-ZEPLIN, and PandaX-4T experiments all jostling for position to confirm the excess and earn eternal glory. On the theoretical side, the big question is whether the stellar cooling constraints can be avoided, without too many epicycles. It would also be good to know whether the particle responsible for the XENON excess could be related to dark matter and/or to other existing anomalies, in particular the B-meson ones. For answers, tune in to arXiv, from tomorrow on. 

Wednesday, 20 June 2018

Both g-2 anomalies

Two months ago an experiment in Berkeley announced a new ultra-precise measurement of the fine structure constant α using interferometry techniques. This wasn't much noticed because the paper is not on arXiv, and moreover this kind of research is filed under metrology, which is easily confused with meteorology. So it's worth commenting on why precision measurements of α could be interesting for particle physics. What the Berkeley group really did was to measure the mass of the cesium-133 atom, achieving a relative accuracy of 4*10^-10, that is 0.4 parts per billion (ppb). With that result in hand, α can be determined after a cavalier rewriting of the high-school formula for the Rydberg constant:

Ry = α^2 m_e c^2/2   ⇒   α^2 = (2 Ry/c^2) × (m_Cs/m_e) × (1/m_Cs).
Everybody knows the first 3 digits of the Rydberg constant, Ry≈13.6 eV, but actually it is experimentally known with the fantastic accuracy of 0.006 ppb, and the electron-to-atom mass ratio has also been determined precisely. Thus the measurement of the cesium mass can be translated into a 0.2 ppb measurement of the fine structure constant: 1/α=137.035999046(27).

You may think that this kind of result could appeal only to a Pythonesque chartered accountant. But you would be wrong. First of all, the new result excludes α = 1/137 at 1 million sigma, dealing a mortal blow to the field of epistemological numerology. Perhaps more importantly, the result is relevant for testing the Standard Model. One place where precise knowledge of α is essential is the calculation of the magnetic moment of the electron. Recall that the g-factor is defined as the proportionality constant between the magnetic moment and the angular momentum. For the electron we have

g_e = 2 [ 1 + α/(2π) + ... ].
Experimentally, ge is one of the most precisely determined quantities in physics,  with the most recent measurement quoting ae = 0.00115965218073(28), that is 0.0001 ppb accuracy on ge, or 0.2 ppb accuracy on ae. In the Standard Model, ge is calculable as a function of α and other parameters. In the classical approximation ge=2, while the one-loop correction proportional to the first power of α was already known in prehistoric times thanks to Schwinger. The dots above summarize decades of subsequent calculations, which now include O(α^5) terms, that is 5-loop QED contributions! Thanks to these heroic efforts (depicted in the film For a Few Diagrams More - a sequel to Kurosawa's Seven Samurai), the main theoretical uncertainty in the Standard Model prediction of ge is due to the experimental error on the value of α. The Berkeley measurement allows one to reduce the relative theoretical error on ae down to 0.2 ppb:  ae = 0.00115965218161(23), which matches in magnitude the experimental error and improves by a factor of 3 on the previous prediction based on the α measurement with rubidium atoms.

At the spiritual level, the comparison between theory and experiment provides an impressive validation of quantum field theory techniques up to the 13th significant digit - an unimaginable theoretical accuracy in any other branch of science. More practically, it also provides a powerful test of the Standard Model. New particles coupled to the electron may contribute to the same loop diagrams from which ge is calculated, and could shift the observed value of ae away from the Standard Model prediction. In many models, corrections to the electron and muon magnetic moments are correlated. The latter famously deviates from the Standard Model prediction by 3.5 to 4 sigma, depending on who counts the uncertainties. Actually, if you bother to carefully compare the experimental and theoretical values of ae beyond the 10th significant digit you can see that they are also discrepant, this time at the 2.5 sigma level. So now we have two g-2 anomalies! In a picture, the situation can be summarized as follows:
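Here is the arithmetic behind that statement, with the sign made explicit (naive quadrature of the errors, as before):

```python
import math

# measured vs predicted anomalous magnetic moment of the electron
a_exp, err_exp = 0.00115965218073, 0.00000000000028
a_th,  err_th  = 0.00115965218161, 0.00000000000023

z = (a_exp - a_th) / math.sqrt(err_exp**2 + err_th**2)
print(round(z, 1))  # → -2.4: the measurement sits *below* the prediction

# sanity check: Schwinger's one-loop term alone already lands close to a_e
print(f"{1 / (2 * math.pi * 137.035999046):.7f}")  # → 0.0011614
```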

If you're a member of the Holy Church of Five Sigma you can almost preach an unambiguous discovery of physics beyond the Standard Model. However, for most of us this is not the case yet. First, there is still some debate about the theoretical uncertainties entering the muon g-2 prediction. Second, while it is quite easy to fit each of the two anomalies separately, there seems to be no appealing model to fit both of them at the same time.  Take for example the very popular toy model with a new massive spin-1 Z' boson (aka the dark photon) kinetically mixed with the ordinary photon. In this case Z' has, much like the ordinary photon, vector-like and universal couplings to electrons and muons. But this leads to a positive contribution to g-2, and it does not fit the ae measurement well, which favors a new negative contribution. In fact, the ae measurement provides the most stringent constraint in part of the parameter space of the dark photon model. Conversely, a Z' boson with purely axial couplings to matter does not fit the data either, as it gives a negative contribution to g-2, thus making the muon g-2 anomaly worse. What might work is a hybrid model with a light Z' boson having lepton-flavor non-universal interactions: a vector coupling to muons and a somewhat smaller axial coupling to electrons. But constructing a consistent and realistic model along these lines is a challenge because of other experimental constraints (e.g. from the non-observation of μ→eγ decays). Some food for thought can be found in this paper, but I'm not sure if a sensible model exists at the moment. If you know one you are welcome to drop a comment here or a paper on arXiv.

More excitement on this front is in store. The muon g-2 experiment at Fermilab should soon deliver its first results, which may confirm or disprove the muon anomaly. Further progress with the electron g-2 and fine-structure constant measurements is also expected in the near future. The biggest worry is that, if the accuracy improves by another two orders of magnitude, we will need to calculate six-loop QED corrections... 

Tuesday, 5 June 2018

Can MiniBooNE be right?

The experimental situation in neutrino physics is confusing. On the one hand, a host of neutrino experiments has established a consistent picture where the neutrino mass eigenstates are mixtures of the 3 Standard Model neutrino flavors νe, νμ, ντ. The measured mass differences between the eigenstates are Δm12^2 ≈ 7.5*10^-5 eV^2 and Δm13^2 ≈ 2.5*10^-3 eV^2, suggesting that all Standard Model neutrinos have masses below 0.1 eV. That is well in line with cosmological observations, which find that the radiation budget of the early universe is consistent with the existence of exactly 3 neutrinos with the sum of the masses less than 0.2 eV. On the other hand, several rogue experiments refuse to conform to the standard 3-flavor picture. The most severe anomaly is the appearance of electron neutrinos in a muon neutrino beam observed by the LSND and MiniBooNE experiments.


This story begins in the previous century with the LSND experiment in Los Alamos, which claimed to observe νμ → νe antineutrino oscillations with 3.8σ significance.  This result was considered controversial from the very beginning due to limitations of the experimental set-up. Moreover, it was inconsistent with the standard 3-flavor picture which, given the masses and mixing angles measured by other experiments, predicted that νμ → νe oscillations should be unobservable in short-baseline (L ≲ 1 km) experiments. The MiniBooNE experiment at Fermilab was conceived to conclusively prove or disprove the LSND anomaly. To this end, a beam of mostly muon neutrinos or antineutrinos with energies E ~ 1 GeV is sent to a detector at a distance L ~ 500 meters away. In general, neutrinos can change their flavor with a probability oscillating as P ~ sin^2(Δm^2 L/4E). If the LSND excess is really due to neutrino oscillations, one expects to observe electron neutrino appearance in the MiniBooNE detector, given that L/E is similar in the two experiments. Originally, MiniBooNE was hoping to see a smoking gun in the form of an electron neutrino excess oscillating as a function of L/E, that is, peaking at intermediate energies and then decreasing towards lower energies (possibly with several wiggles). That didn't happen. Instead, MiniBooNE finds an excess increasing towards low energies with a shape similar to the backgrounds. Thus the confusion lingers on: the LSND anomaly has neither been killed nor robustly confirmed.     
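The L/E matching can be made quantitative with the two-flavor oscillation formula (a toy calculation; 1.27 is the usual unit-conversion factor, and the parameters are in the ballpark preferred by the appearance fits, Δm^2 ~ 0.5 eV^2 and sin^2(2θ) ~ 0.01):

```python
import math

def p_appear(sin2_2theta, dm2_ev2, L_km, E_gev):
    # P(nu_mu -> nu_e) = sin^2(2 theta) * sin^2(1.27 * dm^2 * L / E)
    return sin2_2theta * math.sin(1.27 * dm2_ev2 * L_km / E_gev) ** 2

# MiniBooNE: L ~ 0.5 km at E ~ 1 GeV; LSND: L ~ 0.03 km at E ~ 0.04 GeV
print(f"{p_appear(0.01, 0.5, 0.5, 1.0):.1e}")    # → 9.7e-04
print(f"{p_appear(0.01, 0.5, 0.03, 0.04):.1e}")  # → 2.1e-03
```

Similar L/E, hence similar per-mille-level appearance probability in both experiments.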

In spite of these doubts, the LSND and MiniBooNE anomalies continue to arouse interest. This is understandable: as the results do not fit the 3-flavor framework, if confirmed they would prove the existence of new physics beyond the Standard Model. The simplest fix would be to introduce a sterile neutrino νs with mass in the eV ballpark, in which case MiniBooNE would be observing the νμ → νs → νe oscillation chain. With the recent MiniBooNE update the evidence for electron neutrino appearance increased to 4.8σ, which has stirred some commotion on Twitter and in the blogosphere. However, I find the excitement a bit misplaced. The anomaly is not really new: similar results showing a 3.8σ excess of νe-like events were already published in 2012.  The increase of the significance is hardly relevant: at this point we know anyway that the excess is not a statistical fluke, while a systematic effect due to underestimated backgrounds would also lead to a growing anomaly. If anything, there are now fewer reasons than in 2012 to believe in the sterile neutrino origin of the MiniBooNE anomaly, as I will argue in the following.

What has changed since 2012? First, there are new constraints on νe appearance from the OPERA experiment (yes, this OPERA), which did not see any excess νe in the CERN-to-Gran-Sasso νμ beam. This excludes a large chunk of the relevant parameter space corresponding to large mixing angles between the active and sterile neutrinos. From this point of view, the MiniBooNE update actually puts more stress on the sterile neutrino interpretation by slightly shifting the preferred region towards larger mixing angles... Nevertheless, a not-too-horrible fit to all appearance experiments can still be achieved in the region with Δm^2 ~ 0.5 eV^2 and the mixing angle sin^2(2θ) of order 0.01.

Next, the cosmological constraints have become more stringent. The CMB observations by the Planck satellite do not leave room for an additional neutrino species in the early universe. But for the parameters preferred by LSND and MiniBooNE, the sterile neutrino would be abundantly produced in the hot primordial plasma, thus violating the Planck constraints. To avoid it, theorists need to deploy a battery of  tricks (for example, large sterile-neutrino self-interactions), which makes realistic models rather baroque.
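To put a rough number on the cosmological tension: the sketch below assumes the Planck 2018 value Neff ≈ 2.99 ± 0.17 for the effective number of neutrino species, and that an eV-scale sterile neutrino mixing as strongly as LSND/MiniBooNE require would thermalize fully, adding ΔNeff ≈ 1. Both inputs are approximate values I am supplying for illustration, not a real cosmology fit:

```python
# Back-of-the-envelope check (assumed numbers, not a global fit):
# Planck 2018 reports Neff = 2.99 +/- 0.17, consistent with the 3 active
# species; a fully thermalized sterile neutrino would add Delta Neff ~ 1.
neff_measured, sigma = 2.99, 0.17
delta_neff_sterile = 1.0                      # fully thermalized 4th species

tension_sigma = delta_neff_sterile / sigma    # how far off Planck this lands
print(f"~{tension_sigma:.1f} sigma tension")  # ~5.9 sigma tension
```

This is why model builders must suppress the sterile neutrino's thermalization (e.g. via large self-interactions) rather than simply accept the extra species.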

But the killer punch is delivered by disappearance analyses. Benjamin Franklin famously said that only two things in this world were certain: death and probability conservation. Thus whenever an electron neutrino appears in a νμ beam, a muon neutrino must disappear. However, the latter process is severely constrained by long-baseline neutrino experiments, and recently the limits have been further strengthened thanks to the MINOS and IceCube collaborations. A recent combination of the existing disappearance results is available in this paper. In the 3+1 flavor scheme, the probability of a muon neutrino transforming into an electron one in a short-baseline experiment is

P(νμ → νe) ≈ 4 |Ue4|^2 |Uμ4|^2 sin^2(Δm^2 L/4E),

where U is the 4x4 neutrino mixing matrix. The Uμ4 matrix element also controls the νμ survival probability,

P(νμ → νμ) ≈ 1 − 4 |Uμ4|^2 (1 − |Uμ4|^2) sin^2(Δm^2 L/4E).
The νμ disappearance data from MINOS and IceCube imply |Uμ4| ≲ 0.1, while |Ue4| ≲ 0.25 from solar neutrino observations. All in all, the disappearance results imply that the effective mixing angle sin^2(2θ) controlling the νμ → νs → νe oscillation must be much smaller than the ~0.01 required to fit the MiniBooNE anomaly. The disagreement between the appearance and disappearance data had already existed before, and was actually made worse by the MiniBooNE update.
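Plugging the disappearance bounds into the appearance amplitude makes the tension explicit. A back-of-the-envelope check, using the rounded bounds quoted above (illustrative values, not a global fit):

```python
# In the 3+1 scheme the effective appearance angle is
# sin^2(2θ) = 4 |Ue4|^2 |Uμ4|^2, so the disappearance bounds
# cap how large the appearance signal can possibly be.
Umu4_max = 0.1    # |Uμ4| bound from νμ disappearance (MINOS, IceCube)
Ue4_max = 0.25    # |Ue4| bound from solar neutrino data

sin2_2theta_max = 4 * Ue4_max**2 * Umu4_max**2
print(round(sin2_2theta_max, 4))  # 0.0025 -- well below the ~0.01 MiniBooNE wants
```

Even saturating both bounds simultaneously, the appearance amplitude falls short of the MiniBooNE-preferred value by a factor of a few, which is the quantitative core of the appearance/disappearance conflict.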

So the hypothesis of a 4th sterile neutrino does not stand up to scrutiny as an explanation of the MiniBooNE anomaly. That does not mean there is no other possible explanation (more sterile neutrinos? non-standard interactions? neutrino decays?). However, any realistic model will have to delve deep into the crazy side in order to satisfy the constraints from other neutrino experiments, flavor physics, and cosmology. Fortunately, the current confusing situation should not last forever. The MiniBooNE photon background from π0 decays may be clarified by the ongoing MicroBooNE experiment. On the timescale of a few years the controversy should be closed by the SBN program in Fermilab, which will add one near and one far detector to the MicroBooNE beamline. Until then... years of painful experience have taught us to assign a high prior to the Standard Model hypothesis. Currently, by far the most plausible explanation of the existing data is an experimental error on the part of the MiniBooNE collaboration.