Saturday, 1 August 2020

Death of a forgotten anomaly

Anomalies come with a big splash, but often go down quietly. A recent ATLAS measurement, just posted on arXiv, killed a long-standing and by now almost forgotten anomaly from the LEP collider.  LEP was an electron-positron collider operating some time in the late Holocene. Its most important legacy is the very precise measurements of the interaction strength between the Z boson and matter, which to this day are unmatched in accuracy. In the second stage of the experiment, called LEP-2, the collision energy was gradually raised to about 200 GeV, so that pairs of W bosons could be produced. The experiment was able to measure the branching fractions for W decays into electrons, muons, and tau leptons.  These are precisely predicted by the Standard Model: they should be equal to 10.8%, independently of the flavor of the lepton (up to a very small correction due to the lepton masses).  However, LEP-2 found 

Br(W → τν)/Br(W → eν) = 1.070 ± 0.029,     Br(W → τν)/Br(W → μν) = 1.076 ± 0.028.

While the decays to electrons and muons conformed very well to the Standard Model predictions, 
there was a 2.8 sigma excess in the tau channel. The question was whether it was simply a statistical fluctuation or new physics violating the Standard Model's sacred principle of lepton flavor universality. The ratio Br(W → τν)/Br(W → eν) was later measured at the Tevatron, without finding any excess, although the errors were larger. More recently, there have been hints of large lepton flavor universality violation in B-meson decays, so it was not completely crazy to think that the LEP-2 excess was part of the same story.
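Incidentally, the 10.8% quoted above can be understood by simple channel counting: the W decays to 3 lepton pairs and to 2 quark doublets in 3 colors each, with the hadronic channels enhanced by a QCD correction, so that roughly

$$\mathrm{Br}(W \to \ell\nu) \simeq \frac{1}{3 + 2\cdot 3\,(1 + \alpha_s/\pi)} \approx 10.8\%.$$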

The solution came 20 years after LEP-2: there is no large violation of lepton flavor universality in W boson decays. The LHC has already produced hundreds of millions of top quarks, and each of them (as far as we know) creates a W boson in the process of its disintegration. ATLAS used this big sample to compare the W boson decay rate to taus and to muons. Their result:

Br(W → τν)/Br(W → μν) = 0.992 ± 0.013.

There is not the slightest hint of an excess here. But what is most impressive is that the error is smaller, by more than a factor of two, than in LEP-2! After the W boson mass, this is another precision measurement where a dirty hadron collider environment achieves a better accuracy than an electron-positron machine.
Yes, more of that :)   

Thanks to the ATLAS measurement, our knowledge of the W boson couplings has increased substantially, as shown in the picture (errors are 1 sigma): 


The current uncertainty is a few per mille. This is still worse than for the Z boson couplings to leptons, for which the accuracy is better than a per mille, but we're getting there... Within the present accuracy, the W boson couplings to all leptons are consistent with the Standard Model prediction, and with lepton flavor universality in particular. Some tensions appearing in earlier global fits are all gone. The Standard Model wins again, nothing to see here, we can move on to the next anomaly.

Wednesday, 17 June 2020

Hail the XENON excess

Where were we... It's been years since particle physics last made an exciting headline. The result announced today by the XENON collaboration is a welcome breath of fresh air. It's too early to say whether it heralds a real breakthrough, or whether it's another bubble to be burst. But it certainly gives food for thought to particle theorists, enough to keep hep-ph going for the next few months.

The XENON collaboration was operating a 1-ton xenon detector in an underground lab in Italy. Originally, this line of experiments was devised to search for hypothetical heavy particles constituting dark matter, so-called WIMPs. For that purpose, xenon detectors offer a basically background-free environment, where a signal of dark matter colliding with xenon nuclei would stand out like a lighthouse. However, all WIMP searches so far have returned zero, null, and nada. Partly out of boredom and despair, the xenon-based collaborations began thinking out of the box to find out what else their shiny instruments could be good for. One idea was to search for axions. These are hypothetical superlight and superweakly interacting particles, originally devised to plug a certain theoretical hole in the Standard Model of particle physics. If they exist, they should be copiously produced in the core of the Sun with energies of order a keV. This is too little to perceptibly knock an atomic nucleus, as xenon weighs over a hundred GeV. However, many variants of the axion scenario, in particular the popular DFSZ model, predict axions interacting with electrons. Then a keV axion may occasionally hit the cloud of electrons orbiting xenon atoms, sending one to an excited level or ionizing the atom. These electron-recoil events can be identified principally by the ratio of the ionization and scintillation signals, which is totally different from that of WIMP-like nuclear recoils. This is no longer a background-free search, as radioactive isotopes present inside the detector may lead to the same signal. Therefore the collaboration has to search for a peak of electron-recoil events at keV energies.

This is what they saw in the XENON1T data:
[Plot: energy spectrum of electron-recoil events measured by the XENON1T experiment.]
The expected background is approximately flat from 30 keV down to the detection threshold at 1 keV, below which it falls off abruptly. On the other hand, the data seem to show a signal component growing towards low energies, and possibly peaking at 1-2 keV. Concentrating on the 1-7 keV range (so with a bit of cherry-picking), 285 events are observed in the data, compared to an expected 232 events from the background-only fit. In purely statistical terms, this is a 3.5 sigma excess.
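A quick back-of-the-envelope check of that number, treating it as a simple counting experiment with negligible background uncertainty:

$$\frac{285 - 232}{\sqrt{232}} \approx \frac{53}{15.2} \approx 3.5.$$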

Assuming it's new physics, what does this mean? XENON shows that there is a flux of light relativistic particles arriving at their detector. The peak of the excess corresponds to the temperature in the core of the Sun (15 million kelvin = 1.3 keV), so our star is a natural source of these particles (but at this point XENON cannot prove they arrive from the Sun). Furthermore, the particles must couple to electrons, because they can knock xenon's electrons off their orbits. Several theoretical models contain particles matching that description. Axions are the primary suspects, because today they are arguably the best motivated extension of the Standard Model. They are naturally light, because their mass is protected by built-in symmetries, and for the same reason their coupling to matter must be extremely suppressed. For QCD axions the defining feature is their coupling to gluons, but in generic constructions one also finds the pseudoscalar-type interaction between the axion a and electrons e:
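In a common convention, with g denoting the dimensionless axion-electron coupling, it reads

$$\mathcal{L} \supset i\, g\, a\, \bar{e}\, \gamma_5\, e.$$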

To explain the excess, one needs the coupling g to be of order 10^-12, which is totally natural in this context. But axions are by no means the only possibility. A related option is the dark photon, which differs from the axion by certain technicalities, in particular it has spin 1 instead of spin 0. The palette of viable models is certainly much broader, with the details to be found soon on arXiv.

A distinct avenue to explain the XENON excess is neutrinos. Here, the advantage is that we already know that neutrinos exist, and that the Sun emits some 10^38 of them every second. In fact, the background model used by XENON includes 220 neutrino-induced events in the 1-210 keV range.
However, in the standard picture, the interactions of neutrinos with electrons are too weak to explain the excess. To that end one has either to increase their flux (that is, fiddle with the solar model), or to increase their interaction strength with matter (that is, go beyond the Standard Model). For example, neutrinos could interact with electrons via a photon intermediary. While neutrinos do not have an electric charge, uncharged particles can still couple to photons via dipole or higher-multipole moments. It is possible that new physics (possibly the same that generates the neutrino masses) also pumps up the neutrino magnetic dipole moment. This can be described in a model-independent way by adding a non-renormalizable dimension-7 operator to the Standard Model, e.g.
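one possible form (for Majorana neutrinos; here L is the lepton doublet, H the Higgs doublet, ε the antisymmetric SU(2) tensor, Bμν the hypercharge field strength, v ≈ 246 GeV, and the normalization of the dimensionless coefficient d is a matter of convention):

$$\mathcal{L} \supset \frac{d}{v^3}\, \big(\overline{L^c}\, \epsilon\, H\big)\, \sigma_{\mu\nu}\, \big(H^T \epsilon\, L\big)\, B^{\mu\nu} \; + \; \mathrm{h.c.}$$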
   
To explain the XENON excess we need d of order 10^-6. That means the new physics responsible for the dipole moment must be just around the corner, below 100 TeV or so.

How confident should we be that it's new physics? Experience has shown again and again that anomalies in new physics searches have, with very large confidence, a mundane origin that does not involve exotic particles or interactions. In this case, possible explanations are, in order of likelihood, 1) small contamination of the detector, 2) some other instrumental effect that the collaboration hasn't thought of, 3) the ghost of Roberto Peccei, 4) a genuine signal of new physics. In fact, the collaboration itself is hedging towards the first option, as they cannot exclude the presence of a small amount of tritium in the detector, which would produce a signal similar to the observed excess. Moreover, there are a few orange flags for the new physics interpretation:
  1. The simplest models explaining the excess are excluded by astrophysical observations. If axions can be produced in the Sun at the rate suggested by the XENON result, they can be produced at even larger rates in hotter stars, e.g. in red giants or white dwarfs. This would lead to excessive cooling of these stars, in conflict with observations. The upper limit on the axion-electron coupling g from red giants is 3*10^-13, which is an order of magnitude less than what is needed for the XENON excess. The neutrino magnetic moment explanation faces a similar difficulty. Of course, astrophysical limits reside in a different epistemological reality; it is not unheard of that they are relaxed by an order of magnitude or disappear completely. But certainly this is something to worry about.
  2. At a more psychological level, a small excess over a large background near a detection threshold... sounds familiar. We've seen that before in the case of the DAMA and CoGeNT dark matter experiments, and it didn't end well.
  3. The bump is at 1.5 keV, which is *twice* 750 eV.  
So, as usual, more data, time, and patience are needed to verify the new physics hypothesis. On the experimental side, the near future looks promising, with the XENONnT, LUX-ZEPLIN, and PandaX-4T experiments all jostling for position to confirm the excess and earn eternal glory. On the theoretical side, the big question is whether the stellar cooling constraints can be avoided without too many epicycles. It would also be good to know whether the particle responsible for the XENON excess could be related to dark matter and/or to other existing anomalies, in particular the B-meson ones. For answers, tune in to arXiv, from tomorrow on.

Wednesday, 20 June 2018

Both g-2 anomalies

Two months ago an experiment in Berkeley announced a new ultra-precise measurement of the fine structure constant α using interferometry techniques. This wasn't much noticed because the paper is not on arXiv, and moreover this kind of research is filed under metrology, which is easily confused with meteorology. So it's worth commenting on why precision measurements of α could be interesting for particle physics. What the Berkeley group really did was to measure the mass of the cesium-133 atom, achieving a relative accuracy of 4*10^-10, that is 0.4 parts per billion (ppb). With that result in hand, α can be determined after a cavalier rewriting of the high-school formula for the Rydberg constant:
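Schematically, using the fact that atom interferometry directly measures the ratio h/mCs, and writing R∞ for the Rydberg constant in its inverse-wavelength form:

$$\alpha^2 = \frac{2 R_\infty}{c}\, \frac{m_{\rm Cs}}{m_e}\, \frac{h}{m_{\rm Cs}}.$$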
Everybody knows the first 3 digits of the Rydberg constant, Ry≈13.6 eV, but actually it is experimentally known with the fantastic accuracy of 0.006 ppb, and the electron-to-atom mass ratio has also been determined precisely. Thus the measurement of the cesium mass can be translated into a 0.2 ppb measurement of the fine structure constant: 1/α=137.035999046(27).

You may think that this kind of result could appeal only to a Pythonesque chartered accountant. But you would be wrong. First of all, the new result excludes α = 1/137 at 1 million sigma, dealing a mortal blow to the field of epistemological numerology. Perhaps more importantly, the result is relevant for testing the Standard Model. One place where precise knowledge of α is essential is the calculation of the magnetic moment of the electron. Recall that the g-factor is defined as the proportionality constant between the magnetic moment and the angular momentum. For the electron we have
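schematically, with the one-loop Schwinger term shown explicitly and higher orders hidden in the dots,

$$g_e = 2\left(1 + \frac{\alpha}{2\pi} + \dots\right) \equiv 2\,(1 + a_e).$$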
Experimentally, ge is one of the most precisely determined quantities in physics, with the most recent measurement quoting ae = 0.00115965218073(28), that is 0.0001 ppb accuracy on ge, or 0.2 ppb accuracy on ae. In the Standard Model, ge is calculable as a function of α and other parameters. In the classical approximation ge=2, while the one-loop correction proportional to the first power of α was already known in prehistoric times thanks to Schwinger. The dots above summarize decades of subsequent calculations, which now include O(α^5) terms, that is 5-loop QED contributions! Thanks to these heroic efforts (depicted in the film For a Few Diagrams More - a sequel to Kurosawa's Seven Samurai), the main theoretical uncertainty in the Standard Model prediction of ge is due to the experimental error on the value of α. The Berkeley measurement allows one to reduce the relative theoretical error on ae down to 0.2 ppb: ae = 0.00115965218161(23), which matches in magnitude the experimental error and improves by a factor of 3 on the previous prediction based on the α measurement with rubidium atoms.

At the spiritual level, the comparison between theory and experiment provides an impressive validation of quantum field theory techniques up to the 13th significant digit - an unimaginable theoretical accuracy in other branches of science. More practically, it also provides a powerful test of the Standard Model. New particles coupled to the electron may contribute to the same loop diagrams from which ge is calculated, and could shift the observed value of ae away from the Standard Model prediction. In many models, corrections to the electron and muon magnetic moments are correlated. The latter famously deviates from the Standard Model prediction by 3.5 to 4 sigma, depending on who counts the uncertainties. Actually, if you bother to look carefully at the experimental and theoretical values of ae beyond the 10th significant digit, you can see that they are also discrepant, this time at the 2.5 sigma level. So now we have two g-2 anomalies! In a picture, the situation can be summarized as follows:

If you're a member of the Holy Church of Five Sigma you can almost preach an unambiguous discovery of physics beyond the Standard Model. However, for most of us this is not the case yet. First, there is still some debate about the theoretical uncertainties entering the muon g-2 prediction. Second, while it is quite easy to fit each of the two anomalies separately, there seems to be no appealing model that fits both of them at the same time. Take for example the very popular toy model with a new massive spin-1 Z' boson (aka the dark photon) kinetically mixed with the ordinary photon. In this case the Z' has, much like the ordinary photon, vector-like and universal couplings to electrons and muons. But this leads to a positive contribution to g-2, and it does not fit well the ae measurement, which favors a negative new contribution. In fact, the ae measurement provides the most stringent constraint in part of the parameter space of the dark photon model. Conversely, a Z' boson with purely axial couplings to matter does not fit the data either, as it gives a negative contribution to g-2, thus making the muon g-2 anomaly worse. What might work is a hybrid model with a light Z' boson having lepton flavor non-universal interactions: a vector coupling to muons and a somewhat smaller axial coupling to electrons. But constructing a consistent and realistic model along these lines is a challenge because of other experimental constraints (e.g. from the lack of observation of μ→eγ decays). Some food for thought can be found in this paper, but I'm not sure if a sensible model exists at the moment. If you know one you are welcome to drop a comment here or a paper on arXiv.

More excitement on this front is in store. The muon g-2 experiment at Fermilab should soon deliver its first results, which may confirm or disprove the muon anomaly. Further progress with the electron g-2 and fine-structure constant measurements is also expected in the near future. The biggest worry is that, if the accuracy improves by another two orders of magnitude, we will need to calculate six-loop QED corrections...

Tuesday, 5 June 2018

Can MiniBooNE be right?

The experimental situation in neutrino physics is confusing. On one hand, a host of neutrino experiments has established a consistent picture where the neutrino mass eigenstates are mixtures of the 3 Standard Model neutrino flavors νe, νμ, ντ. The measured mass differences between the eigenstates are Δm12^2 ≈ 7.5*10^-5 eV^2 and Δm13^2 ≈ 2.5*10^-3 eV^2, suggesting that all Standard Model neutrinos have masses below 0.1 eV. That is well in line with cosmological observations, which find that the radiation budget of the early universe is consistent with the existence of exactly 3 neutrinos with the sum of the masses less than 0.2 eV. On the other hand, several rogue experiments refuse to conform to the standard 3-flavor picture. The most severe anomaly is the appearance of electron neutrinos in a muon neutrino beam observed by the LSND and MiniBooNE experiments.


This story begins in the previous century with the LSND experiment in Los Alamos, which claimed to observe νμ→νe oscillations of antineutrinos with 3.8σ significance. This result was considered controversial from the very beginning due to limitations of the experimental set-up. Moreover, it was inconsistent with the standard 3-flavor picture which, given the masses and mixing angles measured by other experiments, predicted that νμ→νe oscillations should be unobservable in short-baseline (L ≲ 1 km) experiments. The MiniBooNE experiment at Fermilab was conceived to conclusively prove or disprove the LSND anomaly. To this end, a beam of mostly muon neutrinos or antineutrinos with energies E ~ 1 GeV is sent to a detector at a distance L ~ 500 meters away. In general, neutrinos can change their flavor with the probability oscillating as P ~ sin^2(Δm^2 L/4E). If the LSND excess is really due to neutrino oscillations, one expects to observe electron neutrino appearance in the MiniBooNE detector, given that L/E is similar in the two experiments. Originally, MiniBooNE was hoping to see a smoking gun in the form of an electron neutrino excess oscillating as a function of L/E, that is peaking at intermediate energies and then decreasing towards lower energies (possibly with several wiggles). That didn't happen. Instead, MiniBooNE finds an excess increasing towards low energies with a similar shape as the backgrounds. Thus the confusion lingers on: the LSND anomaly has neither been killed nor robustly confirmed.

In spite of these doubts, the LSND and MiniBooNE anomalies continue to arouse interest. This is understandable: as the results do not fit the 3-flavor framework, if confirmed they would prove the existence of new physics beyond the Standard Model. The simplest fix would be to introduce a sterile neutrino νs with a mass in the eV ballpark, in which case MiniBooNE would be observing the νμ→νs→νe oscillation chain. With the recent MiniBooNE update the evidence for electron neutrino appearance increased to 4.8σ, which has stirred some commotion on Twitter and in the blogosphere. However, I find the excitement a bit misplaced. The anomaly is not really new: similar results showing a 3.8σ excess of νe-like events were already published in 2012. The increase of the significance is hardly relevant: at this point we know anyway that the excess is not a statistical fluke, while a systematic effect due to underestimated backgrounds would also lead to a growing anomaly. If anything, there are now fewer reasons than in 2012 to believe in the sterile neutrino origin of the MiniBooNE anomaly, as I will argue in the following.

What has changed since 2012? First, there are new constraints on νe appearance from the OPERA experiment (yes, this OPERA), which did not see any excess of νe in the CERN-to-Gran-Sasso νμ beam. This excludes a large chunk of the relevant parameter space corresponding to large mixing angles between the active and sterile neutrinos. From this point of view, the MiniBooNE update actually puts more stress on the sterile neutrino interpretation by slightly shifting the preferred region towards larger mixing angles... Nevertheless, a not-too-horrible fit to all appearance experiments can still be achieved in the region with Δm^2 ~ 0.5 eV^2 and the mixing angle sin^2(2θ) of order 0.01.
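For a rough feel of what such a fit implies, here is a minimal sketch (in Python) of the two-flavor short-baseline appearance probability, evaluated at the MiniBooNE-like baseline and energy mentioned earlier; the parameter values are only illustrative, and the factor 1.27 is the standard unit conversion for Δm² in eV², L in km, and E in GeV:

    import math

    def p_appear(sin2_2theta, dm2_eV2, L_km, E_GeV):
        # Two-flavor short-baseline appearance probability:
        # P = sin^2(2θ) * sin^2(1.27 * Δm²[eV²] * L[km] / E[GeV])
        return sin2_2theta * math.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

    # Illustrative numbers from the text: Δm² ~ 0.5 eV², sin²(2θ) ~ 0.01,
    # MiniBooNE-like L ~ 0.5 km and E ~ 1 GeV.
    print(p_appear(0.01, 0.5, 0.5, 1.0))   # ≈ 1e-3, a per-mille appearance probability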

Next, the cosmological constraints have become more stringent. The CMB observations by the Planck satellite do not leave room for an additional neutrino species in the early universe. But for the parameters preferred by LSND and MiniBooNE, the sterile neutrino would be abundantly produced in the hot primordial plasma, thus violating the Planck constraints. To avoid this, theorists need to deploy a battery of tricks (for example, large sterile-neutrino self-interactions), which makes realistic models rather baroque.

But the killer punch is delivered by disappearance analyses. Benjamin Franklin famously said that only two things in this world were certain: death and probability conservation. Thus whenever an electron neutrino appears in a νμ beam, a muon neutrino must disappear. However, the latter process is severely constrained by long-baseline neutrino experiments, and recently the limits have been further strengthened thanks to the MINOS and IceCube collaborations. A recent combination of the existing disappearance results is available in this paper.  In the 3+1 flavor scheme, the probability of a muon neutrino transforming into an electron  one in a short-baseline experiment is
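approximately, keeping only the dominant Δm41² splitting,

$$P(\nu_\mu \to \nu_e) \simeq \sin^2 2\theta_{e\mu}\, \sin^2\!\left(\frac{\Delta m_{41}^2 L}{4E}\right), \qquad \sin^2 2\theta_{e\mu} \equiv 4\, |U_{e4}|^2 |U_{\mu 4}|^2,$$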
where U is the 4x4 neutrino mixing matrix. The Uμ4 matrix element also controls the νμ survival probability,
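which in the same short-baseline approximation is

$$P(\nu_\mu \to \nu_\mu) \simeq 1 - 4\, |U_{\mu 4}|^2 \left(1 - |U_{\mu 4}|^2\right) \sin^2\!\left(\frac{\Delta m_{41}^2 L}{4E}\right).$$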
The νμ disappearance data from MINOS and IceCube imply |Uμ4| ≲ 0.1, while |Ue4| ≲ 0.25 from solar neutrino observations. All in all, the disappearance results imply that the effective mixing angle sin^2(2θ) controlling the νμ→νs→νe oscillation must be much smaller than the 0.01 required to fit the MiniBooNE anomaly. The disagreement between the appearance and disappearance data had already existed before, but was actually made worse by the MiniBooNE update.
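Indeed, plugging the quoted limits into the 3+1 relation above gives

$$\sin^2 2\theta_{e\mu} = 4\, |U_{e4}|^2 |U_{\mu 4}|^2 \lesssim 4 \times (0.25)^2 \times (0.1)^2 = 2.5 \times 10^{-3},$$

a few times below the ~0.01 preferred by the appearance fit.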
So the hypothesis of a 4th sterile neutrino does not stand up to scrutiny as an explanation of the MiniBooNE anomaly. It does not mean that there is no other possible explanation (more sterile neutrinos? non-standard interactions? neutrino decays?). However, any realistic model will have to delve deep into the crazy side in order to satisfy the constraints from other neutrino experiments, flavor physics, and cosmology. Fortunately, the current confusing situation should not last forever. The MiniBooNE photon background from π0 decays may be clarified by the ongoing MicroBooNE experiment. On the timescale of a few years the controversy should be closed by the SBN program at Fermilab, which will add one near and one far detector to the MicroBooNE beamline. Until then... years of painful experience have taught us to assign a high prior to the Standard Model hypothesis. Currently, by far the most plausible explanation of the existing data is an experimental error on the part of the MiniBooNE collaboration.

Monday, 28 May 2018

WIMPs after XENON1T

After today's update from the XENON1T experiment, the situation on the front of direct detection of WIMP dark matter is as follows

A WIMP can be loosely defined as a dark matter particle with mass in the 1 GeV - 10 TeV range and significant interactions with ordinary matter. Historically, WIMP searches have stimulated enormous interest because this type of dark matter can be easily realized in models with low-scale supersymmetry. Now that we are older and wiser, many physicists would rather put their money on other realizations, such as axions, MeV dark matter, or primordial black holes. Nevertheless, WIMPs remain a viable possibility that should be further explored.
 
To detect WIMPs heavier than a few GeV, currently the most successful strategy is to use huge detectors filled with xenon atoms, hoping one of them is hit by a passing dark matter particle. XENON1T beats the competition from the LUX and PandaX experiments because it has a bigger tank. Technologically speaking, we have come a long way in the last 30 years. XENON1T is now sensitive to 40 GeV WIMPs interacting with nucleons with a cross section of 40 yoctobarn (1 yb = 10^-12 pb = 10^-48 cm^2). This is 6 orders of magnitude better than what the first direct detection experiment in the Homestake mine could achieve back in the 80s. Compared to last year, the limit is better by a factor of two at the most sensitive mass point. At high mass the improvement is somewhat smaller than expected, due to a small excess of events observed by XENON1T, which is probably just a 1 sigma upward fluctuation of the background.

What we are learning about WIMPs is how they can (or cannot) interact with us. Of course, at this point in the game we don't see qualitative progress, but rather incremental quantitative improvements. One possible scenario is that WIMPs experience one of the Standard Model forces, such as the weak or the Higgs force. The former option is strongly constrained by now. If WIMPs interacted in the same way as our neutrinos do, that is by exchanging a Z boson, they would have been found already in the Homestake experiment. XENON1T is probing models where the dark matter coupling to the Z boson is suppressed by a factor cχ ~ 10^-3 - 10^-4 compared to that of an active neutrino. On the other hand, dark matter could participate in weak interactions only by exchanging W bosons, which can happen for example when it is part of an SU(2) triplet. In the plot you can see that XENON1T is approaching, but not yet excluding, this interesting possibility. As for models using the Higgs force, XENON1T is probing the (subjectively) most natural parameter space where WIMPs couple with order one strength to the Higgs field.

And the arms race continues. The search in XENON1T will go on until the end of this year, although at this point a discovery is extremely unlikely. Further progress is expected on a timescale of a few years thanks to the next generation of xenon detectors, XENONnT and LUX-ZEPLIN, which should achieve yoctobarn sensitivity. DARWIN may be the ultimate experiment along these lines (there is, after all, no prefix smaller than yocto): it will reach the irreducible background from atmospheric neutrinos, after which new detection techniques will be needed. For dark matter masses closer to 1 GeV, several orders of magnitude of pristine parameter space will be covered by the SuperCDMS experiment. Until then we are kept in suspense. Is dark matter made of WIMPs? And if yes, does it stick out above the neutrino sea?

Wednesday, 16 May 2018

Proton's weak charge, and what's it for


In the particle world the LHC still attracts the most attention, but in parallel there is ongoing progress at the low-energy frontier. A new episode in that story is the Qweak experiment at Jefferson Lab in the US, which just published its final results. Qweak was shooting a beam of 1 GeV electrons at a hydrogen (so basically proton) target to determine how the scattering rate depends on the electron's polarization. Electrons and protons interact with each other via the electromagnetic and weak forces. The former is much stronger, but it is parity-invariant, i.e. it does not care about the direction of polarization. On the other hand, the weak force has been known to violate parity since the classic Wu experiment in 1956. Indeed, the Standard Model postulates that the Z boson, which mediates the weak force, couples with different strength to left- and right-handed particles. The resulting asymmetry between the low-energy electron-proton scattering cross sections of left- and right-handed polarized electrons is predicted to be at the 10^-7 level. That has been experimentally observed many times before, but Qweak was able to measure it with the best precision to date (4% relative), and at a lower momentum transfer than previous experiments.

What is the point of this exercise? Low-energy parity violation experiments are often sold as precision measurements of the so-called Weinberg angle, which is a function of the electroweak gauge couplings - the fundamental parameters of the Standard Model. I don't much like that perspective, because the electroweak couplings, and thus the Weinberg angle, can be more precisely determined from other observables, and Qweak is far from achieving a competing accuracy. The utility of Qweak is better visible in the effective theory picture. At low energies one can parameterize the relevant parity-violating interactions between protons and electrons by the contact term
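which in one common normalization (the overall numerical factor is a matter of convention) can be written as

$$\mathcal{L} \supset \frac{Q_W}{2 v^2}\, \big(\bar{e}\, \gamma_\mu \gamma_5\, e\big)\big(\bar{p}\, \gamma^\mu p\big),$$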
where v ≈ 246 GeV, and QW is the so-called weak charge of the proton. Such interactions arise in the Standard Model from the exchange of a Z boson between the electron and the quarks that make up the proton. At low energies, the exchange diagram is well approximated by the contact term above with QW = 0.0708 (somewhat smaller than the "natural" value QW ~ 1, due to numerical accidents making the Z boson effectively protophobic). The measured polarization asymmetry in electron-proton scattering can be re-interpreted as a determination of the proton weak charge: QW = 0.0719 ± 0.0045, in perfect agreement with the Standard Model prediction.

New physics may affect the magnitude of the proton weak charge in two distinct ways. One is by altering the strength with which the Z boson couples to matter. This happens for example when light quarks mix with their heavier exotic cousins with different quantum numbers, as is often the case in models from the Randall-Sundrum family. More generally, modified couplings to the Z boson could be a sign of quark compositeness. Another way is by generating new parity-violating contact interactions between electrons and quarks. This could be a result of yet unknown short-range forces which distinguish left- and right-handed electrons. Note that the hints of lepton flavor universality violation in B-meson decays can be interpreted as a sign of the existence of such forces (although for that purpose the new force carriers do not need to couple to 1st generation quarks). Qweak's measurement puts novel limits on such broad scenarios. Whatever the origin, simple dimensional analysis allows one to estimate the possible change of the proton weak charge as
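roughly, and up to a model-dependent O(1) prefactor,

$$\delta Q_W \sim \frac{g_*^2\, v^2}{M_*^2},$$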
   where M* is the mass scale of new particles beyond the Standard Model, and g* is their coupling strength to matter. Thus, Qweak can constrain new weakly coupled particles with masses up to a few TeV, or even 50 TeV particles if they are strongly coupled to matter (g*~4π).
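Plugging in the quoted experimental precision of ±0.0045 on QW, the reach is roughly

$$M_* \sim g_* \times \frac{v}{\sqrt{0.0045}} \approx g_* \times 3.7~\mathrm{TeV},$$

that is, a few TeV for g* ~ 1 and around 45 TeV for g* ~ 4π, in line with the numbers above.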

What is the place of Qweak in the larger landscape of precision experiments? One can illustrate it by considering a simple example where heavy new physics modifies only the vector couplings of the Z boson to up and down quarks. The best existing constraints on such a scenario are displayed in this plot:
From the size of the rotten-egg-colored region you see that the Z boson couplings to light quarks are currently known with per-mille accuracy. Somewhat surprisingly, the LEP collider, which back in the 1990s produced tens of millions of Z bosons in order to precisely study their couplings, is not at all the leader in this field. In fact, better constraints come from precision measurements at very low energies: pion, kaon, and neutron decays, parity-violating transitions in cesium atoms, and the latest Qweak results, which also make a difference. The importance of Qweak is even more pronounced in more complex scenarios where the parameter space is multi-dimensional.

Qweak is certainly not the last salvo on the low-energy frontier. Similar but more precise experiments are being prepared as we read (I wish the follow-up were called SuperQweak, or SQweak for short). Who knows, maybe quarks are made of more fundamental building blocks at the scale of ~100 TeV, and we'll first find out thanks to parity violation at very low energies.

Monday, 7 May 2018

Dark Matter goes sub-GeV

It must have been great to be a particle physicist in the 1990s. Everything was simple and clear then. They knew that, at the most fundamental level, nature was described by one of the five superstring theories which, at low energies, reduced to the Minimal Supersymmetric Standard Model. Dark matter also had a firm place in this narrative, being identified with the lightest neutralino of the MSSM. This simple-minded picture strongly influenced the experimental program of dark matter detection, which was almost entirely focused on the so-called WIMPs in the 1 GeV - 1 TeV mass range. Most of the detectors, including the current leaders XENON and LUX, are blind to sub-GeV dark matter, as slow and light incoming particles are unable to transfer a detectable amount of energy to the target nuclei.

Sometimes progress consists in realizing that you know nothing, Jon Snow. The lack of new physics at the LHC invalidates most of the historical motivations for WIMPs. Theoretically, the mass of the dark matter particle could be anywhere between 10^-30 GeV and 10^19 GeV. There are myriads of models positioned anywhere in that range, and it's hard to argue with a straight face that any particular one is favored. We now know that we don't know what dark matter is, and that we had better search in many places. If anything, the small-scale problems of the ΛCDM cosmological model can be interpreted as a hint against the boring WIMPs and in favor of light dark matter. For example, if it turns out that dark matter has significant (nuclear-size) self-interactions, that can only be realized with sub-GeV particles.
                       
It takes some time for experiment to catch up with theory, but the process is already well in motion. There is some fascinating progress on the front of ultra-light axion dark matter, which deserves a separate post. Here I want to highlight the ongoing  developments in direct detection of dark matter particles with masses between MeV and GeV. Until recently, the only available constraint in that regime was obtained by recasting data from the XENON10 experiment - the grandfather of the currently operating XENON1T.  In XENON detectors there are two ingredients of the signal generated when a target nucleus is struck:  ionization electrons and scintillation photons. WIMP searches require both to discriminate signal from background. But MeV dark matter interacting with electrons could eject electrons from xenon atoms without producing scintillation. In the standard analysis, such events would be discarded as background. However,  this paper showed that, recycling the available XENON10 data on ionization-only events, one can exclude dark matter in the 100 MeV ballpark with the cross section for scattering on electrons larger than ~0.01 picobarn (10^-38 cm^2). This already has non-trivial consequences for concrete models; for example, a part of the parameter space of milli-charged dark matter is currently best constrained by XENON10.   

It is remarkable that so much useful information can be extracted by basically misusing data collected for another purpose (earlier this year the DarkSide-50 collaboration recast its own data in the same manner, excluding another chunk of the parameter space). Nevertheless, dedicated experiments will soon be taking over. Recently, two collaborations published first results from their prototype detectors: one is SENSEI, which uses 0.1 gram of silicon CCDs, and the other is SuperCDMS, which uses 1 gram of silicon semiconductor. Both are sensitive to eV-scale energy depositions, thanks to which they can extend the search to lower dark matter masses, and set novel limits in the virgin territory between 0.5 and 5 MeV. A compilation of the existing direct detection limits is shown in the plot. As you can see, above 5 MeV the tiny prototypes cannot yet beat the XENON10 recast. But that will certainly change as soon as full-blown detectors are constructed, after which the XENON10 sensitivity should be improved by several orders of magnitude.
     
Should we be restless waiting for these results? Well, for any single experiment the chances of finding nothing are immensely larger than those of finding something. Nevertheless, the technical progress and the widening scope of searches offer some hope that the dark matter puzzle may be solved soon.