Wednesday, 20 June 2018

Both g-2 anomalies

Two months ago an experiment in Berkeley announced a new ultra-precise measurement of the fine structure constant α using interferometry techniques. This wasn't much noticed because the paper is not on arXiv, and moreover this kind of research is filed under metrology, which is easily confused with meteorology. So it's worth commenting on why precision measurements of α could be interesting for particle physics. What the Berkeley group really did was to measure the mass of the cesium-133 atom, achieving a relative accuracy of 4*10^-10, that is 0.4 parts per billion (ppb). With that result in hand, α can be determined after a cavalier rewriting of the high-school formula for the Rydberg constant, Ry = α^2 me c^2/2, which gives α^2 = (2Ry/c^2) × (mCs/me) × (1/mCs).
Everybody knows the first 3 digits of the Rydberg constant, Ry≈13.6 eV, but actually it is experimentally known with the fantastic accuracy of 0.006 ppb, and the electron-to-atom mass ratio has also been determined precisely. Thus the measurement of the cesium mass can be translated into a 0.2 ppb measurement of the fine structure constant: 1/α=137.035999046(27).
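
For the record, here is a minimal numerical sketch of that relation (not the actual metrology pipeline, which goes through h/mCs and measured mass ratios; the inputs below are rough CODATA-like values quoted purely for illustration):

```python
import math

# Rough inputs for illustration only; the real extraction uses h/m_Cs from the
# Berkeley recoil measurement together with precisely known mass ratios.
Ry_eV    = 13.605693     # Rydberg energy [eV]
me_c2_eV = 510998.95     # electron rest energy [eV], in practice m_Cs * (m_e/m_Cs)

alpha = math.sqrt(2 * Ry_eV / me_c2_eV)   # from Ry = alpha^2 * m_e c^2 / 2
print(f"1/alpha ≈ {1/alpha:.3f}")         # ≈ 137.036
```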

You may think that this kind of result could appeal only to a Pythonesque chartered accountant. But you would be wrong. First of all, the new result excludes α = 1/137 at 1 million sigma, dealing a mortal blow to the field of epistemological numerology. Perhaps more importantly, the result is relevant for testing the Standard Model. One place where precise knowledge of α is essential is in the calculation of the magnetic moment of the electron. Recall that the g-factor is defined as the proportionality constant between the magnetic moment and the angular momentum. For the electron we have μe = ge (e/2me) Se, with ae ≡ (ge−2)/2 denoting the anomalous part, and in the Standard Model ge = 2(1 + α/2π + ...).
Experimentally, ge is one of the most precisely determined quantities in physics, with the most recent measurement quoting ae = 0.00115965218073(28), that is 0.0001 ppb accuracy on ge, or 0.2 ppb accuracy on ae. In the Standard Model, ge is calculable as a function of α and other parameters. In the classical approximation ge=2, while the one-loop correction proportional to the first power of α was already known in prehistoric times thanks to Schwinger. The dots above summarize decades of subsequent calculations, which now include O(α^5) terms, that is 5-loop QED contributions! Thanks to these heroic efforts (depicted in the film For a Few Diagrams More - a sequel to Kurosawa's Seven Samurai), the main theoretical uncertainty in the Standard Model prediction of ge is due to the experimental error on the value of α. The Berkeley measurement allows one to reduce the relative theoretical error on ae down to 0.2 ppb: ae = 0.00115965218161(23), which matches the experimental error in magnitude and improves on the previous prediction, based on the α measurement with rubidium atoms, by a factor of 3.
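
To get a feel for the numbers, a two-line sketch (the α value is the Berkeley one quoted above; of course the real prediction includes the 2- to 5-loop terms hidden in the dots):

```python
import math

alpha = 1 / 137.035999046               # Berkeley value quoted above
a_schwinger = alpha / (2 * math.pi)     # leading (one-loop) Schwinger term
a_measured  = 0.00115965218073          # measured value quoted above

print(f"alpha/2pi    = {a_schwinger:.12f}")   # ≈ 0.00116141
print(f"a_e measured = {a_measured:.12f}")
print(f"relative gap = {a_schwinger/a_measured - 1:.2%}")  # ~0.15%, accounted for by higher loops
```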

At the spiritual level, the comparison between theory and experiment provides an impressive validation of quantum field theory techniques up to the 13th significant digit - a theoretical accuracy unimaginable in other branches of science. More practically, it also provides a powerful test of the Standard Model. New particles coupled to the electron may contribute to the same loop diagrams from which ge is calculated, and could shift the observed value of ae away from the Standard Model prediction. In many models, corrections to the electron and muon magnetic moments are correlated. The latter famously deviates from the Standard Model prediction by 3.5 to 4 sigma, depending on who counts the uncertainties. Actually, if you bother to look carefully at the experimental and theoretical values of ae beyond the 10th significant digit you can see that they are also discrepant, this time at the 2.5 sigma level. So now we have two g-2 anomalies! In a picture, the situation can be summarized as follows:
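
If you want to see where the 2.5 sigma comes from, it takes a few lines using only the two numbers quoted above (a naive quadrature of the errors, ignoring correlations):

```python
# Naive pull between the measured and predicted a_e, using only the numbers quoted above.
a_exp, err_exp = 0.00115965218073, 28e-14
a_th,  err_th  = 0.00115965218161, 23e-14

pull = (a_exp - a_th) / (err_exp**2 + err_th**2) ** 0.5
print(f"a_exp - a_th = {a_exp - a_th:.1e}")      # ≈ -8.8e-13
print(f"tension      = {abs(pull):.1f} sigma")   # ≈ 2.4 sigma
```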

If you're a member of the Holy Church of Five Sigma you can almost preach an unambiguous discovery of physics beyond the Standard Model. However, for most of us this is not the case yet. First, there is still some debate about the theoretical uncertainties entering the muon g-2 prediction. Second, while it is quite easy to fit each of the two anomalies separately, there seems to be no appealing model that fits both of them at the same time. Take for example the very popular toy model with a new massive spin-1 Z' boson (aka the dark photon) kinetically mixed with the ordinary photon. In this case Z' has, much like the ordinary photon, vector-like and universal couplings to electrons and muons. But this leads to a positive contribution to g-2, and it does not fit the ae measurement well, which favors a new negative contribution. In fact, the ae measurement provides the most stringent constraint in part of the parameter space of the dark photon model. Conversely, a Z' boson with purely axial couplings to matter does not fit the data either, as it gives a negative contribution to g-2, thus making the muon g-2 anomaly worse. What might work is a hybrid model with a light Z' boson having flavor non-universal interactions: a vector coupling to muons and a somewhat smaller axial coupling to electrons. But constructing a consistent and realistic model along these lines is a challenge because of other experimental constraints (e.g. from the lack of observation of μ→eγ decays). Some food for thought can be found in this paper, but I'm not sure if a sensible model exists at the moment. If you know one you are welcome to drop a comment here or a paper on arXiv.

More excitement on this front is in store. The muon g-2 experiment at Fermilab should soon deliver its first results, which may confirm or disprove the muon anomaly. Further progress with the electron g-2 and fine-structure constant measurements is also expected in the near future. The biggest worry is that, if the accuracy improves by another two orders of magnitude, we will need to calculate six-loop QED corrections...

Tuesday, 5 June 2018

Can MiniBooNE be right?

The experimental situation in neutrino physics is confusing. On one hand, a host of neutrino experiments has established a consistent picture where the neutrino mass eigenstates are mixtures of the 3 Standard Model neutrino flavors νe, νμ, ντ. The measured mass differences between the eigenstates are Δm12^2 ≈ 7.5*10^-5 eV^2 and Δm13^2 ≈ 2.5*10^-3 eV^2, suggesting that all Standard Model neutrinos have masses below 0.1 eV. That is well in line with cosmological observations, which find that the radiation budget of the early universe is consistent with the existence of exactly 3 neutrinos with the sum of the masses less than 0.2 eV. On the other hand, several rogue experiments refuse to conform to the standard 3-flavor picture. The most severe anomaly is the appearance of electron neutrinos in a muon neutrino beam observed by the LSND and MiniBooNE experiments.


This story begins in the previous century with the LSND experiment in Los Alamos, which claimed to observe νμ→νe antineutrino oscillations with 3.8σ significance. This result was considered controversial from the very beginning due to limitations of the experimental set-up. Moreover, it was inconsistent with the standard 3-flavor picture which, given the masses and mixing angles measured by other experiments, predicted that νμ→νe oscillations should be unobservable in short-baseline (L ≲ km) experiments. The MiniBooNE experiment in Fermilab was conceived to conclusively prove or disprove the LSND anomaly. To this end, a beam of mostly muon neutrinos or antineutrinos with energies E~1 GeV is sent to a detector at a distance L~500 meters away. In general, neutrinos can change their flavor with a probability oscillating as P ~ sin^2(Δm^2 L/4E). If the LSND excess is really due to neutrino oscillations, one expects to observe electron neutrino appearance in the MiniBooNE detector, given that L/E is similar in the two experiments. Originally, MiniBooNE was hoping to see a smoking gun in the form of an electron neutrino excess oscillating as a function of L/E, that is peaking at intermediate energies and then decreasing towards lower energies (possibly with several wiggles). That didn't happen. Instead, MiniBooNE finds an excess increasing towards low energies, with a shape similar to that of the backgrounds. Thus the confusion lingers on: the LSND anomaly has neither been killed nor robustly confirmed.
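
For orientation, here is the two-flavor oscillation formula evaluated with MiniBooNE-like numbers (the Δm² and mixing angle are the eV²-scale fit values mentioned further down, used here purely for illustration):

```python
import math

def p_appear(sin2_2theta, dm2_eV2, L_km, E_GeV):
    """Two-flavor appearance probability: P = sin^2(2θ) sin^2(1.27 Δm² L/E)."""
    return sin2_2theta * math.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

# MiniBooNE-like configuration: L ~ 0.5 km, E ~ 1 GeV, with Δm² ~ 0.5 eV²
# and sin²2θ ~ 0.01 (the ballpark preferred by the appearance fits quoted below)
print(f"P(nu_mu -> nu_e) ≈ {p_appear(0.01, 0.5, 0.5, 1.0):.1e}")   # ~1e-3
```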

In spite of these doubts, the LSND and MiniBooNE anomalies continue to arouse interest. This is understandable: as the results do not fit the 3-flavor framework, if confirmed they would prove the existence of new physics beyond the Standard Model. The simplest fix would be to introduce a sterile neutrino νs with a mass in the eV ballpark, in which case MiniBooNE would be observing the νμ→νs→νe oscillation chain. With the recent MiniBooNE update the evidence for electron neutrino appearance increased to 4.8σ, which has stirred some commotion on Twitter and in the blogosphere. However, I find the excitement a bit misplaced. The anomaly is not really new: similar results showing a 3.8σ excess of νe-like events were already published in 2012. The increase of the significance is hardly relevant: at this point we know anyway that the excess is not a statistical fluke, while a systematic effect due to underestimated backgrounds would also lead to a growing anomaly. If anything, there are now fewer reasons than in 2012 to believe in the sterile neutrino origin of the MiniBooNE anomaly, as I will argue in the following.

What has changed since 2012? First, there are new constraints on νe appearance from the OPERA experiment (yes, this OPERA), which did not see any excess νe in the CERN-to-Gran-Sasso νμ beam. This excludes a large chunk of the relevant parameter space corresponding to large mixing angles between the active and sterile neutrinos. From this point of view, the MiniBooNE update actually adds more stress on the sterile neutrino interpretation by slightly shifting the preferred region towards larger mixing angles... Nevertheless, a not-too-horrible fit to all appearance experiments can still be achieved in the region with Δm^2~0.5 eV^2 and the mixing angle sin^2(2θ) of order 0.01.

Next, the cosmological constraints have become more stringent. The CMB observations by the Planck satellite do not leave room for an additional neutrino species in the early universe. But for the parameters preferred by LSND and MiniBooNE, the sterile neutrino would be abundantly produced in the hot primordial plasma, thus violating the Planck constraints. To avoid it, theorists need to deploy a battery of  tricks (for example, large sterile-neutrino self-interactions), which makes realistic models rather baroque.

But the killer punch is delivered by disappearance analyses. Benjamin Franklin famously said that only two things in this world were certain: death and probability conservation. Thus whenever an electron neutrino appears in a νμ beam, a muon neutrino must disappear. However, the latter process is severely constrained by long-baseline neutrino experiments, and recently the limits have been further strengthened thanks to the MINOS and IceCube collaborations. A recent combination of the existing disappearance results is available in this paper. In the 3+1 flavor scheme, the probability of a muon neutrino transforming into an electron one in a short-baseline experiment is P(νμ→νe) ≈ 4|Ue4|^2 |Uμ4|^2 sin^2(Δm^2 L/4E) ≡ sin^2(2θμe) sin^2(Δm^2 L/4E),
where U is the 4x4 neutrino mixing matrix. The Uμ4 matrix element also controls the νμ survival probability, P(νμ→νμ) ≈ 1 − 4|Uμ4|^2 (1−|Uμ4|^2) sin^2(Δm^2 L/4E).
The νμ disappearance data from MINOS and IceCube imply |Uμ4| ≲ 0.1, while |Ue4| ≲ 0.25 from solar neutrino observations. All in all, the disappearance results imply that the effective mixing angle sin^2(2θ) controlling the νμ→νs→νe oscillation must be much smaller than the 0.01 required to fit the MiniBooNE anomaly. The disagreement between the appearance and disappearance data had existed before, but it has actually been made worse by the MiniBooNE update.
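
The arithmetic behind that statement fits in a few lines, using just the bounds quoted above:

```python
# In the 3+1 scheme the appearance amplitude is sin^2(2θ_μe) = 4 |U_e4|^2 |U_μ4|^2.
U_mu4_max, U_e4_max = 0.1, 0.25      # disappearance bounds quoted above
sin2_2theta_max = 4 * U_e4_max**2 * U_mu4_max**2

print(f"max sin^2(2θ_μe) allowed by disappearance ≈ {sin2_2theta_max:.1e}")  # 2.5e-3
print("needed to fit MiniBooNE                    ≈ 1e-2")                   # factor ~4 too small
```
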
So the hypothesis of a 4th sterile neutrino does not stand scrutiny as an explanation of the MiniBooNE anomaly. It does not mean that there is no other possible explanation (more sterile neutrinos? non-standard interactions? neutrino decays?). However, any realistic model will have to delve deep into the crazy side in order to satisfy the constraints from other neutrino experiments, flavor physics, and cosmology. Fortunately, the current confusing situation should not last forever. The MiniBooNE photon background from π0 decays may be clarified by the ongoing MicroBooNE experiment. On the timescale of a few years the controversy should be closed by the SBN program in Fermilab, which will add one near and one far detector to the MicroBooNE beamline. Until then... years of painful experience have taught us to assign a high prior to the Standard Model hypothesis. Currently, by far the most plausible explanation of the existing data is an experimental error on the part of the MiniBooNE collaboration.

Monday, 28 May 2018

WIMPs after XENON1T

After today's update from the XENON1T experiment, the situation on the front of direct detection of WIMP dark matter is as follows

A WIMP can be loosely defined as a dark matter particle with mass in the 1 GeV - 10 TeV range and significant interactions with ordinary matter. Historically, WIMP searches have stimulated enormous interest because this type of dark matter can be easily realized in models with low-scale supersymmetry. Now that we are older and wiser, many physicists would rather put their money on other realizations, such as axions, MeV dark matter, or primordial black holes. Nevertheless, WIMPs remain a viable possibility that should be further explored.
 
To detect WIMPs heavier than a few GeV, currently the most successful strategy is to use huge detectors filled with xenon atoms, hoping one of them is hit by a passing dark matter particle. XENON1T beats the competition from the LUX and PandaX experiments because it has a bigger tank. Technologically speaking, we have come a long way in the last 30 years. XENON1T is now sensitive to 40 GeV WIMPs interacting with nucleons with a cross section of 40 yoctobarn (1 yb = 10^-12 pb = 10^-48 cm^2). This is 6 orders of magnitude better than what the first direct detection experiment in the Homestake mine could achieve back in the 80s. Compared to last year, the limit is better by a factor of two at the most sensitive mass point. At high mass the improvement is somewhat smaller than expected due to a small excess of events observed by XENON1T, which is probably just a 1 sigma upward fluctuation of the background.

What we are learning about WIMPs is how they can (or cannot) interact with us. Of course, at this point in the game we don't see qualitative progress, but rather incremental quantitative improvements. One possible scenario is that WIMPs experience one of the Standard Model forces, such as the weak or the Higgs force. The former option is strongly constrained by now. If WIMPs interacted in the same way as neutrinos do, that is by exchanging a Z boson, they would have been found already in the Homestake experiment. XENON1T is probing models where the dark matter coupling to the Z boson is suppressed by a factor cχ ~ 10^-3 - 10^-4 compared to that of an active neutrino. On the other hand, dark matter could be participating in weak interactions only by exchanging W bosons, which can happen for example when it is part of an SU(2) triplet. In the plot you can see that XENON1T is approaching but not yet excluding this interesting possibility. As for models using the Higgs force, XENON1T is probing the (subjectively) most natural parameter space, where WIMPs couple with order one strength to the Higgs field.

And the arms race continues. The search in XENON1T will go on until the end of this year, although at this point a discovery is extremely unlikely. Further progress is expected on a timescale of a few years thanks to the next generation of xenon detectors, XENONnT and LUX-ZEPLIN, which should achieve yoctobarn sensitivity. DARWIN may be the ultimate experiment along these lines, in the sense that there is no prefix smaller than yocto: it will reach the irreducible background from atmospheric neutrinos, after which new detection techniques will be needed. For dark matter mass closer to 1 GeV, several orders of magnitude of pristine parameter space will be covered by the SuperCDMS experiment. Until then we are kept in suspense. Is dark matter made of WIMPs? And if yes, does it stick out above the neutrino sea?

Wednesday, 16 May 2018

Proton's weak charge, and what's it for


In the particle world the LHC still attracts the most attention, but in parallel there is ongoing progress at the low-energy frontier. A new episode in that story is the Qweak experiment at Jefferson Lab in the US, which just published its final results. Qweak was shooting a beam of 1 GeV electrons on a hydrogen (so basically proton) target to determine how the scattering rate depends on the electron's polarization. Electrons and protons interact with each other via the electromagnetic and weak forces. The former is much stronger, but it is parity-invariant, i.e. it does not care about the direction of polarization. On the other hand, since the classic Wu experiment in 1956, the weak force is known to violate parity. Indeed, the Standard Model postulates that the Z boson, which mediates the weak force, couples with different strength to left- and right-handed particles. The resulting asymmetry between the low-energy electron-proton scattering cross sections of left- and right-handed polarized electrons is predicted to be at the 10^-7 level. That has been experimentally observed many times before, but Qweak was able to measure it with the best precision to date (4% relative), and at a lower momentum transfer than the previous experiments.

What is the point of this exercise? Low-energy parity violation experiments are often sold as precision measurements of the so-called Weinberg angle, which is a function of the electroweak gauge couplings - the fundamental parameters of the Standard Model. I don't much like that perspective because the electroweak couplings, and thus the Weinberg angle, can be more precisely determined from other observables, and Qweak is far from achieving a competing accuracy. The utility of Qweak is better visible in the effective theory picture. At low energies one can parameterize the relevant parity-violating interactions between protons and electrons by the contact term (QW/4v^2) × (ebar γ^μ γ5 e)(pbar γ_μ p),
where v ≈ 246 GeV, and QW is the so-called weak charge of the proton. Such interactions arise thanks to the Z boson in the Standard Model being exchanged between electrons and quarks that make up the proton. At low energies, the exchange diagram is well approximated by the contact term above with QW = 0.0708  (somewhat smaller than the "natural" value QW ~ 1  due to numerical accidents making the Z boson effectively protophobic). The measured polarization asymmetry in electron-proton scattering can be re-interpreted as a determination of the proton weak charge: QW = 0.0719 ± 0.0045, in perfect agreement with the Standard Model prediction.
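
As a sanity check, the tree-level value and the pull can be reproduced in a few lines (sin²θW ≈ 0.231 is an approximate low-energy input I'm assuming here; the 0.0708 prediction quoted above already includes radiative corrections):

```python
# Tree-level weak charge of the proton vs the numbers quoted above.
sin2_thetaW = 0.231                # approximate low-energy value (assumed input)
QW_tree     = 1 - 4 * sin2_thetaW  # ≈ 0.076; radiative corrections bring it down to 0.0708

QW_SM, QW_exp, QW_err = 0.0708, 0.0719, 0.0045
print(f"tree-level QW       ≈ {QW_tree:.3f}")
print(f"pull (exp - SM)/err ≈ {(QW_exp - QW_SM)/QW_err:.1f} sigma")   # ≈ 0.2 sigma
```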

New physics may affect the magnitude of the proton weak charge in two distinct ways. One is by altering the strength with which the Z boson couples to matter. This happens for example when light quarks mix with their heavier exotic cousins with different quantum numbers, as is often the case in models from the Randall-Sundrum family. More generally, modified couplings to the Z boson could be a sign of quark compositeness. Another way is by generating new parity-violating contact interactions between electrons and quarks. This can be a result of yet unknown short-range forces which distinguish left- and right-handed electrons. Note that the apparent violation of lepton flavor universality in B-meson decays can be interpreted as a hint for the existence of such forces (although for that purpose the new force carriers do not need to couple to 1st generation quarks). Qweak's measurement puts novel limits on such broad scenarios. Whatever the origin, simple dimensional analysis allows one to estimate the possible change of the proton weak charge as δQW ~ g*^2 v^2/M*^2,
   where M* is the mass scale of new particles beyond the Standard Model, and g* is their coupling strength to matter. Thus, Qweak can constrain new weakly coupled particles with masses up to a few TeV, or even 50 TeV particles if they are strongly coupled to matter (g*~4π).
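
Here is that dimensional estimate turned into numbers, with the quoted ±0.0045 error as the sensitivity (a pure order-of-magnitude sketch, nothing more):

```python
import math

v, dQW = 246.0, 0.0045              # GeV; experimental error on QW quoted above
for g_star in (1.0, 4 * math.pi):
    M_star = g_star * v / math.sqrt(dQW)    # from dQW ~ g*^2 v^2 / M*^2
    print(f"g* = {g_star:4.1f}  ->  M* reach ~ {M_star/1e3:.0f} TeV")
# ~4 TeV for g* ~ 1 and ~46 TeV for g* ~ 4π, matching the numbers in the text
```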

What is the place of Qweak in the larger landscape of precision experiments? One can illustrate it by considering a simple example where heavy new physics modifies only the vector couplings of the Z boson to up and down quarks. The best existing constraints on such a scenario are displayed in this plot:
From the size of the rotten egg region you see that the Z boson couplings to light quarks are currently known with per-mille accuracy. Somewhat surprisingly, the LEP collider, which back in the 1990s produced tens of millions of Z bosons to precisely study their couplings, is not at all the leader in this field. In fact, better constraints come from precision measurements at very low energies: pion, kaon, and neutron decays, parity-violating transitions in cesium atoms, and the latest Qweak result, which makes a difference too. The importance of Qweak is even more pronounced in more complex scenarios where the parameter space is multi-dimensional.

Qweak is certainly not the last salvo on the low-energy frontier. Similar but more precise experiments are being prepared as we read (I wish the follow-up were called SuperQweak, or SQweak for short). Who knows, maybe quarks are made of more fundamental building blocks at the scale of ~100 TeV, and we'll first find out thanks to parity violation at very low energies.

Monday, 7 May 2018

Dark Matter goes sub-GeV

It must have been great to be a particle physicist in the 1990s. Everything was simple and clear then. They knew that, at the most fundamental level, nature was described by one of the five superstring theories which, at low energies, reduced to the Minimal Supersymmetric Standard Model. Dark matter also had a firm place in this narrative, being identified with the lightest neutralino of the MSSM. This simple-minded picture strongly influenced the experimental program of dark matter detection, which was almost entirely focused on the so-called WIMPs in the 1 GeV - 1 TeV mass range. Most of the detectors, including the current leaders XENON and LUX, are blind to sub-GeV dark matter, as slow and light incoming particles are unable to transfer a detectable amount of energy to the target nuclei.
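
The kinematic statement in the last sentence is easy to quantify with standard elastic-scattering formulas (the halo velocity and the xenon numbers below are illustrative assumptions):

```python
# Max nuclear recoil energy in elastic DM-nucleus scattering: E_R = 2 mu^2 v^2 / m_N.
m_dm, m_Xe = 1.0, 122.0      # GeV: a 1 GeV dark matter particle and a xenon nucleus
v = 2e-3                     # velocity in units of c (roughly the largest halo velocities)

mu  = m_dm * m_Xe / (m_dm + m_Xe)        # reduced mass [GeV]
E_R = 2 * mu**2 * v**2 / m_Xe * 1e9      # max recoil energy in eV
print(f"max recoil energy ≈ {E_R:.0f} eV")   # ~65 eV, far below keV-scale xenon thresholds
```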

Sometimes progress consists in realizing that you know nothing, Jon Snow. The lack of new physics at the LHC invalidates most of the historical motivations for WIMPs. Theoretically, the mass of the dark matter particle could be anywhere between 10^-30 GeV and 10^19 GeV. There are myriads of models positioned anywhere in that range, and it's hard to argue with a straight face that any particular one is favored. We now know that we don't know what dark matter is, and that we had better search in many places. If anything, the small-scale problem of the 𝞚CDM cosmological model can be interpreted as a hint against the boring WIMPs and in favor of light dark matter. For example, if it turns out that dark matter has significant (nuclear-size) self-interactions, that can only be realized with sub-GeV particles.
                       
It takes some time for experiment to catch up with theory, but the process is already well in motion. There is some fascinating progress on the front of ultra-light axion dark matter, which deserves a separate post. Here I want to highlight the ongoing  developments in direct detection of dark matter particles with masses between MeV and GeV. Until recently, the only available constraint in that regime was obtained by recasting data from the XENON10 experiment - the grandfather of the currently operating XENON1T.  In XENON detectors there are two ingredients of the signal generated when a target nucleus is struck:  ionization electrons and scintillation photons. WIMP searches require both to discriminate signal from background. But MeV dark matter interacting with electrons could eject electrons from xenon atoms without producing scintillation. In the standard analysis, such events would be discarded as background. However,  this paper showed that, recycling the available XENON10 data on ionization-only events, one can exclude dark matter in the 100 MeV ballpark with the cross section for scattering on electrons larger than ~0.01 picobarn (10^-38 cm^2). This already has non-trivial consequences for concrete models; for example, a part of the parameter space of milli-charged dark matter is currently best constrained by XENON10.   

It is remarkable that so much useful information can be extracted by basically misusing data collected for another purpose (earlier this year the DarkSide-50 collaboration recast their own data in the same manner, excluding another chunk of the parameter space). Nevertheless, dedicated experiments will soon be taking over. Recently, two collaborations published first results from their prototype detectors: one is SENSEI, which uses 0.1 gram of silicon CCDs, and the other is SuperCDMS, which uses 1 gram of silicon semiconductor. Both are sensitive to eV-scale energy depositions, thanks to which they can extend the search to lower dark matter masses and set novel limits in the virgin territory between 0.5 and 5 MeV. A compilation of the existing direct detection limits is shown in the plot. As you can see, above 5 MeV the tiny prototypes cannot yet beat the XENON10 recast. But that will certainly change as soon as full-blown detectors are constructed, after which the XENON10 sensitivity should be improved by several orders of magnitude.
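
To see why eV thresholds are the key that unlocks this mass range, note that the total kinetic energy of a halo particle is all it can possibly deposit. A back-of-the-envelope sketch (halo velocity assumed):

```python
# Kinetic energy of a halo dark matter particle, E_kin = 1/2 m v^2, which bounds
# the energy it can dump into an electron.
v = 2.5e-3   # velocity in units of c (roughly the fastest particles in the halo; assumed)
for m_MeV in (0.5, 1, 5, 100):
    E_kin_eV = 0.5 * m_MeV * 1e6 * v**2
    print(f"m = {m_MeV:5.1f} MeV  ->  E_kin ≈ {E_kin_eV:6.1f} eV")
# ~1.6 eV at 0.5 MeV and ~300 eV at 100 MeV: hence eV-scale thresholds open the 0.5-5 MeV window
```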
     
Should we be restless waiting for these results? Well, for any single experiment the chances of finding nothing are immensely larger than those of finding something. Nevertheless, the technical progress and the widening scope of searches offer some hope that the dark matter puzzle may be solved soon.

Thursday, 19 April 2018

Massive Gravity, or You Only Live Twice

Proving Einstein wrong is the ultimate ambition of every crackpot and physicist alike. In particular, Einstein's theory of gravitation - general relativity - has been a victim of constant harassment. That is to say, it is trivial to modify gravity at large energies (short distances), for example by embedding it in string theory, but it is notoriously difficult to change its long-distance behavior. At the same time, motivations to keep trying go beyond intellectual gymnastics. For example, the accelerated expansion of the universe may be a manifestation of modified gravity (rather than of a small cosmological constant).

In Einstein's general relativity, gravitational interactions are mediated by a massless spin-2 particle - the so-called graviton. This is what gives it its hallmark properties: the long range and the universality. One obvious way to screw with Einstein is to add mass to the graviton, as entertained already in 1939 by Fierz and Pauli. The Particle Data Group quotes the constraint m ≤ 6*10^−32 eV, so we are talking about a de Broglie wavelength comparable to the size of the observable universe. Yet even that teeny mass may cause massive troubles. In 1970 the Fierz-Pauli theory was killed by the van Dam-Veltman-Zakharov (vDVZ) discontinuity. The problem stems from the fact that a massive spin-2 particle has 5 polarization states (0,±1,±2), unlike a massless one which has only two (±2). It turns out that the polarization-0 state couples to matter with a similar strength as the usual polarization-±2 modes, even in the limit where the mass goes to zero, and thus mediates an additional force which differs from the usual gravity. One finds that, in massive gravity, light bending would be 25% smaller, in conflict with the very precise observations of the deflection of starlight by the Sun. vDV concluded that "the graviton has rigorously zero mass". Dead for the first time...

The second coming was heralded soon after by Vainshtein, who noticed that the troublesome polarization-0 mode can be shut off in the proximity of stars and planets. This can happen in the presence of graviton self-interactions of a certain type. Technically, what happens is that the polarization-0 mode develops a background value around massive sources which, through the derivative self-interactions, renormalizes its kinetic term and effectively diminishes its interaction strength with matter. See here for a nice review and more technical details. Thanks to the Vainshtein mechanism, the usual predictions of general relativity are recovered around large massive sources, which is exactly where we can best measure gravitational effects. The possible self-interactions leading to a healthy theory without ghosts have been classified, and go under the name of dRGT massive gravity.

There is, however, one inevitable consequence of the Vainshtein mechanism. The graviton self-interaction strength grows with energy, and at some point becomes inconsistent with the unitarity limits that every quantum theory should obey. This means that massive gravity is necessarily an effective theory with a limited validity range and has to be replaced by a more fundamental theory at some cutoff scale 𝞚. This is of course nothing new for gravity: the usual Einstein gravity is also an effective theory valid at most up to the Planck scale MPl~10^19 GeV. But for massive gravity the cutoff depends on the graviton mass and is much smaller for realistic theories. At best, 𝞚max = (m^2 MPl)^1/3, which for m~10^-32 eV comes out around 10^-13 eV.
So the massive gravity theory in its usual form cannot be used at distance scales shorter than ~300 km. For particle physicists that would be a disaster, but for cosmologists this is fine, as one can still predict the behavior of galaxies, stars, and planets. While the theory certainly cannot be used to describe the results of tabletop experiments, it is relevant for the motion of celestial bodies in the Solar System. Indeed, lunar laser ranging experiments or precision studies of Jupiter's orbit are interesting probes of the graviton mass.
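
For the record, the 300 km figure follows from the cutoff formula above in a few lines (the reduced Planck mass and m ~ 10^-32 eV are taken as rough inputs):

```python
# Check that Lambda_max = (m^2 M_Pl)^(1/3) corresponds to ~300 km for m ~ 1e-32 eV.
hbar_c   = 197.3e-9          # eV * m
m_grav   = 1e-32             # graviton mass [eV], of order the experimental limit
M_planck = 2.4e18 * 1e9      # reduced Planck mass [eV]

Lambda_max = (m_grav**2 * M_planck) ** (1/3)    # [eV]
print(f"Lambda_max ≈ {Lambda_max:.1e} eV")                 # ~6e-13 eV
print(f"distance   ≈ {hbar_c / Lambda_max / 1e3:.0f} km")  # ~300 km
```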

Now comes the latest twist in the story. Some time ago this paper showed that not everything is allowed in effective theories. Assuming the full theory is unitary, causal and local implies non-trivial constraints on the possible interactions in the low-energy effective theory. These techniques are suitable to constrain, via dispersion relations, derivative interactions of the kind required by the Vainshtein mechanism. Applying them to dRGT gravity one finds that it is inconsistent to assume the theory is valid all the way up to 𝞚max. Instead, it must be replaced by a more fundamental theory already at a much lower cutoff scale, parameterized as 𝞚 = g*^1/3 𝞚max (the parameter g* is interpreted as the coupling strength of the more fundamental theory). The allowed parameter space in the g*-m plane is shown in this plot:

Massive gravity must live in the lower left corner, outside the gray area excluded theoretically and where the graviton mass satisfies the experimental upper limit m~10^−32 eV. This implies g* ≲ 10^-10, and thus the validity range of the theory is some 3 orders of magnitude lower than 𝞚max. In other words, massive gravity is not a consistent effective theory at distance scales below ~1 million km, and thus cannot be used to describe the motion of falling apples, GPS satellites or even the Moon. In this sense, it's not much of a competitor to, say, Newton. Dead for the second time.
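
The translation from g* ≲ 10^-10 to the million-km statement is again trivial arithmetic:

```python
# Cutoff after the dispersion-relation bound: Lambda = g*^(1/3) * Lambda_max.
g_star = 1e-10                   # upper limit quoted above
suppression = g_star ** (1/3)    # ≈ 5e-4, i.e. roughly 3 orders of magnitude
print(f"suppression factor ≈ {suppression:.1e}")
print(f"validity range     ≈ {300 / suppression:.1e} km")   # ~6e5 km, i.e. of order a million km
```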

Is this the end of the story? For the third coming we would need a more general theory with additional light particles beyond the massive graviton, which is consistent theoretically in a larger energy range, realizes the Vainshtein mechanism, and is in agreement with the current experimental observations. This is hard but not impossible to imagine. Whatever the outcome, what I like in this story is the role of theory in driving the progress, which is rarely seen these days. In the process, we have understood a lot of interesting physics whose relevance goes well beyond one specific theory. So the trip was certainly worth it, even if we find ourselves back at the departure point.

Monday, 9 April 2018

Per kaons ad astra

NA62 is a precision experiment at CERN. From their name you wouldn't suspect that they're doing anything noteworthy: the collaboration was running in the contest for the most unimaginative name, only narrowly losing to CMS... NA62 employs an intense beam of charged kaons to search for the very rare decay K+ → 𝝿+ 𝜈 𝜈. The Standard Model predicts the branching fraction BR(K+ → 𝝿+ 𝜈 𝜈) = 8.4x10^-11 with a small, 10% theoretical uncertainty (precious stuff in the flavor business). The previous measurement by the BNL-E949 experiment reported BR(K+ → 𝝿+ 𝜈 𝜈) = (1.7 ± 1.1)x10^-10, consistent with the Standard Model, but still leaving room for large deviations. NA62 is expected to pinpoint the decay and measure the branching fraction with a 10% accuracy, thus severely constraining new physics contributions. The wires, pipes, and gory details of the analysis were nicely summarized by Tommaso. Let me jump directly to explaining what it is good for from the theory point of view.

To this end it is useful to adopt the effective theory perspective. At a more fundamental level, the decay occurs due to the strange quark inside the kaon undergoing the transformation sbar → dbar 𝜈 𝜈bar. In the Standard Model, the amplitude for that process is dominated by one-loop diagrams with W/Z bosons and heavy quarks. But kaons live at low energies and do not really see the fine details of the loop amplitude. Instead, they effectively see the 4-fermion contact interaction (sbar_L γ^μ d_L)(𝜈bar_L γ_μ 𝜈_L), suppressed by two powers of a large mass scale.
The mass scale suppressing this interaction is quite large, more than 1000 times larger than the W boson mass, which is due to the loop factor and small CKM matrix elements entering the amplitude. The strong suppression is the reason why the K+ → 𝝿+ 𝜈 𝜈  decay is so rare in the first place. The corollary is that even a small new physics effect inducing that effective interaction may dramatically change the branching fraction. Even a particle with a mass as large as 1 PeV coupled to the quarks and leptons with order one strength could produce an observable shift of the decay rate.  In this sense, NA62 is a microscope probing physics down to 10^-20 cm  distances, or up to PeV energies, well beyond the reach of the LHC or other colliders in this century. If the new particle is lighter, say order TeV mass, NA62 can be sensitive to a tiny milli-coupling of that particle to quarks and leptons.
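
Two quick order-of-magnitude checks behind the "microscope" statement (the ~100 TeV effective SM scale below is my own rough stand-in for the "more than 1000 times the W mass" statement above):

```python
# (1) distance scale probed by a PeV-mass mediator
hbar_c_cm = 1.973e-14                    # GeV * cm
M_PeV     = 1e6                          # GeV
print(f"1 PeV  <->  {hbar_c_cm / M_PeV:.1e} cm")   # ~2e-20 cm

# (2) coupling of a ~TeV mediator whose tree-level exchange would rival the SM amplitude,
#     assuming an effective SM scale of ~100 TeV for the operator above
Lambda_SM, M_NP = 1e5, 1e3               # GeV
g_NP = M_NP / Lambda_SM                  # from g^2 / M_NP^2 ~ 1 / Lambda_SM^2
print(f"coupling needed ~ {g_NP:.0e}")   # ~1e-2: per-mille-to-percent couplings leave a visible imprint
```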

So, from a model-independent perspective, the advantages of studying the K+ → 𝝿+ 𝜈 𝜈 decay are quite clear. A less trivial question is what the future NA62 measurements can teach us about our cherished models of new physics. One interesting application is in the industry of explaining the apparent violation of lepton flavor universality in B→K l+ l- and B→D l 𝜈 decays. Those anomalies involve the 3rd generation bottom quark, thus a priori they do not need to have anything to do with kaon decays. However, many of the existing models introduce flavor symmetries controlling the couplings of the new particles to matter (instead of just ad-hoc interactions to address the anomalies). The flavor symmetries may then relate the couplings of different quark generations, and thus predict correlations between new physics contributions to B-meson and kaon decays. One nice example is illustrated in this plot:

The observable RD(*) parametrizes the preference for B→D 𝜏 𝜈 over similar decays with electrons and muons, and its measurement by the BaBar collaboration deviates from the Standard Model prediction by roughly 3 sigma. The plot shows that, in a model based on U(2)xU(2) flavor symmetry, a significant contribution to RD(*) generically implies a large enhancement of BR(K+ → 𝝿+ 𝜈 𝜈), unless the model parameters are tuned to avoid that. The anomalies in the B→K(*) 𝜇 𝜇 decays can also be correlated with large effects in K+ → 𝝿+ 𝜈 𝜈, see here for an example. Finally, in the presence of new light invisible particles, such as axions, the NA62 observations can be polluted by exotic decay channels, such as K+ → 𝝿+ axion.

The K+ → 𝝿+ 𝜈 𝜈 decay is by no means the magic bullet that will inevitably break the Standard Model. It should be seen as one piece of a larger puzzle that may or may not provide crucial hints about new physics. For the moment, NA62 has analyzed only a small batch of data collected in 2016, and their error bars are still larger than those of BNL-E949. That should change soon when the 2017 dataset is analyzed. More data will be acquired this year, with 20 signal events expected before the long LHC shutdown. Simultaneously, another experiment called KOTO studies an even rarer process where neutral kaons undergo the CP-violating decay KL → 𝝿0 𝜈 𝜈, which probes the imaginary part of the effective operator written above. As I wrote recently, my feeling is that low-energy precision experiments are currently our best hope for a better understanding of fundamental interactions, and I'm glad to see a good pace of progress on this front.

Sunday, 1 April 2018

Singularity is now

Artificial intelligence (AI) is entering our lives. It's been 20 years now since the watershed moment of Deep Blue versus Garry Kasparov. Today, people study the games of AlphaGo against itself to get a glimpse of what a superior intelligence would be like. But at the same time AI is getting better at copying human behavior. Many Apple users have got emotionally attached to Siri. Computers have not only learnt to drive cars, but also not to slow down when a pedestrian is crossing the road. The progress is very well visible to the blogging community. Bots commenting under my posts have evolved well past the !!!buy!!!viagra!!!cialis!!!hot!!!naked!!! sort of thing. Now they refer to the topic of the post, drop an informed comment, an interesting remark, or a relevant question, before pasting a link to a revenge porn website. Sometimes it's really a pity to delete those comments, as they can be more to-the-point than those written by human readers.

AI is also entering the field of science at an accelerated pace, and particle physics is, as usual, in the avant-garde. It's not a secret that physics analyses for the LHC papers (even if finally signed by 1000s of humans) are in reality performed by neural networks, which are just beefed-up versions of Alexa developed at CERN. The hottest topic in experimental high-energy physics is now machine learning, where computers teach humans the optimal way of clustering jets, or telling quarks from gluons. The question is when, not if, AI will become sophisticated enough to perform the creative work of theoreticians.

It seems that the answer is now.

Some of you might have noticed a certain Alan Irvine, affiliated with the Los Alamos National Laboratory, regularly posting on arXiv single-author theoretical papers on fashionable topics such as the ATLAS diphoton excess, LHCb B-meson anomalies, DAMPE spectral feature, etc. Many of us have received emails from this author requesting citations. Recently I got one myself; it seemed overly polite, but otherwise it didn't differ in relevance or substance from other similar requests. During the last two and a half years, A. Irvine has accumulated a decent h-index of 18. His papers have been submitted to prestigious journals in the field, such as PRL, JHEP, or PRD, and some of them were even accepted after revisions. The scandal broke out a week ago when a JHEP editor noticed that an extensive revision, together with a long cover letter, was submitted within 10 seconds of receiving the referee's comments. Upon investigation, it turned out that A. Irvine never worked at Los Alamos, nobody in the field has ever met him in person, and the IP from which the paper was submitted was that of the well-known Ragnarok Thor server. A closer analysis of his past papers showed that, although linguistically and logically correct, they were merely a compilation of equations and text from the previous literature without any original addition.

Incidentally, arXiv administrators have been aware that, for a few years now, all source files in the daily hep-ph listings have been downloaded for an unknown purpose by automated bots. When you have excluded the impossible, whatever remains, however improbable, must be the truth. There is no doubt that A. Irvine is an AI bot trained on real hep-ph input to produce genuine-looking particle theory papers.

The works of A. Irvine have been quietly removed from arXiv and journals, but difficult questions remain. What was the purpose of it? Was it a spoof? A parody? A social experiment? A Facebook research project? A Russian provocation?  And how could it pass unnoticed for so long within  the theoretical particle community?  What's most troubling is that, if there was one, there can easily be more. Which other papers on arXiv are written by AI? How can we recognize them?  Should we even try, or maybe the dam is already broken and we have to accept the inevitable?  Is Résonaances written by a real person? How can you be sure that you are real?

Update: obviously, this post is an April Fools' prank. It is absolutely unthinkable that the creative process of writing modern particle theory papers can ever be automatized. Also, the neural network referred to in the LHC papers is nothing like Alexa; it's simply a codename for PhD students.  Finally, I assure you that Résonaances is written by a hum 00105e0 e6b0 343b 9c74 0804 e7bc 0804 e7d5 0804 [core dump]

Wednesday, 21 March 2018

21cm to dark matter

The EDGES discovery of the 21cm absorption line at the cosmic dawn has been widely discussed on blogs and in the popular press. Quite deservedly so. The observation opens a new window on the epoch when the universe as we know it was just beginning. We expect a treasure trove of information about the standard processes happening in the early universe, as well as novel constraints on hypothetical particles that might have been present then. It is not a very long shot to speculate that, if confirmed, the EDGES discovery will be awarded a Nobel prize. On the other hand, the bold claim bundled with the experimental result - that the unexpectedly large strength of the signal is an indication of interaction between ordinary matter and cold dark matter - is very controversial.


But before jumping to dark matter it is worth reviewing the standard physics leading to the EDGES signal. In the lowest-energy (singlet) state, hydrogen may absorb a photon and jump to a slightly excited (triplet) state which differs from the true ground state just by the arrangement of the proton and electron spins. Such transitions are induced by photons of wavelength 21cm, or frequency 1.4 GHz, or energy 5.9 𝜇eV, and they may routinely occur at the cosmic dawn when Cosmic Microwave Background (CMB) photons of the right energy hit neutral hydrogen atoms hovering in the universe. The evolution of the CMB and hydrogen temperatures is shown in the picture here as a function of the cosmological redshift z (large z is early time, z=0 is today). The CMB temperature is shown in red and decreases with time as (1+z) due to the expansion of the universe. The hydrogen temperature in blue is a bit more tricky. At the recombination time around z=1100 most protons and electrons combine to form neutral atoms; however, a small fraction of free electrons and protons survives. Interactions between the electrons and CMB photons via Compton scattering are strong enough to keep the two (and consequently the hydrogen as well) at equal temperatures for some time. However, around z=200 the CMB and hydrogen temperatures decouple, and the latter subsequently decreases much faster with time, as (1+z)^2. At the cosmic dawn, z~17, the hydrogen gas is already 7 times colder than the CMB, after which light from the first stars heats it up and ionizes it again.

The quantity directly relevant for the 21cm absorption signal is the so-called spin temperature Ts, which is a measure of the relative occupation numbers of the singlet and triplet hydrogen states. Just before the cosmic dawn, the spin temperature equals the CMB one, and as a result there is no net absorption or emission of 21cm photons. However, it is believed that the light from the first stars initially lowers the spin temperature down to the hydrogen one. Therefore, there should be absorption of 21cm CMB photons by the hydrogen in the epoch between z~20 and z~15. After taking into account the cosmological redshift, one should now observe a dip in the radio frequencies between 70 and 90 MHz. This is roughly what EDGES finds. The depth of the dip is described by the formula T21 ≈ 0.027 K × xHI × (Ωbh^2/0.023) × [(0.15/Ωmh^2) × (1+z)/10]^1/2 × (1 − TCMB/Ts).
As the spin temperature cannot be lower than that of the hydrogen, the standard physics predicts TCMB/Ts ≲ 7, corresponding to T21 ≳ -0.2 K. The surprise is that EDGES observes a larger dip, T21 ≈ -0.5 K, 3.8 astrosigma away from the predicted value, as if TCMB/Ts were of order 15.
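
Plugging numbers into the formula above (with approximate cosmological parameters as inputs) reproduces both the standard -0.2 K expectation and the EDGES-like -0.5 K dip:

```python
import math

def T21_K(z, Tcmb_over_Ts, omega_m_h2=0.14, omega_b_h2=0.022, x_HI=1.0):
    """Rough 21cm brightness temperature [K] from the formula quoted above."""
    return (0.027 * x_HI * (omega_b_h2 / 0.023)
            * math.sqrt(0.15 / omega_m_h2 * (1 + z) / 10)
            * (1 - Tcmb_over_Ts))

print(f"T21(z=17, TCMB/Ts=7)  ≈ {T21_K(17, 7):+.2f} K")    # ≈ -0.2 K, the standard expectation
print(f"T21(z=17, TCMB/Ts=15) ≈ {T21_K(17, 15):+.2f} K")   # ≈ -0.5 K, the EDGES-like dip
```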

If the EDGES result is taken at face value, it means that TCMB/Ts at the cosmic dawn was much larger than predicted in the standard scenario. Either there was a lot more photon radiation at the relevant wavelengths, or the hydrogen gas was much colder than predicted. Focusing on the latter possibility, one could imagine that the hydrogen was cooled due to interactions with cold dark matter made of relatively light (less than GeV) particles. However, this idea is very difficult to realize in practice, because it requires the interaction cross section to be thousands of barns at the relevant epoch! Not the picobarns typical of WIMPs. Many orders of magnitude more than the total proton-proton cross section at the LHC. Even in nuclear processes such values are rarely seen. And we are talking here about dark matter, whose trademark is interacting weakly. Obviously, the idea runs into all sorts of constraints that have been laboriously accumulated over the years.
       
One can try to save this idea by a series of evasive tricks. If the interaction cross section scales as 1/v^4, where v is the relative velocity between colliding matter and dark matter particles, it could be enhanced at the cosmic dawn when the typical velocities were at their minimum. The 1/v^4 behavior is not unfamiliar, as it is characteristic of the electromagnetic forces in the non-relativistic limit. Thus, one could envisage a model where dark matter has a minuscule electric charge, one thousandth that of the proton or less. This trick buys some mileage, but the obstacles remain enormous. The cross section is still large enough for the dark and ordinary matter to couple strongly during the recombination epoch, contrary to what is concluded from precision observations of the CMB. Therefore the milli-charged particles can constitute only a small fraction of dark matter, less than 1 percent. Finally, one needs to avoid constraints from direct detection, colliders, and emission by stars and supernovae. A plot borrowed from this paper shows that a tiny region of viable parameter space remains around 100 MeV mass and 10^-5 charge, though my guess is that this will also go away upon a more careful analysis.

So, milli-charged dark matter cooling hydrogen does not stand scrutiny as an explanation for the EDGES anomaly. This does not mean that all exotic explanations must be so implausible. Better models are being and will be proposed, and one of them could even be correct. For example, models where new particles lead to an injection of additional 21cm photons at early times seem more encouraging. My bet? Future observations will confirm the 21cm absorption signal, but the amplitude and other features will turn out to be consistent with the standard 𝞚CDM predictions. Given the number of competing experiments in the starting blocks, the issue should be clarified within the next few years. What is certain is that, this time, we will learn a lot whether or not the anomalous signal persists :)

Wednesday, 14 March 2018

Where were we?

Last time this blog was active, particle physics was entering a sharp curve. That the infamous 750 GeV resonance had petered out was not a big deal in itself - one expects these things to happen every now and then. But the lack of any new physics at the LHC when it had already collected a significant chunk of data was a reason to worry. We know that we don't know everything yet about the fundamental interactions, and that there is a deeper layer of reality that needs to be uncovered (at least to explain dark matter, neutrino masses, baryogenesis, inflation, and physics at energies above the Planck scale). For a hundred years, increasing the energy of particle collisions has been the best way to increase our understanding of the basic constituents of nature. However, with nothing at the LHC and the next higher-energy collider decades away, a feeling was growing that progress might stall.

In this respect, nothing much has changed during the time when the blog was dormant, except that these sentiments are now firmly established. Crisis is no longer a whispered word, but it's openly discussed in corridors, on blogs, on arXiv, and in color magazines. The clear message from the LHC is that the dominant paradigms about the physics at the weak scale were completely misguided. The Standard Model seems to be a perfect effective theory at least up to a few TeV, and there is no indication at what energy scale new particles have to show up. While everyone goes through the five stages of grief at their own pace, my impression is that most are already well past denial. The open question is what the next steps should be to make sure that the exploration of fundamental interactions does not halt.

One possible reaction to a crisis is more of the same. Historically, such an approach has often been efficient, for example it worked for a long time in the case of the Soviet economy. In our case one could easily go on with more models, more epicycles, more parameter space, more speculations. But the driving force for the whole SusyWarpedCompositeStringBlackHairyHole enterprise has always been the (small but still) possibility of being vindicated by the LHC. Without serious prospects of experimental verification, model building is reduced to intellectual gymnastics that can hardly stir the imagination. Thus business as usual is not an option in the long run: it couldn't elicit any enthusiasm among physicists or the public, it wouldn't attract bright new students, and thus it would be a straight path to irrelevance.

So, particle physics has to change. On the experimental side we will inevitably see, just for economic reasons, less focus on high-energy colliders and more on smaller experiments. Theoretical particle physics will also have to evolve to remain relevant. Certainly, the emphasis needs to be shifted away from empty speculations in favor of more solid research. I don't pretend to know all the answers or have a clear vision of the optimal strategy, but I see three promising directions.

One is astrophysics, where there are much better prospects of experimental progress. The cosmos is a natural collider that is constantly testing fundamental interactions independently of current fashions or funding agencies. This gives us an opportunity to learn more about dark matter and neutrinos, and also about various hypothetical particles like axions or milli-charged matter. The most recent story of the 21cm absorption signal shows that there are still treasure troves of data waiting for us out there. Moreover, new observational windows keep opening up, as recently illustrated by the nascent gravitational-wave astronomy. This avenue is of course a no-brainer, long explored by particle theorists, but I expect it to gain further in importance in the coming years.

Another direction is precision physics. This, too, has been an integral part of particle physics research for quite some time, but it should grow in relevance. The point is that one can probe very heavy particles, often beyond the reach of present colliders, by precisely measuring low-energy observables. In the most spectacular example, studying proton decay may give insight into new particles with masses of order 10^16 GeV - unlikely ever to be attainable directly. There is a whole array of observables that can probe new physics well beyond the direct LHC reach: a myriad of rare flavor processes, electric dipole moments of the electron and neutron, atomic parity violation, neutrino scattering, and so on. This road may be long and tedious but it is bound to succeed: at some point some experiment somewhere must observe a phenomenon that does not fit into the Standard Model. If we're very lucky, it may be that the anomalies currently observed by LHCb in certain rare B-meson decays are already the first harbingers of a breakdown of the Standard Model at higher energies.

Finally, I should mention formal theoretical developments. The naturalness problems of the cosmological constant and of the Higgs mass may suggest some fundamental misunderstanding of quantum field theory on our part. Perhaps this should not be too surprising. In many ways we have reached an amazing proficiency in QFT when applied to certain precision observables or even to LHC processes. Yet at the same time QFT is often used and taught in the same way as magic in Hogwarts: mechanically, blindly following prescriptions from old dusty books, without a deeper understanding of the sense and meaning. Recent years have seen a brisk development of alternative approaches: a revival of the old S-matrix techniques, new amplitude calculation methods based on recursion relations, but also complete reformulations of the QFT basics demoting sacred cows like fields, Lagrangians, and gauge symmetry. Theory alone rarely leads to progress, but it may help to make more sense of the data we already have. Could a better understanding or a complete reformulation of QFT bring new answers to the old questions? I think that is not impossible.

All in all, there are good reasons to worry, but also tons of new data in store and lots of fascinating questions to answer. How will the B-meson anomalies pan out? What shall we do after we hit the neutrino floor? Will the 21cm observations allow us to understand what dark matter is? Will China build a 100 TeV collider? Or maybe a radio telescope on the Moon instead? Are experimentalists still needed now that we have machine learning? How will physics change with the centre of gravity moving to Asia? I will tell you my take on these and other questions, and highlight old and new ideas that could help us understand nature better. Let's see how far I'll get this time ;)