RÉSONAANCES - Particle Physics Blog

How large is the W mass anomaly (2022-04-19)

<p>Everything is larger in the US: cars, homes, food portions, people. The CDF collaboration from the now defunct Tevatron collider <a href="https://inspirehep.net/literature/2064224">argues</a> that this phenomenon is rooted in fundamental physics: </p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiy6iiAcg8s-PjxOWrbRdfJ-ALAjgJUcYuSJFDZsiI4gvIq0F8qAlcySyuvCUiNkIRzqd63LBgqUHmIgLxMutaRO73zboDvxZxb65leVpit1BRzOmg8zCB43C4gmiq1ptdgwOKWY2Pb1j5P7qeRXi0v3j27EWeP_sl8aq989KiyvmnQYLciNU02_572tA/s2954/WmassExperiments.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1724" data-original-width="2954" height="374" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiy6iiAcg8s-PjxOWrbRdfJ-ALAjgJUcYuSJFDZsiI4gvIq0F8qAlcySyuvCUiNkIRzqd63LBgqUHmIgLxMutaRO73zboDvxZxb65leVpit1BRzOmg8zCB43C4gmiq1ptdgwOKWY2Pb1j5P7qeRXi0v3j27EWeP_sl8aq989KiyvmnQYLciNU02_572tA/w640-h374/WmassExperiments.png" width="640" /></a></div><p>The plot shows the most precise measurements of the mass of the W boson - one of the fundamental particles of the Standard Model. The lone wolf is the new CDF result. It is clear that the W mass is larger around the CDF detector than in the canton of Geneva, and the effect is significant enough to be considered evidence. More quantitatively, the CDF result is </p><ul style="text-align: left;"><li>3.0 sigma above the most precise LHC <a href="https://arxiv.org/abs/1701.07240">measurement</a> by the ATLAS collaboration. </li><li>2.4 sigma above the more recent LHC <a href="https://arxiv.org/abs/2109.01113">measurement</a> by the LHCb collaboration. </li><li>1.7 sigma above the <a href="https://arxiv.org/abs/1302.3415">combined</a> measurements from the four collaborations of the LEP collider. </li></ul><p>All in all, the evidence that the W boson is heavier in the US than in Europe stands firm. (For the sake of the script I will not mention here that the CDF result is also 2.4 sigma larger than the other Tevatron <a href="https://arxiv.org/abs/1310.8628">measurement</a> from the D0 collaboration, and 2.2 sigma larger than... the previous CDF <a href="https://arxiv.org/abs/1203.0275">measurement</a> from 10 years before.) </p><p>But jokes aside, what should we make of the current confusing situation? The tension between CDF and the combination of the remaining m<span style="font-size: xx-small;">W</span> measurements is a whopping 4.1 sigma. What value of m<span style="font-size: xx-small;">W</span> should we then use in the Standard Model fits and new physics analyses? Certainly not the CDF one, some 6.5 sigma away from the Standard Model prediction, because that value does not take into account the input from other experiments. At the same time we cannot just ignore CDF. In the end we do not know for sure who is right and who is wrong here. While most physicists tacitly assume that CDF has made a mistake, it is also conceivable that the other experiments have been suffering from confirmation bias. Finally, a naive combination of all the results is not a sensible option either. 
Indeed, at face value the Gaussian combination leads to m<span style="font-size: xx-small;">W</span> = 80.410(7) GeV. This value is however not very meaningful from the statistical perspective: it's impossible to state, with 68 percent confidence, that the true value of the W mass is between 80.403 and 80.417 GeV. That range doesn't even overlap with either of the most precise measurements from CDF and ATLAS! (One should also be careful with Gaussian combinations because there can be subtle correlations between the different experimental results. Numerically, however, this should not be a big problem in the case at hand, as in the past the W mass results obtained via naive combinations were in fact very close to the more careful averages by the <a href="https://pdglive.lbl.gov/">Particle Data Group</a>.) Due to the disagreement between the experiments, our knowledge of the true value of m<span style="font-size: xx-small;">W </span>is degraded, and the combination should somehow account for that. </p><p>The question of combining information from incompatible measurements is a delicate one, residing at a boundary between statistics, psychology, and art. Contradictory results are rare in collider physics, because of the small number of experiments and the high level of scrutiny. However, they are common in other branches of physics, just to mention the neutron lifetime or the electron g-2 as recent examples. To deal with such unpleasantness, the Particle Data Group developed a totally ad hoc but very useful procedure. The idea is to penalize everyone in a democratic way, assuming that <i>all</i> experimental errors have been underestimated. More quantitatively, one inflates the errors of all the involved results until the χ^2 per degree of freedom in the combination is equal to 1. Applying this procedure to the W mass measurements, it is necessary to inflate the errors by a factor of S=2.1, which leads to m<span style="font-size: xx-small;">W</span> = 80.410(15) GeV. </p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiZR27zo5qafHnbzMP6OO6k_YNAW0RZQcgtiaa-3zEFnYDqcsG07iJQRaiePVmXXU0H4r6NPzopycY8AmY7NLs9Rn_zyz96Yk4nmwphXIJDz0T-ULH6i9F6mHZVwk0AE5YnUEg9rPGnAnKwtdRXcPbu1cRjoYmYMYN-jwrrHPXNgNoBF1d8NKManHfEEA/s2954/WmassCombination.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1898" data-original-width="2954" height="412" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiZR27zo5qafHnbzMP6OO6k_YNAW0RZQcgtiaa-3zEFnYDqcsG07iJQRaiePVmXXU0H4r6NPzopycY8AmY7NLs9Rn_zyz96Yk4nmwphXIJDz0T-ULH6i9F6mHZVwk0AE5YnUEg9rPGnAnKwtdRXcPbu1cRjoYmYMYN-jwrrHPXNgNoBF1d8NKManHfEEA/w640-h412/WmassCombination.png" width="640" /></a></div><p>The inflated result makes more intuitive sense, since the combined 1 sigma band overlaps with the most precise CDF measurement, and lies close enough to the error bars from the other experiments. If you accept that combination, the tension with the Standard Model stands at 3 sigma. This value fairly well represents the current situation: it is large enough to warrant further interest, but not large enough to claim a discovery of new physics beyond the Standard Model. </p><p>The confusion may stay with us for a long time. It will go away if CDF finds an error in their analysis, or if future ATLAS updates shift m<span style="font-size: xx-small;">W</span> significantly upwards.</p>
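<p>For the do-it-yourself crowd, the whole exercise fits in a few lines of Python. A minimal sketch (with the central values and errors, in MeV, of the five measurements discussed above; it ignores the correlations that a careful average would have to deal with):</p>
<pre>
import numpy as np

# W mass measurements in MeV: CDF 2022, ATLAS, D0, LHCb, LEP combination
m = np.array([80433.5, 80370.0, 80375.0, 80354.0, 80376.0])
s = np.array([9.4, 19.0, 23.0, 32.0, 33.0])

w = 1 / s**2                            # inverse-variance weights
mean = np.sum(w * m) / np.sum(w)        # naive Gaussian combination
err = np.sum(w)**-0.5                   # its error, ~7 MeV
chi2 = np.sum(w * (m - mean)**2)        # chi^2 for 4 degrees of freedom
S = np.sqrt(chi2 / (len(m) - 1))        # PDG scale factor, ~2.1

print(f"m_W = {mean:.0f}({err:.0f}) MeV naive, ({S*err:.0f}) MeV inflated")
</pre>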
<p>But the most likely scenario in my opinion is that the Europe/US divide will only grow in time. The CDF result could be eliminated from the combination when other experiments reach a significantly better precision. Unfortunately, this is unlikely to happen in the foreseeable future; new colliders and better theory calculations may be necessary to shrink the error bars well below 10 MeV. The conclusion is that particle physicists should shake hands with their nuclear colleagues and start getting used to the S-factors. </p>

Why is it when something happens it is ALWAYS you, muons? (2021-04-08)

<p>April 7, 2021 was like a good TV episode: high-speed action, plot twists, and a cliffhanger ending. We now <a href="https://arxiv.org/abs/2104.03281">know</a> that the strength of the little magnet inside the muon is described by the g-factor: </p><p style="text-align: center;">g = 2.00233184122(82).</p><p>Any measurement of basic properties of matter is priceless, especially when it comes with this incredible precision. But for a particle physicist the main source of excitement is that this result could herald the breakdown of the Standard Model. The point is that the g-factor, or the magnetic moment, of an elementary particle can be calculated theoretically to very good accuracy. Last year, the <a href="http://arxiv.org/abs/2006.04822">white paper</a> of the <i>Muon g−2 Theory Initiative</i> came up with the consensus value for the Standard Model prediction </p><p style="text-align: center;">g = 2.00233183620(86), </p><p style="text-align: left;">which is significantly smaller than the experimental value. The discrepancy is estimated at 4.2 sigma, assuming the theoretical error is Gaussian and combining the errors in quadrature. </p><p style="text-align: left;">As usual, when we see an experiment and the Standard Model disagree, these 3 things come to mind first: </p><ol style="text-align: left;"><li> Statistical fluctuation. </li><li> Flawed theory prediction. </li><li> Experimental screw-up. </li></ol><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjxXRGm1k0jJ3bg3LomHLjjbzUIWq2IbUbf85OR1RsCizpdM9N7M73OeGi2qPNHVg2bR5EZu3k5GAE6qGC3IbrSQrKIhvsZfpDbJwK_q8g3zZxBSA2DHwpB2hXzQiYoVMRaVO8ibfhYmI8l/s1057/gmuon.png" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="1057" data-original-width="971" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjxXRGm1k0jJ3bg3LomHLjjbzUIWq2IbUbf85OR1RsCizpdM9N7M73OeGi2qPNHVg2bR5EZu3k5GAE6qGC3IbrSQrKIhvsZfpDbJwK_q8g3zZxBSA2DHwpB2hXzQiYoVMRaVO8ibfhYmI8l/s320/gmuon.png" /></a></div>The odds for 1. are extremely low in this case. 3. is not impossible but unlikely as of April 7. Basically the same experiment was repeated twice, first in Brookhaven 20 years ago, and now at Fermilab, yielding very consistent results. 
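<p>By the way, the quoted 4.2 sigma is straightforward to reproduce from the two g values above - a one-liner, treating both errors as Gaussian and independent:</p>
<pre>
g_exp, g_th = 2.00233184122, 2.00233183620   # measurement vs Standard Model
err_exp, err_th = 82e-11, 86e-11             # errors on the last two digits

pull = (g_exp - g_th) / (err_exp**2 + err_th**2)**0.5
print(f"{pull:.1f} sigma")                   # -> 4.2 sigma
</pre>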
One day it would be nice to get an independent confirmation using alternative experimental techniques, but we are not losing any sleep over it. It is fair to say, however, that 2. is not yet written off by most of the community. The process leading to the Standard Model prediction is of enormous complexity. It combines technically challenging perturbative calculations (5-loop QED!), data-driven methods, and non-perturbative inputs from dispersion relations, phenomenological models, and lattice QCD. One especially difficult contribution to evaluate is due to loops of light hadrons (pions etc.) affecting photon propagation. In the white paper, this <i>hadronic vacuum polarization</i> is related by theoretical tricks to the low-energy e+e- → hadrons cross section and determined from experimental data. However, the currently most precise lattice evaluation of the same quantity gives a larger value that would take the Standard Model prediction closer to the experiment. The lattice paper first <a href="http://arxiv.org/abs/2002.12347">appeared</a> a year ago but was only now <a href="https://www.nature.com/articles/s41586-021-03418-1">published</a> in Nature, in a well-timed move that can be compared to an ex crashing a wedding party. Theory and experiment are now locked in a three-way duel, and we are waiting for the shootout to see which theoretical prediction survives. Until this controversy is resolved, there will be a cloud of doubt hanging over every interpretation of the muon g-2 anomaly. <p>But let us assume for a moment that the white paper value is correct. This would be huge, as it would mean that the Standard Model does not fully capture how muons interact with light. The correct interaction Lagrangian would have to be (pardon my Greek)</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhBHE5_vIta2JSkQsfeinSP89kjT-rOioltoIF3ac9zh3-Nqb9YI5XqDpGT15etIqdXHK3TU-axeeP7WQLKyR1P785MRmpgPzkpfcb1F7QUMcrQwkLsoLymnofXxzMP7cbXdQGutlCacZMG/s1334/b2cb2dfa8b74ef0e16f4dcf535ec5409.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="142" data-original-width="1334" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhBHE5_vIta2JSkQsfeinSP89kjT-rOioltoIF3ac9zh3-Nqb9YI5XqDpGT15etIqdXHK3TU-axeeP7WQLKyR1P785MRmpgPzkpfcb1F7QUMcrQwkLsoLymnofXxzMP7cbXdQGutlCacZMG/s320/b2cb2dfa8b74ef0e16f4dcf535ec5409.png" width="320" /></a></div><p>The first term is the renormalizable minimal coupling present in the Standard Model, which gives the Coulomb force and all the usual electromagnetic phenomena. The second term is called the magnetic dipole. It leads to a small shift of the muon g-factor, which can explain the Brookhaven and Fermilab measurements. This is a non-renormalizable interaction, and so it must be an effective description of virtual effects of some new particle from beyond the Standard Model. Theorists have invented countless models for this particle in order to address the old Brookhaven measurement, and the Fermilab update changes little in this enterprise. I will write about it another time. For now, let us just crunch some numbers to highlight one general feature. Even though the scale suppressing the effective dipole operator is in the EeV range, there are indications that the culprit particle is much lighter than that. First, electroweak gauge invariance forces it to be lighter than ~100 TeV in a rather model-independent way. 
Next, in many models contributions to muon g-2 come with a <i>chiral suppression</i> proportional to the muon mass. Moreover, they typically appear at one loop, so the operator will pick up a loop suppression factor unless the new particle is strongly coupled. The same dipole operator as above can be more suggestively recast as </p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiXXaxcoY01KstW8-bPR40Rzvp4C_wu0vZ5usUUgQkr9RhKBnVf75FTkJ0DiHgPBDL0iabTJWhJbjRGt0rk4bo5PBEnPeSbpKCDyTfbTQdyXKaI_nGVQoqwiy56EQ8TSKHnBJiM2Dia_V46/s1064/eq2.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="143" data-original-width="1064" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiXXaxcoY01KstW8-bPR40Rzvp4C_wu0vZ5usUUgQkr9RhKBnVf75FTkJ0DiHgPBDL0iabTJWhJbjRGt0rk4bo5PBEnPeSbpKCDyTfbTQdyXKaI_nGVQoqwiy56EQ8TSKHnBJiM2Dia_V46/s320/eq2.png" width="320" /></a></div><p>The scale 300 GeV appearing in the denominator indicates that the new particle should be around the corner! Indeed, the discrepancy between theory and experiment is <i>larger</i> than the contribution of the W and Z bosons to the muon g-2, so it seems logical to put the new particle near the electroweak scale. That's why the stakes of the April 7 Fermilab announcement are so enormous. If the gap between the Standard Model and experiment is real, the new particles and forces responsible for it should be within reach of the present or near-future colliders. This would open a new experimental era that is almost too beautiful to imagine. And for theorists, it would bring new pressing questions about who ordered it. </p>

April Fools'21: Trouble with g-2 (2021-04-01)

<p><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgsapwk6p8xk4aDV7u2bXjasCMxzxslNg9nvhlyz5naU8pXsy1AxFB9rloM8FefwQRv7oAbsi2P5_wPT8yRy5yLjWkmBhvpRkvmdw1zfAUERwd01qNBvhmBpvgLlC8Yo9f6YGMYl1NLf-hL/s2048/g-2.png" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em; text-align: center;"><img border="0" data-original-height="1388" data-original-width="2048" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgsapwk6p8xk4aDV7u2bXjasCMxzxslNg9nvhlyz5naU8pXsy1AxFB9rloM8FefwQRv7oAbsi2P5_wPT8yRy5yLjWkmBhvpRkvmdw1zfAUERwd01qNBvhmBpvgLlC8Yo9f6YGMYl1NLf-hL/s320/g-2.png" width="320" /></a>On April 7, the <a href="https://muon-g-2.fnal.gov/">g-2 experiment</a> at Fermilab was supposed to reveal their new measurement of the magnetic moment of the muon. *Was*, because the announcement may be delayed for the most bizarre reason. You may have heard that the data are blinded to avoid biasing the outcome. This is now standard practice, but the g-2 collaboration went further: they are unable to unblind the data by themselves, to make sure that there are no leaks or temptations. Instead, the unblinding procedure requires input from an external person, who is one of the Fermilab theorists. How does this work? The experiment measures the frequency of precession of antimuons circulating in a ring. From that and the known magnetic field the sought fundamental quantity - the magnetic moment of the muon, or g-2 in short - can be read off. 
However, the whole analysis chain is performed using a randomly chosen number instead of the true clock frequency. Only at the very end, once all statistical and systematic errors are determined, is the true frequency inserted and the final result uncovered. For that last step they need to type the secret code into this machine, which looks like something out of a 60s movie: </p><p><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhHcQXxEdewHgMdVH3hkDUNJC-33bbWoZ6717FBeRV2NF84DUnbVZMC5deC8RYoyth1jR4gMLmiM8BhyphenhyphennmmW_WeXCyEa2widOlrYTwSQW1S6DqWfGgT-k6cW3vt271LjbHaQjsNKu3gZFZs/" style="margin-left: 1em; margin-right: 1em; text-align: center;"><img alt="" data-original-height="965" data-original-width="1600" height="193" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhHcQXxEdewHgMdVH3hkDUNJC-33bbWoZ6717FBeRV2NF84DUnbVZMC5deC8RYoyth1jR4gMLmiM8BhyphenhyphennmmW_WeXCyEa2widOlrYTwSQW1S6DqWfGgT-k6cW3vt271LjbHaQjsNKu3gZFZs/" width="320" /></a></p><p>The code was picked by the Fermilab theorist, and he is the only person to know it. There's the rub... this theorist now refuses to give away the code. It is not clear why. One time he said he had forgotten the envelope with the code on a train, another time he said the dog had eaten it. For the last few days he has locked himself in his home and completely stopped taking any calls. </p><p>The situation is critical. PhD students from the collaboration are working round the clock to crack the code. They are basically trying out all possible combinations, but the process is painfully slow and may take months, delaying the long-expected announcement. The collaboration even got permission from the Fermilab director to search the office of said theorist. But they only found this piece of paper behind the bookshelf: </p><p><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgj3C16r5RmWXZyy3G0zQh02YUYiHk6NoX4tUPRefcLQEbZSG2muVEYDL3NRyecMHLPED1mjz3IBpQY3hsuLNz1rDxqLum60oxqOwmcenXXTYvWYpXCJKnnIUsftO8JdWgGfA2RrOVwQXWA/" style="margin-left: 1em; margin-right: 1em; text-align: center;"><img alt="" data-original-height="639" data-original-width="960" height="213" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgj3C16r5RmWXZyy3G0zQh02YUYiHk6NoX4tUPRefcLQEbZSG2muVEYDL3NRyecMHLPED1mjz3IBpQY3hsuLNz1rDxqLum60oxqOwmcenXXTYvWYpXCJKnnIUsftO8JdWgGfA2RrOVwQXWA/" width="320" /></a></p><p>It may be that the paper holds a clue about the code. If you have any idea what the code may be, please email fermilab@fnal.gov or just write it in the comments below. </p><p><br /></p><p><b>Update: </b>a part of this post (but strangely enough not all) is an April Fools joke. The new g-2 results are going to be presented on April 7, 2021, as planned. 
The code is OPE, which stands for "operator product expansion", an important technique used in the theoretical calculation of the hadronic corrections to muon g-2: </p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjplVVZw-ugkvC0gh7v2BzjoGU2FkiCO0jHCb1fDBaFEtYNxSSmiDtXv7bBsZxTOPsUuR8q1CwQbaAeBXX5va5Npvkkb9_3EBVXCdNNrfzgoVvfqMnowCKyKPLJaxElCHupNL8BHCxGl_rc/" style="margin-left: 1em; margin-right: 1em;"><img alt="" data-original-height="333" data-original-width="449" height="237" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjplVVZw-ugkvC0gh7v2BzjoGU2FkiCO0jHCb1fDBaFEtYNxSSmiDtXv7bBsZxTOPsUuR8q1CwQbaAeBXX5va5Npvkkb9_3EBVXCdNNrfzgoVvfqMnowCKyKPLJaxElCHupNL8BHCxGl_rc/" width="320" /></a></div>

Thoughts on RK (2021-03-29)

<p><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg6fC9MgElFRHg9w0yL7a-PEs2sztTzcWezc8Vw9GiSjMQ9hzYUB8egvpmI2Qv70p8uFq7opahM7V3svI4MRbGhRIcyMaP4BCfAir2uPAZ7MGaFINUkNkHw1eJvZHUTGj0y-r-4oNhCIoMd/" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em; text-align: center;"><img alt="" data-original-height="630" data-original-width="898" height="224" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg6fC9MgElFRHg9w0yL7a-PEs2sztTzcWezc8Vw9GiSjMQ9hzYUB8egvpmI2Qv70p8uFq7opahM7V3svI4MRbGhRIcyMaP4BCfAir2uPAZ7MGaFINUkNkHw1eJvZHUTGj0y-r-4oNhCIoMd/w272-h224/RK.png" width="272" /></a>The hashtag <a href="https://twitter.com/hashtag/CautiouslyExcited?src=hashtag_click">#CautiouslyExcited</a> is trending on Twitter, in spite of the raging plague. The updated <a href="https://arxiv.org/abs/2103.11769" target="_blank">RK measurement</a> from LHCb has made a big splash and has been covered by every news outlet. R<span style="font-size: xx-small;">K</span> measures the ratio of the B->Kμμ and B->Kee decay probabilities, which the Standard Model predicts to be very close to one. Using all the data collected so far, LHCb instead finds R<span style="font-size: xx-small;">K</span> = 0.846 with an error of 0.044. This is the same central value, with a 30% smaller error, compared to their 2019 result based on half of the data. Mathematically speaking, the update does not much change the global picture of the B-meson anomalies. However, it has an important psychological impact, which goes beyond the PR story of crossing the 3 sigma threshold. Let me explain why. </p><p>For the last few decades, every deviation from the Standard Model prediction in a particle collider experiment would mean one of these 3 things: </p><ol style="text-align: left;"><li>Statistical fluctuation. </li><li>Flawed theory prediction. </li><li>Experimental screw-up. </li></ol><p>In the case of R<span style="font-size: xx-small;">K</span>, option 2. is not a worry. Yes, flavor physics is a swamp full of snake pits; however, in the R<span style="font-size: xx-small;">K</span> ratio the dangerous hadronic uncertainties cancel out to a large extent, so that precise theoretical predictions are possible. Before March 23 the biggest worry was option 1. Indeed, 2-3 sigma fluctuations happen all the time at the LHC, due to the huge number of measurements being taken. </p>
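<p>(A quick back-of-the-envelope check of these numbers: the error shrank by roughly 1/√2, as it should when the dataset doubles, while the central value stayed put. The naive Gaussian pulls in the sketch below slightly overestimate LHCb's likelihood-based significances of 2.5 and 3.1 sigma, because the true errors are asymmetric.)</p>
<pre>
rk = 0.846                            # same central value in 2019 and 2021
errors = {2019: 0.063, 2021: 0.044}   # 2019 error inferred from "30% smaller"

for year, err in errors.items():
    print(year, f"{(1.0 - rk) / err:.1f} sigma")   # -> 2.4 and 3.5 sigma
</pre>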
<p>However, you expect statistical fluctuations to <i>decrease</i> in significance as more data is collected. This is what seems to be happening to the sister R<span style="font-size: xx-small;">D</span> anomaly, and the earlier history of R<span style="font-size: xx-small;">K</span> was not very encouraging either (in the 2019 update the significance neither increased nor decreased). The fact that, this time, the significance of the R<span style="font-size: xx-small;">K</span> anomaly increased more or less as you would expect for a genuine new physics signal makes it unlikely that it is merely a statistical fluctuation. This is the main reason for the excitement you may perceive among particle physicists these days. </p><p><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjxZnJPHePCrPWddCzhZ30ZOYswV3qKd54bYk1Eiphv9rRq4xmYNxX0Zp6u-r7BZh2l8PSTKLjB0aJa8skR7TYi_i1aX9PBL1WMxP-U16ZnFh9kuaEUFZPW3qW8jGbir6bPBPMZTO1EV8Iv/" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em; text-align: center;"><img alt="" data-original-height="1890" data-original-width="2048" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjxZnJPHePCrPWddCzhZ30ZOYswV3qKd54bYk1Eiphv9rRq4xmYNxX0Zp6u-r7BZh2l8PSTKLjB0aJa8skR7TYi_i1aX9PBL1WMxP-U16ZnFh9kuaEUFZPW3qW8jGbir6bPBPMZTO1EV8Iv/" width="260" /></a>On the other hand, option 3. remains a possibility. In their analysis, LHCb reconstructed 3850 B->Kμμ decays vs. 1640 B->Kee decays, but from that they concluded that decays to muons are <i>less </i>probable than those to electrons. This is because one has to take into account the different reconstruction efficiencies for muons and electrons. Estimating those efficiencies is the most difficult ingredient of the measurement, and the LHCb folks have spent many nights of heavy drinking worrying about it. Of course, they have made multiple cross-checks and are quite confident that there is no mistake but... there will always be a shadow of doubt until R<span style="font-size: xx-small;">K</span> is confirmed by an independent experiment. Fortunately for everyone, a verification will be provided by the Belle-II experiment, probably 3-4 years from now. Only when Belle-II sees the same thing will we breathe a sigh of relief and put all our money on option</p><p>4. Physics beyond the Standard Model </p><p>From that point of view, explaining the R<span style="font-size: xx-small;">K</span> measurement is trivial. All we need is to add a new kind of interaction between b- and s-quarks and muons to the Standard Model Lagrangian. 
For example, this 4-fermion contact term will do: </p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhX-XiS1qrE93q1OUKaGFW_f9AiNyZk6XS-kJfzXsxYBqibSR_gbfPiHve-Az5MKBM3iiHq0h8RCNnQe06tf6j0ROmz7wveV2fDBOVF9fUFvVHEXgs5AXSYTs6MxvAOp-maxRAZW-scA0t3/" style="margin-left: 1em; margin-right: 1em;"><img alt="" data-original-height="189" data-original-width="1909" height="32" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhX-XiS1qrE93q1OUKaGFW_f9AiNyZk6XS-kJfzXsxYBqibSR_gbfPiHve-Az5MKBM3iiHq0h8RCNnQe06tf6j0ROmz7wveV2fDBOVF9fUFvVHEXgs5AXSYTs6MxvAOp-maxRAZW-scA0t3/" width="320" /></a></div><p>where Q<span style="font-size: xx-small;">3</span>=(t,b), Q<span style="font-size: xx-small;">2</span>=(c,s), L<span style="font-size: xx-small;">2</span>=(ν<span style="font-size: xx-small;">μ</span>,μ). The Standard Model won't let you have this interaction because it violates one of its founding principles: renormalizability. But we know that the Standard Model is just an effective theory, and that non-renormalizable interactions must exist in nature, even if they are very suppressed so as to be unobservable most of the time. In particular, neutrino oscillations are best explained by certain dimension-5 non-renormalizable interactions. R<span style="font-size: xx-small;">K</span> may be the first evidence that dimension-6 non-renormalizable interactions also exist in nature. The nice thing is that the interaction term above 1) does not violate any existing experimental constraints, 2) explains not only R<span style="font-size: xx-small;">K</span> but also some other 2-3 sigma tensions in the data (R<span style="font-size: xx-small;">K*</span>, P<span style="font-size: xx-small;">5</span>'), and 3) fits well with some smaller 1-2 sigma effects (Bs->μμ, R<span style="font-size: xx-small;">pK</span>,...). The existence of a simple theoretical explanation and a consistent pattern in the data is the other element that prompts cautious optimism. </p><p>The LHC run-3 is coming soon, and with it more data on R<span style="font-size: xx-small;">K</span>. In the shorter term (less than a year?) there will be other important updates (R<span style="font-size: xx-small;">K*</span>, R<span style="font-size: xx-small;">pK</span>) and new observables (R<span style="font-size: xx-small;">ϕ</span>, R<span style="font-size: xx-small;">K*+</span>) probing the same physics. Finally something to wait for. </p>

Death of a forgotten anomaly (2020-08-01)

<div>Anomalies come with a big splash, but often go down quietly. A recent ATLAS measurement, just <a href="http://arxiv.org/abs/2007.14040">posted</a> on arXiv, killed a long-standing and by now almost forgotten anomaly from the LEP collider. LEP was an electron-positron collider operating some time in the late Holocene. 
Its most important legacy is the very precise measurements of the interaction strength between the Z boson and matter, which to this day are unmatched in accuracy. In the second stage of the experiment, called LEP-2, the collision energy was gradually raised to about 200 GeV, so that pairs of W bosons could be produced. The experiment was able to measure the branching fractions for W decays into electrons, muons, and tau leptons. These are precisely predicted by the Standard Model: they should be equal to 10.8%, independently of the flavor of the lepton (up to a very small correction due to the lepton masses). However, LEP-2 found </div><div><br /></div><div style="text-align: center;"><font color="#2b00fe">Br(W → τν)/Br(W → eν) = 1.070 ± 0.029, Br(W → τν)/Br(W → μν) = 1.076 ± 0.028.</font></div><div style="text-align: center;"><br /></div><div>While the decays to electrons and muons conformed very well to the Standard Model predictions, there was a 2.8 sigma excess in the tau channel. The question was whether it was simply a statistical fluctuation or new physics violating the Standard Model's sacred principle of <i>lepton flavor universality</i>. The ratio Br(W → τν)/Br(W → eν) was later measured at the Tevatron, which found no excess, though with larger errors. More recently, there have been hints of large lepton flavor universality violation in B-meson decays, so it was not completely crazy to think that the LEP-2 excess was part of the same story. </div><div><br /></div><div>The resolution came 20 years after LEP-2: there is no large violation of lepton flavor universality in W boson decays. The LHC has already produced hundreds of millions of top quarks, and each of them (as far as we know) creates a W boson in the process of its disintegration. ATLAS used this big sample to compare the W boson decay rate to taus and to muons. Their result: </div><div><br /></div><div style="text-align: center;"><font color="#2b00fe">Br(W → τν)/Br(W → μν) = 0.992 ± 0.013.</font></div><div style="text-align: center;"><font color="#2b00fe"><br /></font></div><div>There is not the slightest hint of an excess here. But what is most impressive is that the error is smaller, by more than a factor of two, than in LEP-2! After the W boson mass, this is another precision measurement where a dirty hadron collider environment achieves better accuracy than an electron-positron machine. </div><div>Yes, more of that :) </div><div><br /></div><div>Thanks to the ATLAS measurement, our knowledge of the W boson couplings has increased substantially, as shown in the picture (errors are 1 sigma): </div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgTnzqpZoy9QVVUJsDHMTjcQ95BqwEKCRIIfNMILwJX5U3WMto75hlrphqGdrrQBZFEpNQTt6qspCnr6HphN3TC8NjMafifoes2j_cFFhXuxnmdOTabbImyHejjHrjfgIU9L9j7CMX3JxG3/s576/WbosonCouplingsToLeptons.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="381" data-original-width="576" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgTnzqpZoy9QVVUJsDHMTjcQ95BqwEKCRIIfNMILwJX5U3WMto75hlrphqGdrrQBZFEpNQTt6qspCnr6HphN3TC8NjMafifoes2j_cFFhXuxnmdOTabbImyHejjHrjfgIU9L9j7CMX3JxG3/s0/WbosonCouplingsToLeptons.png" /></a></div><div><br /></div><div>The current uncertainty is a few per mille. This is still worse than for the Z boson couplings to leptons, where the accuracy is better than one per mille, but we're getting there...</div>
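<div>(The pulls behind these statements are easy to check with a minimal sketch; the 2.8 sigma quoted earlier corresponds to the combination of the two correlated LEP-2 ratios:)</div>
<pre>
# Measured Br ratios vs the lepton-universal expectation of exactly 1
ratios = {
    "LEP-2 tau/e ": (1.070, 0.029),
    "LEP-2 tau/mu": (1.076, 0.028),
    "ATLAS tau/mu": (0.992, 0.013),
}
for label, (r, err) in ratios.items():
    print(label, f"{(r - 1) / err:+.1f} sigma")    # -> +2.4, +2.7, -0.6
</pre>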
<div>Within the present accuracy, the W boson couplings to all leptons are consistent with the Standard Model prediction, and with lepton flavor universality in particular. Some tensions appearing in earlier global fits are all gone. The Standard Model wins again, nothing to see here, we can move on to the next anomaly. </div><div><br /></div>

Hail the XENON excess (2020-06-17)

Where were we... It's been years since particle physics last made an exciting headline. The <a href="http://arxiv.org/abs/2006.09721">result</a> announced today by the XENON collaboration is a welcome breath of fresh air. It's too early to say whether it heralds a real breakthrough, or whether it's another bubble to be burst. But it certainly gives food for thought for particle theorists, enough to keep hep-ph going for the next few months.<br />
<br />
The XENON collaboration was operating a 1-ton xenon detector in an underground lab in Italy. Originally, this line of experiments was devised to search for hypothetical heavy particles constituting dark matter, so-called WIMPs. For that they offer a basically background-free environment, where a signal of dark matter colliding with xenon nuclei would stand out like a lighthouse. However, all WIMP searches so far have returned zero, null, and nada. Partly out of boredom and despair, the xenon-based collaborations began thinking out-of-the-box to find out what else their shiny instruments could be good for. One idea <a href="http://arxiv.org/abs/1209.3810">was</a> to search for axions. These are hypothetical superlight and superweakly interacting particles, originally devised to plug a certain theoretical hole in the Standard Model of particle physics. If they exist, they should be copiously produced in the core of the Sun with energies of order a keV. This is too little to perceptibly knock an atomic nucleus, as xenon weighs over a hundred GeV. However, many variants of the axion scenario, in particular the popular <a href="https://inspirehep.net/literature/165061">DFSZ</a> model, predict axions interacting with electrons. Then a keV axion may occasionally hit the cloud of electrons orbiting xenon atoms, sending one to an excited level or ionizing the atom. These electron-recoil events can be identified principally by the ratio of ionization and scintillation signals, which is totally different from that of WIMP-like nuclear recoils. This is no longer a background-free search, as radioactive isotopes present inside the detector may lead to the same signal. Therefore the collaboration has to search for a peak of electron-recoil events at keV energies. <br />
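(Where does the keV scale come from? It is simply the solar core temperature expressed in particle physics units - a one-line sketch:)<br />
<pre>
k_B = 8.617e-5            # Boltzmann constant in eV/K
T_core = 15e6             # temperature in the core of the Sun, in K
print(k_B * T_core)       # -> ~1300 eV, i.e. ~1.3 keV
</pre>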
<br />
This is what they saw in the XENON1T data:<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj5zjHpWF4t720lSSGPGxnN4mJnb47W6oEkaxn1TEYp27XZlReWHxaVAOs9NzwtPyfUBxFKTa4QQH4jBpp8ZiZ0P0Ehb72UzoK2jOB7QfwiFcxF6VN33M4cVu59i5OcQvJ0MBf4ACY0maFz/s1600/XENON1t_spectrum.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="326" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj5zjHpWF4t720lSSGPGxnN4mJnb47W6oEkaxn1TEYp27XZlReWHxaVAOs9NzwtPyfUBxFKTa4QQH4jBpp8ZiZ0P0Ehb72UzoK2jOB7QfwiFcxF6VN33M4cVu59i5OcQvJ0MBf4ACY0maFz/s400/XENON1t_spectrum.png" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Energy spectrum of electron-recoil events measured by the XENON1T experiment. </td></tr>
</tbody></table>
The expected background is approximately flat from 30 keV down to the detection threshold at 1 keV, below which it falls off abruptly. On the other hand, the data seem to show a signal component growing towards low energies, and possibly peaking at 1-2 keV. Concentrating on the 1-7 keV range (so with a bit of cherry-picking), 285 events are observed in the data, compared to 232 events expected from the background-only fit. In purely statistical terms, this is a 3.5 sigma excess.<br />
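(In the Gaussian approximation to Poisson counting, that significance is a one-liner - a sketch:)<br />
<pre>
from math import sqrt

observed, expected = 285, 232                    # events in the 1-7 keV window
print((observed - expected) / sqrt(expected))    # -> ~3.5 sigma
</pre>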
<br />
Assuming it's new physics, what does this mean? XENON shows that there is a flux of light relativistic particles arriving at their detector. The peak of the excess corresponds to the temperature in the core of the Sun (15 million kelvin = 1.3 keV), so our star is a natural source of these particles (but at this point XENON cannot prove they arrive from the Sun). Furthermore, the particles must couple to electrons, because they can knock xenon's electrons off their orbits. Several theoretical models contain particles matching that description.<b> Axions</b> are the primary suspects, because today they are arguably the best-motivated extension of the Standard Model. They are naturally light, because their mass is protected by built-in symmetries, and for the same reason their coupling to matter must be extremely suppressed. For QCD axions the defining feature is their coupling to gluons, but in generic constructions one also finds the pseudoscalar-type interaction between the axion <i>a</i> and electrons <i>e</i>:<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEitq4COf7sXWe1lOhLvesi9L4LWwEUjEV2mlpDvMC9iN-7qQIsdzL7y7zCwSHRWS9I_xtOJeumE3T6GMW9H8D2eqdy3HvKuvhEMmz0_qGWbUoJ7xmHwveZVb2d8E0iVRZpsl03NtcHlJ-hT/s1600/Eq2.png" imageanchor="1"><img border="0" height="32" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEitq4COf7sXWe1lOhLvesi9L4LWwEUjEV2mlpDvMC9iN-7qQIsdzL7y7zCwSHRWS9I_xtOJeumE3T6GMW9H8D2eqdy3HvKuvhEMmz0_qGWbUoJ7xmHwveZVb2d8E0iVRZpsl03NtcHlJ-hT/s200/Eq2.png" width="200" /></a><br />
To explain the excess, one needs the coupling <i>g</i> to be of order 10^-12, which is totally natural in this context. But axions are by no means the only possibility. A related option is the <b>dark photon</b>, which differs from the axion by certain technicalities; in particular, it has spin 1 instead of spin 0. The palette of viable models is certainly much broader, with the details to be found soon on arXiv. <br />
<br />
A distinct avenue to explain the XENON excess is<b> neutrinos</b>. Here, the advantage is that we already know that neutrinos exist, and that the Sun emits some 10^38 of them every second. In fact, the background model used by XENON includes 220 neutrino-induced events in the 1-210 keV range.<br />
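(The 10^38 figure follows from the solar luminosity alone, given that each pp fusion chain releases about 26.7 MeV and spits out two neutrinos - a rough sketch:)<br />
<pre>
L_sun = 3.8e26               # solar luminosity in watts
E_chain = 26.7e6 * 1.6e-19   # ~26.7 MeV released per pp fusion chain, in joules
print(2 * L_sun / E_chain)   # two neutrinos per chain -> ~2*10^38 per second
</pre>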
However, in the standard picture, the interactions of neutrinos with electrons are too weak to explain the excess. To that end one has to either increase their flux (fiddling with the solar model) or increase their interaction strength with matter (going beyond the Standard Model). For example, neutrinos could interact with electrons via a photon intermediary. While neutrinos do not have an electric charge, uncharged particles can still couple to photons via dipole or higher-multipole moments. It is possible that new physics (possibly the same physics that generates the neutrino masses) also pumps up the neutrino magnetic dipole moment. This can be described in a model-independent way by adding a non-renormalizable dimension-7 operator to the Standard Model, e.g.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiQRz76wbe-wFsCq6mupU8el4aXMqcE_RXuZk_Ns5MGhFtEOe1iTkWgqYORhmAtEV62gDkSGe_vBTbhuV1PeQR0tgcF-AQyDIeWTttEd-zV2lFJdD02YfAa7Zns7h84CuRVPDD761NdVcY8/s1600/eq2.png" imageanchor="1"><img border="0" height="51" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiQRz76wbe-wFsCq6mupU8el4aXMqcE_RXuZk_Ns5MGhFtEOe1iTkWgqYORhmAtEV62gDkSGe_vBTbhuV1PeQR0tgcF-AQyDIeWTttEd-zV2lFJdD02YfAa7Zns7h84CuRVPDD761NdVcY8/s640/eq2.png" width="640" /></a></div>
To explain the XENON excess we need <i>d</i> of order 10^-6. That means new physics responsible for the dipole moment must be just around the corner, below 100 TeV or so.<br />
<br />
How confident should we be that it's new physics? Experience has shown again and again that anomalies in new physics searches have, with very large probability, a mundane origin that does not involve exotic particles or interactions. In this case, possible explanations are, in order of likelihood, 1) small contamination of the detector, 2) some other instrumental effect that the collaboration hasn't thought of, 3) the ghost of Roberto Peccei, 4) a genuine signal of new physics. In fact, the collaboration itself is hedging its bets on the first option, as they cannot exclude the presence of a small amount of tritium in the detector, which would produce a signal similar to the observed excess. Moreover, there are a few orange flags for the new physics interpretation:<br />
<ol>
<li> Simplest models explaining the excess are excluded by astrophysical observations. If axions can be produced in the Sun at the rate suggested by the XENON result, they can be produced at even larger rates in hotter stars, e.g. in red giants or white dwarfs. This would lead to excessive cooling of these stars, in conflict with observations. The upper limit on the axion-electron coupling<i> g </i>from red giants is 3*10^-13, which is an order of magnitude below what is needed for the XENON excess. The neutrino magnetic moment explanation faces a similar difficulty. Of course, astrophysical limits reside in a different epistemological reality; it is not unheard of that they are relaxed by an order of magnitude or disappear completely. But certainly this is something to worry about. </li>
<li> At a more psychological level, a small excess over a large background near a detection threshold... sounds familiar. We've seen that before in the case of the DAMA and CoGeNT dark matter experiments, and it didn't end well. </li>
<li>The bump is at 1.5 keV, which is *twice* 750 eV. </li>
</ol>
So, as usual, more data, time, and patience are needed to verify the new physics hypothesis. On the experimental side, the near future looks promising, with the XENONnT, LUX-ZEPLIN, and PandaX-4T experiments all jostling for position to confirm the excess and earn eternal glory. On the theoretical side, the big question is whether the stellar cooling constraints can be avoided, without too many epicycles. It would also be good to know whether the particle responsible for the XENON excess could be related to dark matter and/or to other existing anomalies, in particular to the B-meson ones. For answers, tune in to arXiv, from tomorrow on. 

Both g-2 anomalies (2018-06-20)

<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgIwK6IzHBEPGNx07Q5S86njsjejmxUz-44bCP1PoKqODz8CvFcobG2l_GPc9KBiNOo8fvVWHqVJ-Y-Iat6-csM5QwtZYcAXLwsK64PN9V6sN9iccKJAIoA6k8Oihb5wzLh2ExPuBEFwxDH/s1600/Parker_Alpha.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="210" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgIwK6IzHBEPGNx07Q5S86njsjejmxUz-44bCP1PoKqODz8CvFcobG2l_GPc9KBiNOo8fvVWHqVJ-Y-Iat6-csM5QwtZYcAXLwsK64PN9V6sN9iccKJAIoA6k8Oihb5wzLh2ExPuBEFwxDH/s400/Parker_Alpha.png" width="400" /></a>Two months ago an experiment in Berkeley <a href="http://science.sciencemag.org/content/360/6385/191">announced</a> a new ultra-precise measurement of the fine structure constant α using interferometry techniques. This wasn't much noticed because the paper is not on arXiv, and moreover this kind of research is filed under <i>metrology</i>, which is easily confused with meteorology. So it's worth commenting on why precision measurements of α could be interesting for particle physics. What the Berkeley group really did was to measure the mass of the cesium-133 atom, achieving the relative accuracy of 4*10^-10, that is 0.4 parts per billion (ppb). With that result in hand, α can be determined after a cavalier rewriting of the high-school formula for the Rydberg constant: <br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhie6yOm8WkKsqhOCK-S3cPG3F-9csSDXGixyO4HH7D-03EOUtxohEjgQOksYGSpSNAy8BczNe9x9fVvEnkJ6TfO21ZqPNkGlu-3C-lkslfFNKGOZtmtPiko5RXLJzVDWHHk6b0TdSON-oq/s1600/Eq2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="81" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhie6yOm8WkKsqhOCK-S3cPG3F-9csSDXGixyO4HH7D-03EOUtxohEjgQOksYGSpSNAy8BczNe9x9fVvEnkJ6TfO21ZqPNkGlu-3C-lkslfFNKGOZtmtPiko5RXLJzVDWHHk6b0TdSON-oq/s400/Eq2.png" width="400" /></a></div>
Everybody knows the first 3 digits of the Rydberg constant, Ry≈13.6 eV, but actually it is experimentally known with the fantastic accuracy of 0.006 ppb, and the electron-to-atom mass ratio has also been determined precisely. Thus the measurement of the cesium mass can be translated into a 0.2 ppb measurement of the fine structure constant: 1/α=137.035999046(27).<br />
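(To see how the chain of inputs works, here is a sketch inverting the Rydberg formula Ry = α^2 m_e c/2h with CODATA-style values of the constants; the Berkeley route instead feeds in h/m_Cs from interferometry together with the measured mass ratios:)<br />
<pre>
from math import sqrt

Ry = 10973731.568       # Rydberg constant in 1/m, known to 0.006 ppb
h = 6.62607015e-34      # Planck constant in J*s
m_e = 9.1093837015e-31  # electron mass in kg
c = 299792458.0         # speed of light in m/s

alpha = sqrt(2 * Ry * h / (m_e * c))
print(1 / alpha)        # -> ~137.036
</pre>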
<br />
You may think that this kind of result could appeal only to a Pythonesque chartered accountant. But you would be wrong. First of all, the new result excludes α = 1/137 at 1 million sigma, dealing a mortal blow to the field of epistemological numerology. Perhaps more importantly, the result is relevant for testing the Standard Model. One place where precise knowledge of α is essential is the calculation of the magnetic moment of the electron. Recall that the <i>g-factor</i> is defined as the proportionality constant between the magnetic moment and the angular momentum. For the electron we have<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiBO4NRS45_ypWNa8A06-En5rqXUyoADcYh-py2oxBVSe4G1vrRb_hGjahyphenhyphenUFXJaO2Z9Q9DDlaQFJsA8c8d-X77o1rXF9HNJR1-np464YuThBBu2SPbsw50_UHw4t4R9ime4Jo0R5JolE2_/s1600/Eq1.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="68" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiBO4NRS45_ypWNa8A06-En5rqXUyoADcYh-py2oxBVSe4G1vrRb_hGjahyphenhyphenUFXJaO2Z9Q9DDlaQFJsA8c8d-X77o1rXF9HNJR1-np464YuThBBu2SPbsw50_UHw4t4R9ime4Jo0R5JolE2_/s640/Eq1.png" width="640" /></a>Experimentally, g<i><span style="font-size: x-small;">e</span></i> is one of the most precisely determined quantities in physics, with the most recent <a href="http://arxiv.org/abs/0801.1134">measurement</a> quoting <i>a<span style="font-size: x-small;">e </span></i>= 0.00115965218073(28), that is 0.0001 ppb accuracy on g<i><span style="font-size: x-small;">e, </span></i>or 0.2 ppb accuracy on <i>a<span style="font-size: x-small;">e</span></i>. In the Standard Model, g<i><span style="font-size: x-small;">e</span></i> is calculable as a function of α and other parameters. In the classical approximation g<i><span style="font-size: x-small;">e</span></i>=2, while the one-loop correction proportional to the first power of α was already known in prehistoric times thanks to Schwinger. The dots above summarize decades of subsequent calculations, which now <a href="http://arxiv.org/abs/1712.06060">include</a> O(α^5) terms, that is 5-loop QED contributions! Thanks to these heroic efforts (depicted in the film <i>For a Few Diagrams More</i> - a sequel to Kurosawa's <i>Seven Samurai</i>), the main theoretical uncertainty for the Standard Model prediction of g<i><span style="font-size: x-small;">e</span></i> is due to the experimental error on the value of α. The Berkeley measurement allows one to reduce the relative theoretical error on <i>a<span style="font-size: x-small;">e </span></i>down to 0.2 ppb: <i>a<span style="font-size: x-small;">e</span></i> = 0.00115965218161(23), which matches in magnitude the experimental error and improves by a factor of 3 the previous prediction based on the α measurement with rubidium atoms.<br />
<br />
At the spiritual level, the comparison between theory and experiment provides an impressive validation of quantum field theory techniques up to the 13th significant digit - an unimaginable theoretical accuracy in other branches of science. More practically, it also provides a powerful test of the Standard Model. New particles coupled to the electron may contribute to the same loop diagrams from which g<i><span style="font-size: x-small;">e</span></i> is calculated, and could shift the observed value of <i>a<span style="font-size: x-small;">e</span></i> away from the Standard Model prediction. In many models, corrections to the electron and muon magnetic moments are correlated. The latter famously deviates from the Standard Model prediction by <a href="http://arxiv.org/abs/1706.09436">3.5</a> to <a href="http://arxiv.org/abs/1804.07409">4</a> sigma, depending on who counts the uncertainties. Actually, if you bother to carefully eye the experimental and theoretical values of <i>a<span style="font-size: x-small;">e</span></i> beyond the 10th significant digit you can see that they are also discrepant, this time at the 2.5 sigma level. So now we have <b>two</b> g-2 anomalies! In a picture, the situation can be summarized as follows:<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj5Sqg45kxT4KSgnXmQkJ-3sk9KD79rdGVkWlSJBpb5kkQ1dRgGLXRKrVBrKvB3LGSw4Gwk_dQ9d5j-gDy1pCWSM9ahbGRrZ7E0EGQXP4Ic366StxZP2LYCqgRkRpiCofk8IZKuzD_9-yNh/s1600/gminus2.png" imageanchor="1"><img border="0" height="638" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj5Sqg45kxT4KSgnXmQkJ-3sk9KD79rdGVkWlSJBpb5kkQ1dRgGLXRKrVBrKvB3LGSw4Gwk_dQ9d5j-gDy1pCWSM9ahbGRrZ7E0EGQXP4Ic366StxZP2LYCqgRkRpiCofk8IZKuzD_9-yNh/s640/gminus2.png" width="640" /></a><br />
<div class="separator" style="clear: both; text-align: center;">
</div>
If you're a member of the Holy Church of Five Sigma you can almost preach an unambiguous discovery of physics beyond the Standard Model. However, for most of us this is not the case yet. First, there is still some debate about the theoretical uncertainties entering the muon g-2 prediction. Second, while it is quite easy to fit each of the two anomalies separately, there seems to be no appealing model to fit both of them at the same time. Take for example the very popular toy model with a new massive spin-1 Z' boson (aka the dark photon) kinetically mixed with the ordinary photon. In this case Z' has, much like the ordinary photon, vector-like and universal couplings to electrons and muons. But this leads to a <i>positive</i> contribution to g-2, and it does not fit the <i>a<span style="font-size: x-small;">e</span></i> measurement well, since the latter favors a new negative contribution. In fact, the <i>a<span style="font-size: x-small;">e</span></i> measurement provides the most stringent constraint in part of the parameter space of the dark photon model. Conversely, a Z' boson with purely axial couplings to matter does not fit the data as it gives a negative contribution to g-2, thus making the muon g-2 anomaly worse. What might work is a hybrid model with a light Z' boson having lepton-flavor violating interactions: a vector coupling to muons and a somewhat smaller axial coupling to electrons. But constructing a consistent and realistic model along these lines is a challenge because of other experimental constraints (e.g. from the lack of observation of μ→eγ decays). Some food for thought can be found in <a href="http://arxiv.org/abs/1609.09072">this paper</a>, but I'm not sure if a sensible model exists at the moment. If you know one you are welcome to drop a comment here or a paper on arXiv.<br />
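(The sign structure can be illustrated with the textbook one-loop result: in the limit of a Z' much heavier than the lepton, a vector coupling gives Δa = +g^2 m^2/12π^2 M^2 while an axial coupling gives Δa = -5 g^2 m^2/12π^2 M^2. A sketch with made-up numbers - the signs, not the values, are the point:)<br />
<pre>
from math import pi

def delta_a(g, m_lep, M, axial=False):
    """One-loop Z' contribution to (g-2)/2 in the heavy-mediator limit M >> m_lep."""
    return (-5/3 if axial else 1/3) * g**2 / (4 * pi**2) * (m_lep / M)**2

# toy inputs: coupling g = 0.01 and M = 100 GeV (hypothetical, for illustration)
print(delta_a(0.01, 0.106, 100.0))               # vector: positive shift
print(delta_a(0.01, 0.106, 100.0, axial=True))   # axial: negative shift
</pre>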
<br />
More excitement on this front is in store. The muon g-2 experiment at Fermilab should soon deliver its first results, which may confirm or disprove the muon anomaly. Further progress with the electron g-2 and fine-structure constant measurements is also expected in the near future. The biggest worry is that, if the accuracy improves by another two orders of magnitude, we will need to calculate six-loop QED corrections... 

Can MiniBooNE be right? (2018-06-05)

The experimental situation in neutrino physics is confusing. On one hand, a host of neutrino experiments has established a consistent picture where the neutrino mass eigenstates are mixtures of the 3 Standard Model neutrino flavors <i>νe, νμ, ντ</i>. The measured mass differences between the eigenstates are Δm<span style="font-size: xx-small;">12</span>^2 ≈ 7.5*10^-5 eV^2 and Δm<span style="font-size: xx-small;">13</span>^2 ≈ 2.5*10^-3 eV^2, suggesting that all Standard Model neutrinos have masses below 0.1 eV. That is well in line with cosmological observations which find that the radiation budget of the early universe is consistent with the existence of exactly 3 neutrinos with the sum of the masses less than 0.2 eV. On the other hand, several rogue experiments refuse to conform to the standard 3-flavor picture. The most severe anomaly is the appearance of electron neutrinos in a muon neutrino beam observed by the LSND and MiniBooNE experiments.<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgcFa9VZLO06MkvQJ0xcGZoydUp0x_RBpV3tRoV92kgAhVeT-NLE-_5LNQJ-HX-uPI5iZaWV1no2u5w72ZzU0yvIidowW8q_OpHpGucXQkaJURt3pdq_nC-HoxUNtGrQVqBpNCoEACJs5s3/s1600/miniboom.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="245" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgcFa9VZLO06MkvQJ0xcGZoydUp0x_RBpV3tRoV92kgAhVeT-NLE-_5LNQJ-HX-uPI5iZaWV1no2u5w72ZzU0yvIidowW8q_OpHpGucXQkaJURt3pdq_nC-HoxUNtGrQVqBpNCoEACJs5s3/s320/miniboom.png" width="320" /></a><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgcFa9VZLO06MkvQJ0xcGZoydUp0x_RBpV3tRoV92kgAhVeT-NLE-_5LNQJ-HX-uPI5iZaWV1no2u5w72ZzU0yvIidowW8q_OpHpGucXQkaJURt3pdq_nC-HoxUNtGrQVqBpNCoEACJs5s3/s1600/miniboom.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><br /></a>This story begins in the previous century with the LSND experiment in Los Alamos, which <a href="http://arxiv.org/abs/hep-ex/0104049">claimed </a>to observe <i>νμ</i>→<i>νe</i> antineutrino oscillations with 3.8σ significance. This result was considered controversial from the very beginning due to limitations of the experimental set-up. Moreover, it was inconsistent with the standard 3-flavor picture which, given the masses and mixing angles measured by other experiments, predicted that <i>νμ</i>→<i>νe</i> oscillation should be unobservable in short-baseline (L ≼ km) experiments. The MiniBooNE experiment in Fermilab was conceived to conclusively prove or disprove the LSND anomaly. To this end, a beam of mostly muon neutrinos or antineutrinos with energies E~1 GeV is sent to a detector at the distance L~500 meters away. In general, neutrinos can change their flavor with the probability oscillating as P ~ sin^2(Δm^2 L/4E). If the LSND excess is really due to neutrino oscillations, one expects to observe electron neutrino appearance in the MiniBooNE detector given that L/E is similar in the two experiments. Originally, MiniBooNE was hoping to see a smoking gun in the form of an electron neutrino excess <i>oscillating</i> as a function of L/E, that is peaking at intermediate energies and then <i>decreasing</i> towards lower energies (possibly with several wiggles). That didn't happen. Instead, MiniBooNE finds an excess <i>increasing</i> towards low energies with a similar shape as the backgrounds. Thus the confusion lingers on: the LSND anomaly has neither been killed nor robustly confirmed. <br />
<br />
In spite of these doubts, the LSND and MiniBooNE anomalies continue to arouse interest. This is understandable: as the results do not fit the 3-flavor framework, if confirmed they would prove the existence of new physics beyond the Standard Model. The simplest fix would be to introduce a sterile neutrino <i>νs</i> with mass in the eV ballpark, in which case MiniBooNE would be observing the <i>νμ</i>→<i>νs</i>→<i>νe</i> oscillation chain. With the recent MiniBooNE <a href="http://arxiv.org/abs/1805.12028">update</a> the evidence for the electron neutrino appearance increased to 4.8σ, which has stirred some commotion on Twitter and in the blogosphere. However, I find the excitement a bit misplaced. The anomaly is not really new: similar results showing a 3.8σ excess of <i>νe</i>-like events were already <a href="http://arxiv.org/abs/1207.4809">published</a> in 2012. The increase of the significance is hardly relevant: at this point we know anyway that the excess is not a statistical fluke, while a systematic effect due to underestimated backgrounds would also lead to a growing anomaly. If anything, there are now <i>fewer</i> reasons than in 2012 to believe in the sterile neutrino origin of the MiniBooNE anomaly, as I will argue in the following.<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiTg1aCv9PpLkGGWk20AuLmpCpi3IEqywArncsilvJxP5H7eJIgCbXgdG2HWUeeqSUGIGSfmTZW4XR0hxV-mJpEYu-axxiANNuT8ZIsIn0SDcLH50RYQn5VJnaG9glbmrEv42bnUQ65Vg1W/s1600/nu1.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="205" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiTg1aCv9PpLkGGWk20AuLmpCpi3IEqywArncsilvJxP5H7eJIgCbXgdG2HWUeeqSUGIGSfmTZW4XR0hxV-mJpEYu-axxiANNuT8ZIsIn0SDcLH50RYQn5VJnaG9glbmrEv42bnUQ65Vg1W/s320/nu1.png" width="320" /></a>What has changed since 2012? First, there are new <a href="http://arxiv.org/abs/1303.3953">constraints</a> on <i>νe</i> appearance from the OPERA experiment (yes, this OPERA) who did not see any excess <i>νe</i> in the CERN-to-Gran-Sasso <i>νμ</i> beam. This excludes a large chunk of the relevant parameter space corresponding to large mixing angles between the active and sterile neutrinos. From this point of view, the MiniBooNE update actually adds more stress on the sterile neutrino interpretation by slightly shifting the preferred region towards larger mixing angles... Nevertheless, a not-too-horrible fit to all appearance experiments can still be achieved in the region with Δm^2~0.5 eV^2 and the mixing angle sin^2(2θ) of order 0.01. <br />
<br />
Next, the cosmological constraints have become more stringent. The CMB observations by the Planck satellite do not leave room for an additional neutrino species in the early universe. But for the parameters preferred by LSND and MiniBooNE, the sterile neutrino would be abundantly produced in the hot primordial plasma, thus violating the Planck constraints. To avoid it, theorists need to deploy a battery of tricks (for example, large sterile-neutrino self-interactions), which makes realistic models rather baroque.<br />
<br />
But the killer punch is delivered by disappearance analyses. Benjamin Franklin famously said that only two things in this world were certain: death and probability conservation. Thus whenever an electron neutrino appears in a <i>νμ</i> beam, a muon neutrino must disappear. However, the latter process is severely constrained by long-baseline neutrino experiments, and recently the limits have been further strengthened thanks to the <a href="http://arxiv.org/abs/1710.06488">MINOS</a> and <a href="http://arxiv.org/abs/1605.01990">IceCube</a> collaborations. A recent combination of the existing disappearance results is available in <a href="http://arxiv.org/abs/1803.10661">this paper</a>. In the 3+1 flavor scheme, the probability of a muon neutrino transforming into an electron one in a short-baseline experiment is<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjODiAKC7G47d5uvJnKSy_5hDCwbRp9zDpXEi5gsG7P0Z-LWgSx2bu9fXdk51C_5ZZvoa3A1Ko0MmLVAVB00Wfnh_y5lcwVToDPAZ09LTuCy1SI75B-2aL-1AozcozeUOUJEOXRbpOwRlsU/s1600/eq1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="36" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjODiAKC7G47d5uvJnKSy_5hDCwbRp9zDpXEi5gsG7P0Z-LWgSx2bu9fXdk51C_5ZZvoa3A1Ko0MmLVAVB00Wfnh_y5lcwVToDPAZ09LTuCy1SI75B-2aL-1AozcozeUOUJEOXRbpOwRlsU/s400/eq1.png" width="400" /></a></div>
where <i>U</i> is the 4x4 neutrino mixing matrix. The U<span style="font-size: x-small;">μ4</span> matrix element also controls the νμ survival probability<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhJrHTAuvk2c0M74TKSahv4pwWS2MfSZMBByPr5WxEa-nX0SP7o_6rRhXXaYPbnn97kSRzJRdjCa_2qxvy0-8-jOThLUQAtkOZ1eLLW3ZDEi9v_I3eApmJNY1IpRKKASXjx5zjkMVAcRg8d/s1600/eq2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="48" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhJrHTAuvk2c0M74TKSahv4pwWS2MfSZMBByPr5WxEa-nX0SP7o_6rRhXXaYPbnn97kSRzJRdjCa_2qxvy0-8-jOThLUQAtkOZ1eLLW3ZDEi9v_I3eApmJNY1IpRKKASXjx5zjkMVAcRg8d/s400/eq2.png" width="400" /></a></div>
The <i>νμ</i> disappearance data from MINOS and IceCube imply |U<span style="font-size: x-small;">μ4</span>|≼0.1, while |U<span style="font-size: x-small;">e4</span>|≼0.25 from solar neutrino observations. All in all, the disappearance results imply that the effective mixing angle sin^2(2θ) controlling the <i>νμ</i>→<i>νs</i>→<i>νe</i> oscillation must be much smaller than the ~0.01 required to fit the MiniBooNE anomaly. The disagreement between the appearance and disappearance data had already <a href="http://arxiv.org/abs/1803.10661">existed</a> before, but was actually made <i>worse </i>by the MiniBooNE update.<br />
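The arithmetic behind that statement fits in a few lines (again my own illustration, using the bounds quoted above):
<pre>
# In the 3+1 scheme the effective appearance angle is sin^2(2θμe) = 4 |Ue4|^2 |Uμ4|^2
Umu4_max = 0.1    # from MINOS/IceCube disappearance, as quoted above
Ue4_max = 0.25    # from solar neutrino observations, as quoted above

sin2_2theta_mue = 4 * Ue4_max**2 * Umu4_max**2
print(sin2_2theta_mue)   # 0.0025, well below the ~0.01 preferred by MiniBooNE
</pre>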
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiQ3W6IG93Kd_zaQyH_CfsF9YwWEgXtVkCHFmT4XuvZ7MFEpzD6tQ7tiJvEe7bXegMu6H4HkT-vF1bymHcUYWjmyi9AModnx49ZOaYEj1iU8FJjB4nQNWgR3P60r6T5qBY29kpucQWi1VWs/s1600/nu2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="410" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiQ3W6IG93Kd_zaQyH_CfsF9YwWEgXtVkCHFmT4XuvZ7MFEpzD6tQ7tiJvEe7bXegMu6H4HkT-vF1bymHcUYWjmyi9AModnx49ZOaYEj1iU8FJjB4nQNWgR3P60r6T5qBY29kpucQWi1VWs/s640/nu2.png" width="640" /></a></div>
So the hypothesis of a 4th sterile neutrino does not stand scrutiny as an explanation of the MiniBooNE anomaly. It does not mean that there is no other possible explanation (more sterile neutrinos? non-standard interactions? neutrino decays?). However, any realistic model will have to delve deep into the crazy side in order to satisfy the constraints from other neutrino experiments, flavor physics, and cosmology. Fortunately, the current confusing situation should not last forever. The MiniBooNE photon background from π<span style="font-size: xx-small;">0</span> decays may be clarified by the ongoing MicroBooNE experiment. On the timescale of a few years the controversy should be closed by the SBN program in Fermilab, which will add one near and one far detector to the MicroBooNE beamline. Until then... years of painful experience have taught us to assign a high prior to the Standard Model hypothesis. Currently, by far the most plausible explanation of the existing data is an experimental error on the part of the MiniBooNE collaboration.

WIMPs after XENON1T

After today's update from the XENON1T experiment, the situation on the front of direct detection of WIMP dark matter is as follows<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiRrPuZGonvXHAlInGJWpks488kzAgyMuSg-ha75xHp82eNWrmoAUDNIF9vu6DLMvC3B4Jn82SUzY9XKkWeb-oSIPCOjAs1pJg14ZpJ253ut-nS5vqufwRdK-zjNJcs55lK1xRTtSIAQCs0/s1600/WIMPdarkmatter.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiRrPuZGonvXHAlInGJWpks488kzAgyMuSg-ha75xHp82eNWrmoAUDNIF9vu6DLMvC3B4Jn82SUzY9XKkWeb-oSIPCOjAs1pJg14ZpJ253ut-nS5vqufwRdK-zjNJcs55lK1xRTtSIAQCs0/s640/WIMPdarkmatter.png" width="640" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
</div>
A WIMP can be loosely defined as a dark matter particle with mass in the 1 GeV - 10 TeV range and significant interactions with ordinary matter. Historically, WIMP searches have stimulated enormous interest because this type of dark matter can be easily realized in models with low-scale supersymmetry. Now that we are older and wiser, many physicists would rather put their money on other realizations, such as axions, MeV dark matter, or primordial black holes. Nevertheless, WIMPs remain a viable possibility that should be further explored.<br />
<br />
To detect WIMPs heavier than a few GeV, currently the most successful strategy is to use huge detectors filled with xenon atoms, hoping one of them is hit by a passing dark matter particle. XENON1T beats the competition from the <a href="http://arxiv.org/abs/1608.07648">LUX</a> and <a href="http://arxiv.org/abs/1708.06917">Panda-X</a> experiments because it has a bigger <strike>gun</strike> tank. Technologically speaking, we have come a long way in the last 30 years. XENON1T is now sensitive to 40 GeV WIMPs interacting with nucleons with a cross section of 40 yoctobarn (1 yb = 10^-12 pb = 10^-48 cm^2). This is 6 orders of magnitude better than what the first direct detection experiment in the <a href="http://inspirehep.net/search?p=recid:253104&of=hd">Homestake</a> mine could achieve back in the 80s. Compared to last year, the limit is better by a factor of two at the most sensitive mass point. At high mass the improvement is somewhat smaller than expected due to a small excess of events observed by XENON1T, which is probably just a 1 sigma upward fluctuation of the background.<br />
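For bookkeeping, the unit conversions behind those numbers (a trivial sketch of mine):
<pre>
# 1 yb = 1e-12 pb = 1e-48 cm^2
yb = 1e-48                          # cm^2

xenon1t = 40 * yb                   # current limit at the 40 GeV mass point
homestake = xenon1t * 1e6           # 6 orders of magnitude weaker, 1980s vintage
print(f"XENON1T:   {xenon1t:.0e} cm^2")    # 4e-47 cm^2
print(f"Homestake: {homestake:.0e} cm^2")  # 4e-41 cm^2
</pre>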
<br />
What we are learning about WIMPs is how they can (or cannot) interact with us. Of course, at this point in the game we don't see qualitative progress, but rather incremental quantitative improvements. One possible scenario is that WIMPs experience one of the Standard Model forces, such as the <i>weak</i> or the <i>Higgs</i> force. The former option is strongly constrained by now. If WIMPs had interacted in the same way as our neutrino does, that is by exchanging a Z boson, they would have been found in the Homestake experiment. XENON1T is probing models where the dark matter coupling to the Z boson is suppressed by a factor c<span style="font-size: x-small;">χ</span> ~ 10^-3 - 10^-4 compared to that of an active neutrino. On the other hand, dark matter could be participating in weak interactions only by exchanging W bosons, which can happen for example when it is a part of an SU(2) triplet. In the plot you can see that XENON1T is approaching but not yet excluding this interesting possibility. As for models using the Higgs force, XENON1T is probing the (subjectively) most natural parameter space where WIMPs couple with order one strength to the Higgs field. <br />
<br />
And the arms race continues. The search in XENON1T will go on until the end of this year, although at this point a discovery is extremely unlikely. Further progress is expected on a timescale of a few years thanks to the next-generation xenon detectors XENONnT and <a href="http://arxiv.org/abs/1703.09144">LUX-ZEPLIN</a>, which should achieve yoctobarn sensitivity. <a href="http://arxiv.org/abs/1606.07001">DARWIN</a> may be the ultimate experiment along these lines, in the sense that <strike>there is no prefix smaller than yocto</strike> it will reach the irreducible background from atmospheric neutrinos, after which new detection techniques will be needed. For dark matter mass closer to 1 GeV, several orders of magnitude of pristine parameter space will be covered by the <a href="http://arxiv.org/abs/1610.00006">SuperCDMS</a> experiment. Until then we are kept in suspense. Is dark matter made of WIMPs? And if so, does it stick above the neutrino sea?

Proton's weak charge, and what's it for

<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjguqlab_DVDUXS0JwNxq6APwyJesnVqRll3gfk5YYgtDm6xinshk_WdaKJDUfFznmuJ-lbRZ6k3XkC_RLZrDW9u0dsoxpZhEtWOkr2yfnLRUKZ5l9C6TwTa58dmSpcqodErN0xeyfjbZg1/s1600/qweak.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="203" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjguqlab_DVDUXS0JwNxq6APwyJesnVqRll3gfk5YYgtDm6xinshk_WdaKJDUfFznmuJ-lbRZ6k3XkC_RLZrDW9u0dsoxpZhEtWOkr2yfnLRUKZ5l9C6TwTa58dmSpcqodErN0xeyfjbZg1/s320/qweak.png" width="320" /></a>In the particle world the LHC still attracts the most attention, but in parallel there is ongoing progress at the low-energy frontier. A new episode in that story is the Qweak experiment in Jefferson Lab in the US, which has just <a href="https://www.nature.com/articles/s41586-018-0096-0">published</a> its final results. Qweak was shooting a beam of 1 GeV electrons on a hydrogen (so basically proton) target to determine how the scattering rate depends on the electron's polarization. Electrons and protons interact with each other via the electromagnetic and weak forces. The former is much stronger, but it is parity-invariant, i.e. it does not care about the direction of polarization. On the other hand, since the classic <a href="https://en.wikipedia.org/wiki/Wu_experiment">Wu</a> experiment in 1956, the weak force is known to violate parity. Indeed, the Standard Model postulates that the Z boson, which mediates the weak force, couples with different strength to left- and right-handed particles. The resulting asymmetry between the low-energy electron-proton scattering cross sections of left- and right-handed polarized electrons is predicted to be at the 10^-7 level. That has been experimentally observed many times before, but Qweak was able to measure it with the best precision to date (4% relative), and at a lower momentum transfer than the previous experiments. <br />
<br />
What is the point of this exercise? Low-energy parity violation experiments are often sold as precision measurements of the so-called Weinberg angle, which is a function of the electroweak gauge couplings - the fundamental parameters of the Standard Model. I don't much like that perspective because the electroweak couplings, and thus the Weinberg angle, can be more precisely determined from other observables, and Qweak is far from achieving a competing accuracy. The utility of Qweak is more clearly visible in the effective theory picture. At low energies one can parameterize the relevant parity-violating interactions between protons and electrons by the contact term<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjgT6MT58eawzO1hKq97xiZHnN3EIhIJySyBVcsw-hLpwPMd1MooYEeO6VjVxJvXRXQgQ6LNfDFyaLD2GfZIBwKZhIIl0JyhqBTYuhx3cpIvohBTOTmKAdBroCjBsdaxawxDqnYFG_RPeAR/s1600/eq1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="56" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjgT6MT58eawzO1hKq97xiZHnN3EIhIJySyBVcsw-hLpwPMd1MooYEeO6VjVxJvXRXQgQ6LNfDFyaLD2GfZIBwKZhIIl0JyhqBTYuhx3cpIvohBTOTmKAdBroCjBsdaxawxDqnYFG_RPeAR/s320/eq1.png" width="320" /></a></div>
where v ≈ 246 GeV, and Q<span style="font-size: xx-small;">W</span> is the so-called <i>weak charge</i> of the proton. Such interactions arise in the Standard Model thanks to Z boson exchange between electrons and the quarks that make up the proton. At low energies, the exchange diagram is well approximated by the contact term above with Q<span style="font-size: xx-small;">W</span> = 0.0708 (somewhat smaller than the "natural" value Q<span style="font-size: xx-small;">W</span> ~ 1 due to numerical accidents making the Z boson effectively protophobic). The measured polarization asymmetry in electron-proton scattering can be re-interpreted as a determination of the proton weak charge: <b>Q<span style="font-size: xx-small;">W</span> = 0.0719 ± 0.0045,</b> in perfect agreement with the Standard Model prediction.<br />
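As a sanity check, the pull of the measurement is easy to compute from the numbers above (my own arithmetic):
<pre>
qw_sm = 0.0708                     # Standard Model prediction
qw_exp, err = 0.0719, 0.0045       # Qweak measurement

print(f"pull = {(qw_exp - qw_sm) / err:.1f} sigma")   # 0.2 sigma: perfect agreement indeed
</pre>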
<br />
New physics may affect the magnitude of the proton weak charge in two distinct ways. One is by altering the strength with which the Z boson couples to matter. This happens for example when light quarks mix with their heavier exotic cousins with different quantum numbers, as is often the case in the models from the Randall-Sundrum family. More generally, modified couplings to the Z boson could be a sign of quark compositeness. Another way is by generating new parity-violating contact interactions between electrons and quarks. This can be a result of yet unknown short-range forces which distinguish left- and right-handed electrons. Note that the apparent violation of lepton flavor universality in B-meson decays can be interpreted as a hint of the existence of such forces (although for that purpose the new force carriers do not need to couple to 1st generation quarks). Qweak's measurement puts novel limits on such broad scenarios. Whatever the origin, simple dimensional analysis allows one to estimate the possible change of the proton weak charge as <br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiJmnzctshyphenhyphenMVDdXbj5O-ZJBQCQlwXovEQC-XyLT3XdWCAmYrlJryBSwwvmiVJjo4hEVP_9eM9rdw-jlo31CT_PIttIEwGo7gHT3QrBhTLcUd8yVycRxEu09LbBdOMufEZDzAWc37W-O-Bs/s1600/eq2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="74" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiJmnzctshyphenhyphenMVDdXbj5O-ZJBQCQlwXovEQC-XyLT3XdWCAmYrlJryBSwwvmiVJjo4hEVP_9eM9rdw-jlo31CT_PIttIEwGo7gHT3QrBhTLcUd8yVycRxEu09LbBdOMufEZDzAWc37W-O-Bs/s320/eq2.png" width="320" /></a></div>
where M<span style="font-size: xx-small;">*</span> is the mass scale of new particles beyond the Standard Model, and g<span style="font-size: xx-small;">*</span> is their coupling strength to matter. Thus, Qweak can constrain new weakly coupled particles with masses up to a few TeV, or even 50 TeV particles if they are strongly coupled to matter (g<span style="font-size: xx-small;">*</span>~4π).<br />
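Plugging in numbers (a sketch of mine, under the assumption that the estimate above reads ΔQ<span style="font-size: xx-small;">W</span> ~ g<span style="font-size: xx-small;">*</span>^2 v^2/M<span style="font-size: xx-small;">*</span>^2, and taking the experimental uncertainty as the detectable shift):
<pre>
import math

v = 246.0       # GeV, electroweak scale
dqw = 0.0045    # Qweak uncertainty, taken here as the detectable shift in QW

for gstar in (1.0, 4 * math.pi):
    mstar = gstar * v / math.sqrt(dqw)   # solve dQW ~ g*^2 v^2 / M*^2 for M*
    print(f"g* = {gstar:5.2f}: M* up to ~{mstar / 1e3:.0f} TeV")
# g* = 1: a few TeV; g* = 4π: ~46 TeV, the "50 TeV" ballpark quoted above
</pre>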
<br />
What is the place of Qweak in the larger landscape of precision experiments? One can illustrate it by considering a simple example where heavy new physics modifies only the vector couplings of the Z boson to up and down quarks. The best existing constraints on such a scenario are displayed in this plot:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhAddcEejXyeHmi5WfWz5gGR12y4zoNzDznVkhmzFxNJA_w8eBZabvXk_tE6rw3TxwuGrpWw7dGFnqvGRV0eA6pkEdUb-dmZRnYWYTFJmgBFdAG82jsrWmR8L_iQmBYg-HIBACH0VNPnR9l/s1600/LightQuarkCouplings.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="366" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhAddcEejXyeHmi5WfWz5gGR12y4zoNzDznVkhmzFxNJA_w8eBZabvXk_tE6rw3TxwuGrpWw7dGFnqvGRV0eA6pkEdUb-dmZRnYWYTFJmgBFdAG82jsrWmR8L_iQmBYg-HIBACH0VNPnR9l/s400/LightQuarkCouplings.png" width="400" /></a></div>
From the size of the rotten egg region you see that the Z boson couplings to light quarks are currently known with per-mille accuracy. Somewhat surprisingly, the LEP collider, which back in the 1990s produced tens of millions of Z bosons to precisely study their couplings, is not at all the leader in this field. In fact, better constraints come from precision measurements at very low energies: <a href="http://arxiv.org/abs/1605.07114">pion, kaon, and neutron decays</a>, parity-violating transitions in <a href="http://science.sciencemag.org/content/275/5307/1759">cesium atoms</a>, and the latest Qweak results which also make a difference. The importance of Qweak is even more pronounced in more complex scenarios where the parameter space is multi-dimensional.<br />
<br />
Qweak is certainly not the last salvo on the low-energy frontier. Similar but more precise experiments are being <a href="http://arxiv.org/abs/1802.04759">prepared</a> as we read (I wish the follow-up were called SuperQweak, or SQweak for short). Who knows, maybe quarks are made of more fundamental building blocks at the scale of ~100 TeV, and we'll first find it out thanks to parity violation at very low energies.

Dark Matter goes sub-GeV

It must have been great to be a particle physicist in the 1990s. Everything was simple and clear then. They knew that, at the most fundamental level, nature was described by one of the five superstring theories which, at low energies, reduced to the Minimal Supersymmetric Standard Model. Dark matter also had a firm place in this narrative, being identified with the lightest neutralino of the MSSM. This simple-minded picture strongly influenced the experimental program of dark matter detection, which was almost entirely focused on the so-called WIMPs in the 1 GeV - 1 TeV mass range. Most of the detectors, including the current leaders XENON and LUX, are blind to sub-GeV dark matter, as slow and light incoming particles are unable to transfer a detectable amount of energy to the target nuclei.<br />
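The kinematic reason is easy to quantify (my own illustration of the standard elastic-scattering formula, with ballpark inputs):
<pre>
# Why light dark matter is invisible to xenon detectors (illustration).
# Maximum nuclear recoil energy in elastic scattering: E_max = 2 mu^2 v^2 / m_N,
# with mu the reduced mass and v ~ 1e-3 c the typical galactic velocity.
m_dm = 0.5     # GeV: a sub-GeV dark matter particle
m_xe = 122.0   # GeV: mass of a xenon nucleus
v = 1e-3       # in units of c

mu = m_dm * m_xe / (m_dm + m_xe)        # reduced mass
e_max = 2 * mu**2 * v**2 / m_xe         # max recoil energy, in GeV
print(f"E_max ~ {e_max * 1e9:.0f} eV")  # ~4 eV, far below the O(keV) nuclear recoil threshold
</pre>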
<br />
Sometimes progress consists in realizing that you know nothing, Jon Snow. The lack of new physics at the LHC invalidates most of the historical motivations for WIMPs. Theoretically, the mass of the dark matter particle could be anywhere between 10^-30 GeV and 10^19 GeV. There are myriads of models positioned anywhere in that range, and it's hard to argue with a straight face that any particular one is favored. We now know that we don't know what dark matter is, and that we had better search in many places. If anything, the small-scale problems of the 𝞚CDM cosmological model can be interpreted as a hint against the boring WIMPs and in favor of light dark matter. For example, if it turns out that dark matter has significant (nuclear-size) self-interactions, that can only be realized with sub-GeV particles. <br />
<br />
It takes some time for experiment to catch up with theory, but the process is already well in motion. There is some fascinating progress on the front of ultra-light axion dark matter, which deserves a separate post. Here I want to highlight the ongoing developments in direct detection of dark matter particles with masses between MeV and GeV. Until recently, the only available constraint in that regime was obtained by recasting data from the XENON10 experiment - the grandfather of the currently operating XENON1T. In XENON detectors there are two ingredients of the signal generated when a target nucleus is struck: <i>ionization electrons </i>and <i>scintillation photons. </i>WIMP searches require both to discriminate signal from background. But MeV dark matter interacting with electrons could eject electrons from xenon atoms without producing scintillation. In the standard analysis, such events would be discarded as background. However, <a href="http://arxiv.org/abs/1206.2644">this paper</a> showed that, recycling the available XENON10 data on ionization-only events, one can exclude dark matter in the 100 MeV ballpark with the cross section for scattering on electrons larger than ~0.01 picobarn (10^-38 cm^2). This already has non-trivial consequences for concrete models; for example, a part of the parameter space of milli-charged dark matter is currently best constrained by XENON10. <br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjxrNiDHvJrZaji5vH6gCWF5Anr_86syxywp8GNFpziXanZsJrDFA5PHkDVUfq3GETHjqK-Yy-e4X5ZsLqWpAPus4lkInTSrFR2wE-Z8MxP6AoK4koDCjp8DdVlVpD-Df53YwwhehTSnl5R/s1600/MeVdarkmatter.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="256" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjxrNiDHvJrZaji5vH6gCWF5Anr_86syxywp8GNFpziXanZsJrDFA5PHkDVUfq3GETHjqK-Yy-e4X5ZsLqWpAPus4lkInTSrFR2wE-Z8MxP6AoK4koDCjp8DdVlVpD-Df53YwwhehTSnl5R/s400/MeVdarkmatter.png" width="400" /></a>It is remarkable that so much useful information can be extracted by basically misusing data collected for another purpose (earlier this year the DarkSide-50 <a href="http://arxiv.org/abs/1802.06998">recast</a> their own data in the same manner, excluding another chunk of the parameter space). Nevertheless, dedicated experiments will soon be taking over. Recently, two collaborations published first results from their prototype detectors: one is <a href="http://arxiv.org/abs/1804.00088">SENSEI</a>, which uses 0.1 gram of silicon CCDs, and the other is <a href="http://arxiv.org/abs/1804.10697">SuperCDMS</a>, which uses 1 gram of silicon semiconductor. Both are sensitive to eV energy depositions, thanks to which they can extend the search region to lower dark matter mass regions, and set novel limits in the virgin territory between 0.5 and 5 MeV. A compilation of the existing direct detection limits is shown in the plot. As you can see, above 5 MeV the tiny prototypes cannot yet beat the XENON10 recast. But that will certainly change as soon as full-blown detectors are constructed, after which the XENON10 sensitivity should be improved by several orders of magnitude.<br />
<br />
Should we be restless waiting for these results? Well, for any single experiment the chances of finding nothing are immensely larger than those of finding something. Nevertheless, the technical progress and the widening scope of searches offer some hope that the dark matter puzzle may be solved soon.

Massive Gravity, or You Only Live Twice

<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiaOc8D7AryVBLPKflFBFMXSGFaX3hSSdXWdH0fE0QacqPtTbgDSLHX5smm20Os3qNE9xed9L7_9sOn-pZni3n1Czfw8gTJO3XwCfBenBaOVirTQtcoj7TIEeHfyIvNK4fas06fMyemuWYv/s1600/You-Only-Live-Twice.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="138" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiaOc8D7AryVBLPKflFBFMXSGFaX3hSSdXWdH0fE0QacqPtTbgDSLHX5smm20Os3qNE9xed9L7_9sOn-pZni3n1Czfw8gTJO3XwCfBenBaOVirTQtcoj7TIEeHfyIvNK4fas06fMyemuWYv/s320/You-Only-Live-Twice.jpg" width="320" /></a>Proving Einstein wrong is the ultimate ambition of every crackpot and physicist alike. In particular, Einstein's theory of gravitation - general relativity - has been a victim of constant harassment. That is to say, it is trivial to modify gravity at large energies (short distances), for example by embedding it in string theory, but it is notoriously difficult to change its long-distance behavior. At the same time, motivations to keep trying go beyond intellectual gymnastics. For example, the accelerated expansion of the universe may be a manifestation of modified gravity (rather than of a small cosmological constant). <br />
<br />
In Einstein's general relativity, gravitational interactions are mediated by a massless spin-2 particle - the so-called <i>graviton</i>. This is what gives it its hallmark properties: the long range and the universality. One obvious way to screw with Einstein is to add mass to the graviton, as entertained already in 1939 by Fierz and Pauli. The Particle Data Group <a href="http://pdglive.lbl.gov/Viewer.action">quotes</a> the constraint <i>m ≤</i> 6*10^−32 eV, so we are talking about a De Broglie wavelength comparable to the size of the observable universe. Yet even that teeny mass may cause massive troubles. In 1970 the Fierz-Pauli theory was killed by the van Dam-Veltman-Zakharov (vDVZ) discontinuity. The problem stems from the fact that a massive spin-2 particle has 5 polarization states (0,±1,±2) unlike a massless one which has only two (±2). It turns out that the polarization-0 state couples to matter with similar strength to the usual polarization ±2 modes, even in the limit where the mass goes to zero, and thus mediates an additional force which differs from the usual gravity. One finds that, in massive gravity, light bending would be 25% smaller, in conflict with the very precise observations of stars' deflection around the Sun. vDV concluded that "the graviton has rigorously zero mass". Dead for the first time... <br />
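To see just how teeny, one can convert the mass limit into a length scale (my own arithmetic, using ħc ≈ 1.97*10^-7 eV*m):
<pre>
hbar_c = 1.973e-7    # eV * m
m_g = 6e-32          # eV, the PDG limit quoted above

print(f"lambda ~ {hbar_c / m_g:.0e} m")   # ~3e24 m, hundreds of millions of light years
</pre>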
<br />
The second coming was heralded soon after by Vainshtein, who noticed that the troublesome polarization-0 mode can be shut off in the proximity of stars and planets. This can happen in the presence of graviton self-interactions of a certain type. Technically, what happens is that the polarization-0 mode develops a background value around massive sources which, through the derivative self-interactions, renormalizes its kinetic term and effectively diminishes its interaction strength with matter. See <a href="http://arxiv.org/abs/1304.7240">here</a> for a nice review and more technical details. Thanks to the Vainshtein mechanism, the usual predictions of general relativity are recovered around large massive sources, which is exactly where we can best measure gravitational effects. The possible self-interactions leading to a healthy theory without ghosts have been <a href="http://arxiv.org/abs/1011.1232">classified</a>, and go under the name of the dRGT massive gravity.<br />
<br />
There is however one inevitable consequence of the Vainshtein mechanism. The graviton self-interaction strength grows with energy, and at some point becomes inconsistent with the unitarity limits that every quantum theory should obey. This means that massive gravity is necessarily an effective theory with a limited validity range and has to be replaced by a more fundamental theory at some cutoff scale 𝞚. This is of course nothing new for gravity: the usual Einstein gravity is also an effective theory valid at most up to the Planck scale M<span style="font-size: xx-small;">Pl</span>~10^19 GeV. But for massive gravity the cutoff depends on the graviton mass and is much smaller for realistic theories. At best,<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh2nOw1ePTDz-QdWOMp_wHjesl3fBNVKrtitoVGZ5hlf6-cBTWa3oSSy02Au5knVVAxawAlqZHg_mGyrqWOZRT8OM8f7CYX3O78D9JGeseoJSWSAZGYD5koWRuhiixSBBNkgKJ_n74V-ito/s1600/massivegravitycutoff.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="35" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh2nOw1ePTDz-QdWOMp_wHjesl3fBNVKrtitoVGZ5hlf6-cBTWa3oSSy02Au5knVVAxawAlqZHg_mGyrqWOZRT8OM8f7CYX3O78D9JGeseoJSWSAZGYD5koWRuhiixSBBNkgKJ_n74V-ito/s640/massivegravitycutoff.png" width="640" /></a></div>
So the massive gravity theory in its usual form cannot be used at distance scales shorter than ~300 km. For particle physicists that would be a disaster, but for cosmologists this is fine, as one can still predict the behavior of galaxies, stars, and planets. While the theory certainly cannot be used to describe the results of table-top experiments, it is relevant for the motion of celestial bodies in the Solar System. Indeed, lunar laser ranging experiments or precision studies of Jupiter's orbit are interesting <a href="http://arxiv.org/abs/1606.08462">probes</a> of the graviton mass.<br />
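The numbers work out as follows (my own sketch, assuming the cutoff formula above reads 𝞚<span style="font-size: x-small;">max</span> = (m^2 M<span style="font-size: xx-small;">Pl</span>)^(1/3) with the reduced Planck mass):
<pre>
hbar_c = 1.973e-16     # GeV * m
m_g = 1e-41            # GeV, i.e. ~1e-32 eV
m_pl = 2.4e18          # GeV, reduced Planck mass

cutoff = (m_g**2 * m_pl) ** (1 / 3)       # ~6e-22 GeV
print(f"breakdown distance ~ {hbar_c / cutoff / 1e3:.0f} km")   # ~300 km
</pre>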
<br />
Now comes the latest twist in the story. Some time ago <a href="http://arxiv.org/abs/hep-th/0602178">this paper</a> showed that not everything is allowed in effective theories. Assuming the full theory is unitary, causal and local implies non-trivial constraints on the possible interactions in the low-energy effective theory. These techniques are suitable to constrain, via dispersion relations, derivative interactions of the kind required by the Vainshtein mechanism. Applying them to dRGT gravity one <a href="http://arxiv.org/abs/1710.02539">finds</a> that it is inconsistent to assume the theory is valid all the way up to 𝞚<span style="font-size: x-small;">max</span>. Instead, it must be replaced by a more fundamental theory already at a much lower cutoff scale, parameterized as 𝞚 = g*^1/3 𝞚<span style="font-size: x-small;">max</span> (the parameter g* is interpreted as the coupling strength of the more fundamental theory). The allowed parameter space in the g*-m plane is shown in this plot:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi0S5KRXjdwjhfxowCAw1zJdxc2RhaMOqIU0SdBlUb2uLmdzfkywscJpzyx-BfOJGO2vaOYp_-w1i9rm8-X7wEvO175nHEq2uJ0LU_nug-qRZXtLFov8jWKF28nTkzaRIRI9kEtFrWoWhk5/s1600/massivegravity.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="241" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi0S5KRXjdwjhfxowCAw1zJdxc2RhaMOqIU0SdBlUb2uLmdzfkywscJpzyx-BfOJGO2vaOYp_-w1i9rm8-X7wEvO175nHEq2uJ0LU_nug-qRZXtLFov8jWKF28nTkzaRIRI9kEtFrWoWhk5/s400/massivegravity.png" width="400" /></a></div>
<br />
Massive gravity must live in the lower left corner, outside the gray area excluded theoretically and where the graviton mass satisfies the experimental upper limit <i>m</i>~10^−32 eV. This implies g* ≼ 10^-10, and thus the validity range of the theory is some 3 orders of magnitude lower than 𝞚max. In other words, massive gravity is not a consistent effective theory at distance scales below ~1 million km, and thus cannot be used to describe the motion of falling apples, GPS satellites or even the Moon. In this sense, it's not much of a competition to, say, Newton. Dead for the second time. <br />
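Continuing the sketch from above (again mine, not the paper's), the suppression is easy to check:
<pre>
gstar = 1e-10
supp = gstar ** (1 / 3)                    # ~5e-4, i.e. "3 orders of magnitude"
print(f"cutoff suppression ~ {supp:.0e}")
print(f"breakdown distance ~ {300 / supp:.0e} km")   # ~1e6 km
</pre>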
<br />
Is this the end of the story? For the third coming we would need a more general theory with additional light particles beyond the massive graviton, which is consistent theoretically in a larger energy range, realizes the Vainshtein mechanism, and is in agreement with the current experimental observations. This is hard but not impossible to imagine. Whatever the outcome, what I like in this story is the role of theory in driving the progress, which is rarely seen these days. In the process, we have understood a lot of interesting physics whose relevance goes well beyond one specific theory. So the trip was certainly worth it, even if we find ourselves back at the departure point.

Per kaons ad astra

NA62 is a precision experiment at CERN. From their name you wouldn't suspect that they're doing anything noteworthy: the collaboration was running in the contest for the most unimaginative name, only narrowly losing to CMS... NA62 employs an intense beam of charged kaons to search for the very rare decay <i>K</i>+ → 𝝿+ 𝜈 𝜈. The Standard Model predicts the branching fraction BR(<i>K</i>+ → 𝝿+ 𝜈 𝜈) = 8.4x10^-11 with a small, 10% theoretical uncertainty (precious stuff in the flavor business). The previous <a href="http://arxiv.org/abs/arXiv:0903.0030">measurement</a> by the BNL-E949 experiment reported BR(<i>K</i>+ → 𝝿+ 𝜈 𝜈) = (1.7 ± 1.1)x10^-10, consistent with the Standard Model, but still leaving room for large deviations. NA62 is expected to pinpoint the decay and measure the branching fraction with a 10% accuracy, thus severely constraining new physics contributions. The wires, pipes, and gory details of the analysis were nicely <a href="http://www.science20.com/tommaso_dorigo/na62_places_bid_for_future_observation_of_superrare_kaon_decay-231426">summarized</a> by Tommaso. Let me jump directly to explaining what it is good for from the theory point of view.<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhmL9DcpeSlDgDLM2Zp_C2N6_in1MMyI2kReFF-3a0eAuwYuwCmD3bEAlmJKIZXfVGjayhL5Oa-F_dYwdwz832OJvqEJQgmjJrLmpXdPo1qpsY9_1kMXG0BhF7ixPnwOWy_zvcjdNT7N7l2/s1600/na62_1.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="56" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhmL9DcpeSlDgDLM2Zp_C2N6_in1MMyI2kReFF-3a0eAuwYuwCmD3bEAlmJKIZXfVGjayhL5Oa-F_dYwdwz832OJvqEJQgmjJrLmpXdPo1qpsY9_1kMXG0BhF7ixPnwOWy_zvcjdNT7N7l2/s200/na62_1.jpg" width="200" /></a>To this end it is useful to adopt the effective theory perspective. At a more fundamental level, the decay occurs due to the strange quark inside the kaon undergoing the transformation <i>sbar</i> → <i>dbar</i> 𝜈 𝜈<i>bar</i>. In the Standard Model, the amplitude for that process is dominated by one-loop diagrams with W/Z bosons and heavy quarks. But kaons live at low energies and do not really see the fine details of the loop amplitude. Instead, they effectively see the 4-fermion contact interaction:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjKc0l8dwu_DrgVGDfPBlYrFpsuhdBe6SZA0z8WY7FtJHonXHu7pKg20nDxXAu18_hS_S7dXnezuWggRz32fvDHHZ3_ETwO7H3hP_EzUoqb4iHUdFa_w6X5g1VBG7wET80OpgTMdbYHSOfZ/s1600/Leffkaon.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="40" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjKc0l8dwu_DrgVGDfPBlYrFpsuhdBe6SZA0z8WY7FtJHonXHu7pKg20nDxXAu18_hS_S7dXnezuWggRz32fvDHHZ3_ETwO7H3hP_EzUoqb4iHUdFa_w6X5g1VBG7wET80OpgTMdbYHSOfZ/s320/Leffkaon.png" width="320" /></a></div>
The mass scale suppressing this interaction is quite large, more than 1000 times larger than the W boson mass, which is due to the loop factor and small CKM matrix elements entering the amplitude. The strong suppression is the reason why the <i>K</i>+ → 𝝿+ 𝜈 𝜈 decay is so rare in the first place. The corollary is that even a small new physics effect inducing that effective interaction may dramatically change the branching fraction. Even a particle with a mass as large as 1 PeV coupled to the quarks and leptons with order one strength could produce an observable shift of the decay rate. In this sense, NA62 is a microscope probing physics down to 10^-20 cm distances, or up to PeV energies, well beyond the reach of the LHC or other colliders in this century. If the new particle is lighter, say order TeV mass, NA62 can be sensitive to a tiny milli-coupling of that particle to quarks and leptons.<br />
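For concreteness, here is the conversion between energies and distances, and the current experimental pull (my own arithmetic, using the numbers quoted earlier):
<pre>
# NA62 as a microscope: the distance scale probed by an energy scale E is d ~ hbar*c / E
hbar_c = 1.973e-16    # GeV * m
print(f"d ~ {hbar_c / 1e6 * 100:.0e} cm")   # 1 PeV = 1e6 GeV probes ~2e-20 cm

# And the current experimental status: BNL-E949 versus the Standard Model
br_sm = 8.4e-11                    # Standard Model prediction
br_exp, err = 1.7e-10, 1.1e-10     # BNL-E949 measurement
print(f"pull = {(br_exp - br_sm) / err:.1f} sigma")   # ~0.8: consistent, but loose
</pre>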
<br />
So, from a model-independent perspective, the advantages of studying the <i>K</i>+ → 𝝿+ 𝜈 𝜈 decay are quite clear. A less trivial question is what the future NA62 measurements can teach us about our cherished models of new physics. One interesting application is in the industry of explaining the apparent violation of lepton flavor universality in <i>B</i> → <i>K</i> <i>l</i>+ <i>l</i>-, and <i>B</i> → <i>D l </i>𝜈 decays. Those anomalies involve the 3rd generation bottom quark, thus a priori they do not need to have anything to do with kaon decays. However, many of the existing models introduce flavor symmetries controlling the couplings of the new particles to matter (instead of just ad-hoc interactions to address the anomalies). The flavor symmetries may then relate the couplings of different quark generations, and thus predict correlations between new physics contributions to B meson and to kaon decays. One nice <a href="http://arxiv.org/abs/1705.10729">example</a> is illustrated in this plot:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgx_Q6vetRFda8iY1_mZhAP4QJuNbcj_DvMht3MInytuQXi5YOAPPQcQHS7pMxSIyS13gCOqb-YpcX71sghlIObplQLhCj9dwvcjnTpYrO_nVQBh2Rl_ZJwIM70wP6pA1efIG5A2Q79dXbA/s1600/RDvsKpinunu.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="319" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgx_Q6vetRFda8iY1_mZhAP4QJuNbcj_DvMht3MInytuQXi5YOAPPQcQHS7pMxSIyS13gCOqb-YpcX71sghlIObplQLhCj9dwvcjnTpYrO_nVQBh2Rl_ZJwIM70wP6pA1efIG5A2Q79dXbA/s320/RDvsKpinunu.png" width="320" /></a></div>
<br />
The observable R<span style="font-size: xx-small;">D(*)</span> parametrizes the preference for <i>B</i> → <i>D</i> 𝜏 𝜈 over similar decays with electrons and muons, and its measurement by the BaBar collaboration deviates from the Standard Model prediction by roughly 3 sigma. The plot shows that, in a model based on U(2)xU(2) flavor symmetry, a significant contribution to R<span style="font-size: xx-small;">D(*)</span> generically implies a large enhancement of BR(<i>K</i>+ → 𝝿+ 𝜈 𝜈), unless the model parameters are tuned to avoid that. The anomalies in the <i>B</i> → <i>K</i>(*) 𝜇 𝜇 decays can also be correlated with large effects in <i>K</i>+ → 𝝿+ 𝜈 𝜈, see <a href="http://arxiv.org/abs/1802.00786">here</a> for an example. Finally, in the presence of new light invisible particles, such as axions, the NA62 observations can be polluted by exotic decay channels, <a href="http://arxiv.org/abs/1612.08040">e.g.</a> <i>K</i>+ → 𝝿+<i> axion</i>.<br />
<br />
The <i>K</i>+ → 𝝿+ 𝜈 𝜈 decay is by no means the magic bullet that will inevitably break the Standard Model. It should be seen as one piece of a larger puzzle that may or may not provide crucial hints about new physics. For the moment, NA62 has <a href="https://indico.in2p3.fr/event/16579/contributions/60808/attachments/47182/59257/Moriond_rmarchev.pdf">analyzed</a> only a small batch of data collected in 2016, and their error bars are still larger than those of BNL-E949. That should change soon when the 2017 dataset is analyzed. More data will be acquired this year, with 20 signal events expected before the long LHC shutdown. Simultaneously, another experiment called <a href="https://arxiv.org/abs/1609.03637">KOTO</a> studies an even rarer process where neutral kaons undergo the CP-violating decay <i>K<span style="font-size: xx-small;">L</span></i> → 𝝿<span style="font-size: xx-small;">0</span> 𝜈 𝜈, which probes the imaginary part of the effective operator written above. As I <a href="http://resonaances.blogspot.fr/2018/03/where-were-we.html">wrote</a> recently, my feeling is that low-energy precision experiments are currently our best hope for a better understanding of fundamental interactions, and I'm glad to see a good pace of progress on this front.

Singularity is now

<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh2p_pySWouSYrD4G7ATyxdTAaMi86sbMoHu_4y6RzJcl7ajoRqUeJW9KDrWAXhlDxZF_yWqjk7FDlSJo05c2Do0XUCfuw13yvF5iNA0YoVakqIU12uKgJFui4CgqTktdY7_0Hm7SsIS-Fn/s1600/hal.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="300" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh2p_pySWouSYrD4G7ATyxdTAaMi86sbMoHu_4y6RzJcl7ajoRqUeJW9KDrWAXhlDxZF_yWqjk7FDlSJo05c2Do0XUCfuw13yvF5iNA0YoVakqIU12uKgJFui4CgqTktdY7_0Hm7SsIS-Fn/s400/hal.jpg" width="400" /></a>Artificial intelligence (AI) is entering our lives. It's been 20 years now since the watershed moment of Deep Blue versus Garry Kasparov. Today, people study the games of AlphaGo against itself to get a glimpse of what a superior intelligence would be like. But at the same time AI is getting better at copying human behavior. Many Apple users have got emotionally attached to Siri. Computers have not only learnt to drive cars, but also not to slow down when a pedestrian is crossing the road. The progress is clearly visible to the blogging community. Bots commenting under my posts have evolved well past the !!!buy!!!viagra!!!cialis!!!hot!!!naked!!! sort of thing. Now they refer to the topic of the post, drop an informed comment, an interesting remark, or a relevant question, before pasting a link to a revenge porn website. Sometimes it's really a pity to delete those comments, as they can be more to-the-point than those written by human readers. <br />
<br />
AI is also entering the field of science at an accelerated pace, and particle physics is as usual in the avant-garde. It's not a secret that physics analyses for the LHC papers (even if finally signed by 1000s of humans) are in reality performed by <i>neural networks</i>, which are just beefed-up versions of Alexa developed at CERN. The hottest topic in experimental high-energy physics is now machine learning, where computers teach humans the optimal way of clustering jets, or telling quarks from gluons. The question is <b>when</b>, not if, AI will become sophisticated enough to perform the creative work of theoreticians. <br />
<br />
It seems that the answer is <b>now</b>.<br />
<br />
Some of you might have noticed a certain Alan Irvine, affiliated with the Los Alamos National Laboratory, regularly posting on arXiv single-author theoretical papers on fashionable topics such as the ATLAS diphoton excess, LHCb B-meson anomalies, DAMPE spectral feature, etc. Many of us have received emails from this author requesting citations. Recently I got one myself; it seemed overly polite, but otherwise it didn't differ in relevance or substance from other similar requests. During the last two and a half years, A. Irvine has accumulated a decent h-factor of 18. His papers have been submitted to prestigious journals in the field, such as the PRL, JHEP, or PRD, and some of them were even accepted after revisions. The scandal broke out a week ago when a JHEP editor noticed that the extensive revision, together with a long cover letter, was submitted within 10 seconds of receiving the referee's comments. Upon investigation, it turned out that A. Irvine never worked in Los Alamos, nobody in the field has ever met him in person, and the IP from which the paper was submitted was that of the well-known Ragnarok Thor server. A closer analysis of his past papers showed that, although linguistically and logically correct, they were merely a compilation of equations and text from the previous literature without any original addition. <br />
<br />
Incidentally, arXiv administrators have been aware that, for a few years now, all source files in the daily hep-ph listings have been downloaded for an unknown purpose by automated bots. When you have excluded the impossible, whatever remains, however improbable, must be the truth. There is no doubt that A. Irvine is an AI bot that was trained on the real hep-ph input to produce genuine-looking particle theory papers. <br />
<br />
The works of A. Irvine have been quietly removed from arXiv and journals, but difficult questions remain. What was the purpose of it? Was it a spoof? A parody? A social experiment? A Facebook research project? A Russian provocation? And how could it pass unnoticed for so long within the theoretical particle community? What's most troubling is that, if there was one, there can easily be more. Which other papers on arXiv are written by AI? How can we recognize them? Should we even try, or maybe the dam is already broken and we have to accept the inevitable? Is Résonaances written by a real person? How can you be sure that you are real?<br />
<br />
<i>Update: obviously, this post is an April Fools' prank. It is absolutely unthinkable that the creative process of writing modern particle theory papers can ever be automated. Also, the neural network referred to in the LHC papers is nothing like Alexa; it's simply a codename for PhD students. Finally, I assure you that </i><i>Résonaances is written by a hum </i><span style="background-color: #f8f9fa; font-family: monospace , monospace; font-size: 14px; white-space: pre-wrap;">00105e0 e6b0 343b 9c74 0804 e7bc 0804 e7d5 0804</span><span style="background-color: white; color: #222222; font-family: sans-serif; font-size: 14px;"> </span>[core dump]

21cm to dark matter

The EDGES <a href="https://www.nature.com/articles/nature25792">discovery</a> of the 21cm absorption line at the cosmic dawn has been widely discussed on <a href="https://the-educational-blog.quora.com/The-EDGES-Experiment-This-May-Deserve-Two-Nobel-Prizes">blogs</a> and in the popular <a href="https://www.theguardian.com/science/2018/feb/28/cosmic-dawn-astronomers-detect-signals-from-first-stars-in-the-universe">press</a>. Quite deservedly so. The observation opens a new window on the epoch when the universe as we know it was just beginning. We expect a treasure trove of information about the standard processes happening in the early universe, as well as novel constraints on hypothetical particles that might have been present then. It is not a very long shot to speculate that, if confirmed, the EDGES discovery will be awarded a Nobel prize. On the other hand, the bold claim bundled with their experimental result - that the unexpectedly large strength of the signal is an indication of interaction between the ordinary matter and cold dark matter - is very controversial. <br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiUBZOE-T9bPTDdaVQ2_3s4VotyuAsOGSRS67RLfvODtmCl8fTm5RJyVIzCriwi2vytROEGlZbgdtyej1TB4Ge5UlFABRDFelepiVineOob3aaXHoXKZ74Q1Sr3XOblPAJOnC8iUOo4HFCF/s1600/darkages.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="287" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiUBZOE-T9bPTDdaVQ2_3s4VotyuAsOGSRS67RLfvODtmCl8fTm5RJyVIzCriwi2vytROEGlZbgdtyej1TB4Ge5UlFABRDFelepiVineOob3aaXHoXKZ74Q1Sr3XOblPAJOnC8iUOo4HFCF/s400/darkages.png" width="400" /></a><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiUBZOE-T9bPTDdaVQ2_3s4VotyuAsOGSRS67RLfvODtmCl8fTm5RJyVIzCriwi2vytROEGlZbgdtyej1TB4Ge5UlFABRDFelepiVineOob3aaXHoXKZ74Q1Sr3XOblPAJOnC8iUOo4HFCF/s1600/darkages.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><br /></a>But before jumping to dark matter it is worth reviewing the standard physics leading to the EDGES signal. In the lowest energy (singlet) state, hydrogen may absorb a photon and jump to a slightly excited (triplet) state which differs from the true ground state just by the arrangement of the proton and electron spins. Such transitions are induced by photons of wavelength of 21cm, or frequency of 1.4 GHz, or energy of 5.9 𝜇eV, and they may routinely occur at the cosmic dawn when Cosmic Microwave Background (CMB) photons of the right energy hit neutral hydrogen atoms hovering in the universe. The evolution of the CMB and hydrogen temperatures is shown in the picture here as a function of the cosmological redshift z (large z is early time, z=0 is today). The CMB temperature is red and it decreases with time as (1+z) due to the expansion of the universe. The hydrogen temperature in blue is a bit more tricky. At the recombination time around z=1100 most proton and electrons combine to form neutral atoms, however a small fraction of free electrons and protons survives. Interactions between the electrons and CMB photons via Compton scattering are strong enough to keep the two (and consequently the hydrogen as well) at equal temperatures for some time. However, around z=200 the CMB and hydrogen temperatures decouple, and the latter subsequently decreases much faster with time, as (1+z)^2. At the cosmic dawn, z~17, the hydrogen gas is already 7 times colder than the CMB, after which light from the first stars heats it up and ionizes it again.<br />
<br />
The quantity directly relevant for the 21cm absorption signal is the so-called spin temperature Ts, which is a measure of the relative occupation number of the singlet and triplet hydrogen states. Just before the cosmic dawn, the spin temperature equals the CMB one, and as a result there is no net absorption or emission of 21cm photons. However, it is believed that the light from the first stars initially lowers the spin temperature down to the hydrogen one. Therefore, there should be absorption of 21cm CMB photons by the hydrogen in the epoch between z~20 and z~15. After taking into account the cosmological redshift, one should now observe a dip in the radio frequencies between 70 and 90 MHz. This is roughly what EDGES finds. The depth of the dip is described by the formula:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi7Z5Ol0V5fY9TswVbUpM_THKaBUzmIQJG0kXf0bxC9CMCnT8OR5BAZKk7BWqmBP6dHzkHGSEt61W9gW7zv7m6iHB5_IBTWZeJQXORGS9bnlxfoSR1EX25j4m5HlcAV63wGIcBkdS4HsKcT/s1600/Eq2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="56" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi7Z5Ol0V5fY9TswVbUpM_THKaBUzmIQJG0kXf0bxC9CMCnT8OR5BAZKk7BWqmBP6dHzkHGSEt61W9gW7zv7m6iHB5_IBTWZeJQXORGS9bnlxfoSR1EX25j4m5HlcAV63wGIcBkdS4HsKcT/s320/Eq2.png" width="320" /></a></div>
As the spin temperature cannot be lower than that of the hydrogen, the standard physics predicts TCMB/Ts ≼ 7 corresponding to T21 ≽ -0.2K. The surprise is that EDGES observes a larger dip, T21 ≈ -0.5K, 3.8 astrosigma away from the predicted value, as if TCMB/Ts were of order 15.<br />
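In numbers (my own sketch, assuming the standard approximate form of the formula above, T21 ≈ 27 mK * sqrt((1+z)/10) * (1 - TCMB/Ts) for fully neutral hydrogen; the prefactor depends mildly on the cosmological parameters):
<pre>
import math

def t21(ratio, z=17):
    """Approximate 21cm absorption depth in K, for fully neutral hydrogen."""
    return 0.027 * math.sqrt((1 + z) / 10) * (1 - ratio)

print(f"{t21(7):+.2f} K")    # ~ -0.2 K: the standard expectation
print(f"{t21(15):+.2f} K")   # ~ -0.5 K: the EDGES-like depth
</pre>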
<br />
If the EDGES result is taken at face value, it means that TCMB/Ts at the cosmic dawn was much larger than predicted in the standard scenario. Either there was a lot more photon radiation at the relevant wavelengths, or the hydrogen gas was much colder than predicted. Focusing on the latter possibility, one could imagine that the hydrogen was cooled due to interactions with cold dark matter made of relatively light (less than a GeV) particles. However, this idea is very difficult to realize in practice, because it requires the interaction cross section to be thousands of barns at the relevant epoch! Not picobarns typical for WIMPs. Many orders of magnitude more than the total proton-proton cross section at the LHC. Even in nuclear processes such values are rarely seen. And we are talking here about dark matter, whose trademark is interacting weakly. Obviously, the idea is running into all sorts of constraints that have been laboriously accumulated over the years. <br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhVWXCKChcdgXjuYr4oKjTGoIaVhHavApmah9VCheh7Xr_Q70GmNu2g8DHjFS9loIAd_uLRdqeIjgJWwY1lBej4RPJRfqk3A9eaVOTEUgO1oDsjij5FjQV_5etvAH4bKwrpfYq46o1A379m/s1600/Millicharge.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="397" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhVWXCKChcdgXjuYr4oKjTGoIaVhHavApmah9VCheh7Xr_Q70GmNu2g8DHjFS9loIAd_uLRdqeIjgJWwY1lBej4RPJRfqk3A9eaVOTEUgO1oDsjij5FjQV_5etvAH4bKwrpfYq46o1A379m/s400/Millicharge.png" width="400" /></a>One can try to save this idea by a series of evasive tricks. If the interaction cross section scales as 1/v^4, where v is the relative velocity between colliding matter and dark matter particles, it could be enhanced at the cosmic dawn when the typical velocities were at its minimum. The 1/v^4 behavior is not unfamiliar, as it is characteristic of the electromagnetic forces in the non-relativistic limit. Thus, one could envisage a model where dark matter has a minuscule electric charge, one thousandth or less that of the proton. This trick buys some mileage, but the obstacles remain enormous. The cross section is still large enough for the dark and ordinary matter to couple strongly during the recombination epoch, contrary to what is concluded from precision observations of the CMB. Therefore the milli-charge particles can constitute only a small fraction of dark matter, less then 1 percent. Finally, one needs to avoid constraints from direct detection, colliders, and emission by stars and supernovae. A plot borrowed from <a href="http://arxiv.org/abs/1803.03091">this paper</a> shows that a tiny region of viable parameter space remains around 100 MeV mass and 10^-5 charge, though my guess is that this will also go away upon a more careful analysis.<br />
<br />
So, milli-charged dark matter cooling the hydrogen does not stand up to scrutiny as an explanation for the EDGES anomaly. This does not mean that all exotic explanations must be so implausible. Better models are being proposed and will continue to be, and one of them could even be correct. For example, <a href="http://arxiv.org/abs/1803.07048">models</a> where new particles lead to an injection of additional 21cm photons at early times seem to be more encouraging. My bet? Future observations will confirm the 21cm absorption signal, but the amplitude and other features will turn out to be consistent with the standard 𝞚CDM predictions. Given the number of competing experiments in the starting blocks, the issue should be clarified within the next few years. What is certain is that, this time, we will learn a lot whether or not the anomalous signal persists :)Unknownnoreply@blogger.com12tag:blogger.com,1999:blog-2846514233477399562.post-38851349273784030832018-03-14T19:47:00.000+01:002018-05-09T20:31:11.143+01:00Where were we?Last time this blog was active, particle physics was entering a sharp curve. That the infamous 750 GeV resonance had petered out was not a big deal in itself - one expects these things to happen every now and then. But the lack of any new physics at the LHC when it had already collected a significant chunk of data was a reason to worry. We know that we don't know everything yet about the fundamental interactions, and that there is a deeper layer of reality that needs to be uncovered (at least to explain dark matter, neutrino masses, baryogenesis, inflation, and physics at energies above the Planck scale). For a hundred years, increasing the energy of particle collisions has been the best way to increase our understanding of the basic constituents of nature. However, with nothing at the LHC and the next higher energy collider decades away, a feeling was growing that progress might stall.<br />
<br />
In this respect, nothing much has changed while the blog was dormant, except that these sentiments are now firmly established. Crisis is no longer a whispered word, but it's openly discussed in corridors, on <a href="http://backreaction.blogspot.fr/2018/03/the-multiworse-is-coming.html?spref=tw">blogs</a>, on <a href="https://arxiv.org/abs/1710.07663">arXiv</a>, and in <a href="https://www.economist.com/news/science-and-technology/21734379-no-guts-no-glory-fundamental-physics-frustrating-physicists">glossy magazines</a>. The clear message from the LHC is that the dominant paradigms about the physics at the weak scale were completely misguided. The Standard Model seems to be a perfect effective theory at least up to a few TeV, and there is no indication of the energy scale at which new particles have to show up. While everyone goes through the five stages of grief at their own pace, my impression is that most are already well past denial. The open question is what the next steps should be to make sure that exploration of fundamental interactions will not halt. <br />
<br />
One possible reaction to a crisis is <i>more of the same</i>. Historically, such an approach has often been effective; for example, it worked for a long time in the case of the Soviet economy. In our case one could easily go on with more models, more epicycles, more parameter space, more speculations. But the driving force behind the whole SusyWarpedCompositeStringBlackHairyHole enterprise has always been the (small, but still) possibility of being vindicated by the LHC. Without serious prospects of experimental verification, model building is reduced to intellectual gymnastics that can hardly stir imagination. Thus business as usual is not an option in the long run: it couldn't elicit any enthusiasm among physicists or the public, it wouldn't attract new bright students, and thus it would be a straight path to irrelevance.<br />
<br />
So, particle physics has to change. On the experimental side we will inevitably see, for purely economic reasons, less focus on high-energy colliders and more on smaller experiments. Theoretical particle physics will also have to evolve to remain relevant. Certainly, the emphasis needs to be shifted away from empty speculations in favor of more solid research. I don't pretend to know all the answers or have a clear vision of the optimal strategy, but I see <i>three</i> promising directions.<br />
<br />
One is astrophysics, where the prospects for experimental progress are much better. The cosmos is a natural collider that is constantly testing fundamental interactions independently of current fashions or funding agencies. This gives us an opportunity to learn more about dark matter and neutrinos, and also about various hypothetical particles like axions or milli-charged matter. The most recent story of the 21cm absorption signal shows that there are still treasure troves of data waiting for us out there. Moreover, new observational windows keep opening up, as recently illustrated by the nascent gravitational wave astronomy. This avenue is of course a no-brainer, long explored by particle theorists, but I expect it will further gain in importance in the coming years. <br />
<br />
Another direction is precision physics. This, also, has been an integral part of particle physics research for quite some time, but it should grow in relevance. The point is that one can probe very heavy particles, often beyond the reach of present colliders, by precisely measuring low-energy observables. In the most spectacular example, studying proton decay may give insight into new particles with masses of order 10^16 GeV - unlikely ever to be attainable directly. There is a whole array of observables that can probe new physics well beyond the direct LHC reach: a myriad of rare flavor processes, electric dipole moments of the electron and neutron, atomic parity violation, neutrino scattering, and so on. This road may be long and tedious but it is bound to succeed: at some point some experiment somewhere must observe a phenomenon that does not fit into the Standard Model. If we're very lucky, it may be that the anomalies currently observed by LHCb in certain rare B-meson decays are already the first harbingers of a breakdown of the Standard Model at higher energies.<br />
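To see how proton decay reaches such absurd scales, here is a crude dimensional-analysis estimate in Python. The GUT-scale mass and coupling are illustrative inputs, and all O(1) factors and phase space are ignored:<br />
<pre>
# Crude GUT estimate: tau ~ M_X^4 / (alpha_GUT^2 * m_p^5) in natural units.
hbar_GeV_s = 6.58e-25    # GeV*s, converts 1/GeV to seconds
year = 3.15e7            # s

M_X, m_p, alpha_GUT = 1e16, 0.938, 1 / 40   # GeV, GeV, dimensionless

tau_GeV = M_X**4 / (alpha_GUT**2 * m_p**5)  # lifetime in 1/GeV
tau_years = tau_GeV * hbar_GeV_s / year
print(f"tau_p ~ {tau_years:.0e} years")     # ~1e35-1e36, versus current
                                            # experimental limits of order 1e34
</pre>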
<br />
Finally, I should mention formal theoretical developments. The naturalness problem of the cosmological constant and of the Higgs mass may suggest some fundamental misunderstanding of quantum field theory on our part. Perhaps this should not be too surprising. In many ways we have reached an amazing proficiency in QFT when applied to certain precision observables or even to LHC processes. Yet at the same time QFT is often used and taught in the same way as magic in Hogwarts: mechanically, blindly following prescriptions from old dusty books, without a deeper understanding of the sense and meaning. Recent years have seen a brisk development of alternative approaches: a revival of the old S-matrix techniques, new amplitude calculation methods based on recursion relations, but also complete reformulations of the QFT basics demoting sacred cows like fields, Lagrangians, and gauge symmetry. Theory alone rarely leads to progress, but it may help to make more sense of the data we already have. Could a better understanding, or a complete reformulation, of QFT bring new answers to the old questions? I think that is not impossible. <br />
<br />
All in all, there are good reasons to worry, but also tons of new data in store and lots of fascinating questions to answer. How will the B-meson anomalies pan out? What shall we do after we hit the neutrino floor? Will the 21cm observations allow us to understand what dark matter is? Will China build a 100 TeV collider? Or maybe a radio telescope on the Moon instead? Are experimentalists still needed now that we have machine learning? How will physics change with the centre of gravity moving to Asia? I will tell you my take on these and other questions and highlight old and new ideas that could help us understand nature better. Let's see how far I'll get this time ;)Unknownnoreply@blogger.com34tag:blogger.com,1999:blog-2846514233477399562.post-26691086082940394602016-09-11T12:48:00.000+01:002018-03-15T12:20:49.415+01:00Weekend Plot: update on WIMPsThere's been a lot of discussion on this blog about the LHC not finding new physics. I should however do justice to other experiments that also don't find new physics, often in a spectacular way. One area where this is happening is direct detection of WIMP dark matter. This weekend plot summarizes the current limits on the spin-independent scattering cross-section of dark matter particles on nucleons:<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjtAxjF6wqlGd-ozzww6ca0Og35mbL5-EL9Aw5Vk5uuSuOllzKYkivR0Sk4ha6g8kcd69XBfZwYRhY2spj0KcJhm7X2uqH4gzUN6J0lCpm6MRS0CFMOc8KC-lHXWQn8Y0MHIbFN4-ntqVyn/s1600/DirectDetection.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjtAxjF6wqlGd-ozzww6ca0Og35mbL5-EL9Aw5Vk5uuSuOllzKYkivR0Sk4ha6g8kcd69XBfZwYRhY2spj0KcJhm7X2uqH4gzUN6J0lCpm6MRS0CFMOc8KC-lHXWQn8Y0MHIbFN4-ntqVyn/s640/DirectDetection.png" width="640" /></a></div>
For large WIMP masses, currently the most successful detection technology is to fill up a tank with a ton of liquid xenon and wait for a passing dark matter particle to knock into one of its nuclei. Recently, we have had updates from two such experiments: <a href="https://arxiv.org/abs/1608.07648">LUX</a> in the US, and <a href="http://arxiv.org/abs/1607.07400">PandaX</a> in China, whose limits now cut below zeptobarn cross sections (1 zb = 10^-9 pb = 10^-45 cm^2). These two experiments are currently going head-to-head, but PandaX, being larger, will ultimately overtake LUX. Soon, however, it'll have to face a fierce new competitor: the XENON1T experiment, and the plot will have to be updated next year. Fortunately, we won't need to be learning another prefix soon. Once yoctobarn sensitivity is achieved by the experiments, we will hit the neutrino floor: the irreducible background from solar and atmospheric neutrinos (gray area at the bottom of the plot). This will make detecting a dark matter signal much more challenging, and will certainly slow down progress for WIMP masses larger than ~5 GeV. For lower masses, the distance to the floor remains large. Xenon detectors run out of steam there, and another technology is needed, like the germanium detectors of <a href="http://arxiv.org/abs/1509.02448">CDMS</a> and CDEX, or the CaWO4 crystals of <a href="http://arxiv.org/abs/1509.01515">CRESST</a>. Also on this front important progress is expected soon.<br />
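For those who, like me, occasionally lose count of the zeros, here is the prefix ladder spelled out in a few lines of Python:<br />
<pre>
# Cross-section unit bookkeeping: 1 barn = 1e-24 cm^2, and every new
# prefix the experiments conquer buys three orders of magnitude.
barn = 1e-24  # cm^2
for name, power in [("pb", -12), ("fb", -15), ("ab", -18),
                    ("zb", -21), ("yb", -24)]:
    print(f"1 {name} = 1e{power} b = {10.0**power * barn:.0e} cm^2")
# 1 zb = 1e-45 cm^2: the current xenon limits
# 1 yb = 1e-48 cm^2: roughly where the neutrino floor kicks in
</pre>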
<br />
What does the theory say about when we will find dark matter? It is perfectly viable that the discovery is waiting for us just behind the corner in the remaining space above the neutrino floor, but currently there are no strong theoretical hints in favor of that possibility. Usually, dark matter experiments advertise that they're just beginning to explore the interesting parameter space predicted by theory models. This is not quite correct. If the WIMP were true to its name, that is to say if it interacted via the weak force (meaning, coupled to Z with order 1 strength), it would have an order 10 fb scattering cross section on neutrons. Unfortunately, that natural possibility was excluded in the previous century. Years of experimental progress have shown that the WIMPs, if they exist, must be interacting<i> super-weakly</i> with matter. For example, for a 100 GeV fermionic dark matter with the vector coupling <i>g</i> to the Z boson, the current limits imply g ≲ 10^-4. The coupling can be larger if the Higgs boson is the mediator of interactions between the dark and visible worlds, as the Higgs already couples very weakly to nucleons. This construction is, arguably, the most plausible one currently probed by direct detection experiments. For a scalar dark matter particle <i>X</i> with mass 0.1-1 TeV coupled to the Higgs via the interaction λ v <i>h </i>|<i>X</i>|^2 the experiments are currently probing the coupling λ in the 0.01-1 ballpark. In general, there's no theoretical lower limit on the dark matter coupling to nucleons. Nevertheless, the weak coupling implied by direct detection limits creates some tension for the thermal production paradigm, which requires a weak (that is order picobarn) annihilation cross section for dark matter particles. This tension needs to be resolved by more complicated model building, e.g. by arranging for resonant annihilation or for co-annihilation. Jesterhttp://www.blogger.com/profile/08947218566941608850noreply@blogger.com90tag:blogger.com,1999:blog-2846514233477399562.post-71842618312994670222016-09-01T01:19:00.000+01:002018-03-15T12:21:05.339+01:00Next stop: tthThis was a summer of brutally dashed hopes for a quick discovery of the many fundamental particles that we had been imagining. For the time being we need to focus on the ones that actually exist, such as the Higgs boson. In the Run-1 of the LHC, the Higgs boson's existence and identity were firmly established, while its mass and basic properties were measured. The signal was observed with large significance in 4 different decay channels (γγ, ZZ*, WW*, ττ), and two different production modes (gluon fusion, vector-boson fusion) have been isolated. Still, there remain many fine details to sort out. The realistic goal for the Run-2 is to pinpoint the following Higgs processes:<br />
<ul>
<li><b>(h→bb):</b> Decays to b-quarks.</li>
<li><b>(Vh):</b> Associated production with W or Z boson. </li>
<li><b>(tth):</b> Associated production with top quarks. </li>
</ul>
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjFyat1Mvbw9sLfpskSkpMO9BYDs_o3lv7obyoB70Njhr1T0lQu_MLaLOCYuu4TADPp4AXRF0uOCRVm3R7zg8j-ONYAKfHItDr8Ux1AA09zAzrfp1tXjoE1F_ZYM31LEWbuUGmA_wUxZZ_s/s1600/tth.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="172" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjFyat1Mvbw9sLfpskSkpMO9BYDs_o3lv7obyoB70Njhr1T0lQu_MLaLOCYuu4TADPp4AXRF0uOCRVm3R7zg8j-ONYAKfHItDr8Ux1AA09zAzrfp1tXjoE1F_ZYM31LEWbuUGmA_wUxZZ_s/s320/tth.png" width="320" /></a>It seems that the last objective may be achieved quicker than expected. The<i> tth</i> production process is very interesting theoretically, because its rate is proportional to the (square of the) Yukawa coupling between the Higgs boson and top quarks. Within the Standard Model, the value of this parameter is known to a good accuracy, as it is related to the mass of the top quark. But that relation can be disrupted in models beyond the Standard Model, with the two-Higgs-doublet model and composite/little Higgs models serving as prominent examples. Thus, measurements of the top Yukawa coupling will provide a crucial piece of information about new physics. <br />
<br />
In the Run-1, a not-so-small signal of <i>tth</i> production was observed by the ATLAS and CMS collaborations in several channels. Assuming that Higgs decays have the same branching fractions as in the Standard Model, the <i>tth</i> signal strength normalized to the Standard Model prediction was <a href="http://arxiv.org/abs/1606.02266">estimated</a> as <br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhpcn1c4f9Ch-LqTn2iPcDDF485Y7QIvmbCq5rtULBtHHxS5wqimei3629EYVl78Fmq9MZPEg8hykyXPf4GPWLHpmwrPZX-AzT4Q3JujiTSZc9Syrj4O_lE-KfLRdO3x8RtKxcb1jA2DD-Z/s1600/latex-image-2.jpeg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="40" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhpcn1c4f9Ch-LqTn2iPcDDF485Y7QIvmbCq5rtULBtHHxS5wqimei3629EYVl78Fmq9MZPEg8hykyXPf4GPWLHpmwrPZX-AzT4Q3JujiTSZc9Syrj4O_lE-KfLRdO3x8RtKxcb1jA2DD-Z/s320/latex-image-2.jpeg" width="320" /></a></div>
<br />
At face value, strong evidence for <i>tth</i> production was obtained in the Run-1! This fact was not advertised by the collaborations because the measurement is not clean due to a large number of top quarks produced by other processes at the LHC. The <i>tth</i> signal is thus a small blip on top of a huge background, and it's not excluded that some unaccounted-for systematic errors are skewing the measurements. The collaborations thus preferred to play it safe and wait for more data to be collected.<br />
<br />
In the Run-2, with 13 TeV collisions, the <i>tth</i> production cross section is 4 times larger than in the Run-1, so the new data are coming at a fast pace. Both ATLAS and CMS presented their first Higgs results in early August, and the <i>tth</i> signal is only getting stronger. ATLAS showed their measurements in the γγ, WW/ττ, and bb final states of Higgs decay, as well as their combination:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh78pVzpK8I8NNl2d7YA2cbzYCV6AIyU2zgXfck9Ipz5Tz9W4Au0Bc_lyVRQDcZpads0rSyuyCXZt61G9TI5IrVcZTN_RfWHBZCZgI2t369lvePmW7xeUAHEWGIK2gllDvA05JYCuO_3o5M/s1600/ATLAStthcomb.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="462" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh78pVzpK8I8NNl2d7YA2cbzYCV6AIyU2zgXfck9Ipz5Tz9W4Au0Bc_lyVRQDcZpads0rSyuyCXZt61G9TI5IrVcZTN_RfWHBZCZgI2t369lvePmW7xeUAHEWGIK2gllDvA05JYCuO_3o5M/s640/ATLAStthcomb.png" width="640" /></a></div>
Most channels display a signal-like excess, which is reflected by the Run-2 combination being 2.5 sigma away from zero. A similar picture is emerging in CMS, with 2-sigma signals in the γγ and WW/ττ channels. Naively combining all Run-1 and Run-2 results one then finds<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgRiuMNl9utd9hMaCMJyap4kWQKo3_rPB_bmChBR2F-_uF-BSNUtdtJTq2a5P7S2d8JRer1T-O0WZ5uco2Ahyi2KxB79ervaUHBGBYgrWXBI5WDPTfjAnXgmEn5gyFhBmHCojyPL1zgVIrx/s1600/latex-image-3.jpeg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="35" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgRiuMNl9utd9hMaCMJyap4kWQKo3_rPB_bmChBR2F-_uF-BSNUtdtJTq2a5P7S2d8JRer1T-O0WZ5uco2Ahyi2KxB79ervaUHBGBYgrWXBI5WDPTfjAnXgmEn5gyFhBmHCojyPL1zgVIrx/s320/latex-image-3.jpeg" width="320" /></a></div>
At face value, this is a discovery! Of course, this number should be treated with some caution because, due to large systematic errors, a naive Gaussian combination may not represent the true likelihood very well. Nevertheless, it indicates that, if all goes well, the discovery of the <i>tth</i> production mode should be officially announced in the near future, maybe even this year.<br />
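For the record, the naive combination here is just an inverse-variance weighted average. Below is a minimal Python sketch; the input numbers are illustrative stand-ins of roughly the size discussed above, not the official measurements, which come with asymmetric and partially correlated errors:<br />
<pre>
# Naive Gaussian (inverse-variance) combination of signal strengths.
def combine(measurements):
    """measurements: list of (mu, sigma) pairs; returns combined (mu, sigma)."""
    weights = [1.0 / s**2 for _, s in measurements]
    mu = sum(w * m for (m, _), w in zip(measurements, weights)) / sum(weights)
    return mu, sum(weights) ** -0.5

mu, sig = combine([(2.3, 0.65),   # Run-1-like combination (placeholder)
                   (2.5, 1.0),    # ATLAS Run-2-like input (placeholder)
                   (2.0, 1.0)])   # CMS Run-2-like input (placeholder)
print(f"mu(tth) = {mu:.1f} +- {sig:.1f}  ({mu/sig:.1f} sigma from zero)")
</pre>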
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhx70Hl7e4GsZMRnEXUUKmr0bbjrO0ZxapoKf_82GJ4lkBl2xgz8lwtV2w7obvhmwX4N7fvOuef_L1IFLj3sseq_ywlj01ausPMtSTzYojRrSlQkqIjsga9rbUihv0kJZ2j1zln0NVQ8XEf/s1600/ggh.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em; text-align: center;"><img border="0" height="86" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhx70Hl7e4GsZMRnEXUUKmr0bbjrO0ZxapoKf_82GJ4lkBl2xgz8lwtV2w7obvhmwX4N7fvOuef_L1IFLj3sseq_ywlj01ausPMtSTzYojRrSlQkqIjsga9rbUihv0kJZ2j1zln0NVQ8XEf/s200/ggh.png" width="200" /></a>Should we get excited that the measured <i>tth</i> rate is significantly larger than Standard Model one? Assuming that the current central value remains, it would mean that the top Yukawa coupling is 40% larger than that predicted by the Standard Model. This is not impossible, but very unlikely in practice. The reason is that the top Yukawa coupling also controls the gluon fusion - the main Higgs production channel at the LHC - whose rate is measured to be in perfect agreement with the Standard Model. Therefore, a realistic model that explains the large<i> tth</i> rate would also have to provide negative contributions to the gluon fusion amplitude, so as to cancel the effect of the large top Yukawa coupling. It is possible to engineer such a cancellation in concrete models, but I'm not aware of any construction where this conspiracy arises in a natural way. Most likely, the currently observed excess is a statistical fluctuation (possibly in combination with underestimated theoretical and/or experimental errors), and the central value will drift toward μ=1 as more data is collected. </div>
Jesterhttp://www.blogger.com/profile/08947218566941608850noreply@blogger.com14tag:blogger.com,1999:blog-2846514233477399562.post-79387693103398008892016-07-29T01:02:00.000+01:002016-09-03T11:44:48.660+01:00After the hangover <a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiNkAgyGObBxUnKuq_Qv4nE1am6PE8ieGwhO7aKUOG87A-AAf-leDU4YB7ms5OclyfDUVo8ZFZUwNykMIxE5L_A1TlAqbkQFf3YBH3YfrqEkrAnq3hyXEWd42bNWVeFPvjkbpBvsjJotMxb/s1600/ATL13y1516gaga_S0.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="311" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiNkAgyGObBxUnKuq_Qv4nE1am6PE8ieGwhO7aKUOG87A-AAf-leDU4YB7ms5OclyfDUVo8ZFZUwNykMIxE5L_A1TlAqbkQFf3YBH3YfrqEkrAnq3hyXEWd42bNWVeFPvjkbpBvsjJotMxb/s320/ATL13y1516gaga_S0.png" width="320" /></a>The loss of the 750 GeV diphoton resonance is a big blow to the particle physics community. We are currently going through the 5 stages of grief, everyone at their own pace, as can be seen e.g. in this <a href="http://resonaances.blogspot.ch/2016/06/game-of-thrones-750-gev-edition.html#comments">comments section</a>. Nevertheless, it may already be a good moment to revisit the story one last time, so as to understand what went wrong.<br />
<br />
In recent years, physics beyond the Standard Model has seen 2 other flops of comparable impact: the faster-than-light neutrinos in OPERA, and the CMB tensor fluctuations in BICEP. Like the diphoton signal, both of the above triggered a binge of theoretical explanations, followed by a massive hangover. There was one big difference, however: the OPERA and BICEP signals were due to embarrassing errors on the experiments' side. This doesn't seem to be the case for the diphoton bump at the LHC. Some may wonder whether the Standard Model background may have been slightly underestimated, or whether one experiment may have been biased by the result of the other... But, most likely, the 750 GeV bump was just due to a random fluctuation of the background at this particular energy. Regrettably, the resulting mess cannot be blamed on experimentalists, who were in fact downplaying the anomaly in their official communications. This time it's the theorists who have some explaining to do.<br />
<br />
Why did theorists write 500 papers about a statistical fluctuation? One reason is that it didn't look like one at first sight. Back in December 2015, the local significance of the diphoton bump in ATLAS run-2 data was 3.9 sigma, which means the probability of such a fluctuation was 1 in 10000. Combining available run-1 and run-2 diphoton data in ATLAS and CMS, the <i>local </i>significance was increased to 4.4 sigma. All in all, it was a very unusual excess, a 1-in-100000 occurrence! Of course, this number should be interpreted with care. The point is that the LHC experiments perform a gazillion different measurements, thus they are bound to observe seemingly unlikely outcomes in a small fraction of them. This can be partly taken into account by calculating the <i>global</i> significance, which is the probability of finding a background fluctuation of the observed size anywhere in the diphoton spectrum. The global significance of the 750 GeV bump quoted by ATLAS was only about two sigma, a fact strongly emphasized by the collaboration. However, that number can be misleading too. One problem with the global significance is that, unlike the local one, it cannot be easily combined in the presence of separate measurements of the same observable. For the diphoton final state we have ATLAS and CMS measurements in run-1 and run-2, thus 4 independent datasets, and their robust concordance was crucial in creating the excitement. Note also that what is really relevant here is the probability of a fluctuation of a given size in <i>any </i>of the LHC measurements, and that is not captured by the global significance. For these reasons, I find it more transparent to work with the local significance, remembering that it should <i>not </i>be interpreted as the probability that the Standard Model is incorrect. By these standards, a 4.4 sigma fluctuation in a combined ATLAS and CMS dataset is still a very significant effect which deserves special attention. What we learned the hard way is that such large fluctuations do happen at the LHC... This lesson will certainly be taken into account next time we encounter a significant anomaly. <br />
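If you want to play with these numbers, here is a small Python sketch using scipy. The look-elsewhere correction is the crude 1-(1-p)^N formula, and the effective number of independent mass bins N=300 is purely my guess:<br />
<pre>
from scipy.stats import norm

def p_local(sig):
    """Two-sided tail probability for a local significance of sig sigma."""
    return 2 * norm.sf(sig)

def global_sig(sig_local, n_trials):
    """Crude look-elsewhere correction: p_glob = 1 - (1 - p_loc)^N."""
    p_glob = 1 - (1 - p_local(sig_local)) ** n_trials
    return norm.isf(p_glob / 2)

print(f"{p_local(3.9):.0e}")           # ~1e-4: the 1-in-10000 quoted above
print(f"{p_local(4.4):.0e}")           # ~1e-5: the 1-in-100000 occurrence
print(f"{global_sig(3.9, 300):.1f}")   # ~2.2: in the ballpark of the ~2 sigma
                                       # global significance quoted by ATLAS
</pre>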
<br />
Another reason why the 750 GeV bump was exciting is that the measurement is rather straightforward. Indeed, at the LHC we often see anomalies in complicated final states or poorly controlled differential distributions, and we treat those with much skepticism. But a resonance in the diphoton spectrum is almost the simplest and cleanest observable that one can imagine (only a dilepton or 4-lepton resonance would be cleaner). We already successfully discovered one particle this way - that's how the Higgs boson first showed up in 2011. Thus, we have good reasons to believe that the collaborations control this measurement very well.<br />
<br />
Finally, the diphoton bump was so attractive because theoretical explanations were plausible. It was trivial to write down a model fitting the data, there was no need to stretch or fine-tune the parameters, and it was quite natural that the particle first showed up as a diphoton resonance and not in other final states. This is in stark contrast to other recent anomalies which typically require a great deal of gymnastics to fit into a consistent picture. The only thing to give you pause was the tension with the LHC run-1 diphoton data, but even that became mild after the Moriond update this year.<br />
<br />
So we got a huge signal of a new particle in a clean channel with plausible theoretical models to explain it... that was really bad luck. My conclusion may not be shared by everyone but I don't think that the theory community committed major missteps in this case. Given that for 30 years we have been looking for a clue about the fundamental theory beyond the Standard Model, our reaction was not disproportionate once a seemingly reliable one had arrived. Excitement is an inherent part of physics research. And so is disappointment, apparently.<br />
<br />
There remains a question whether we really needed 500 papers... Well, of course not: many of them merely fill a much-needed gap. Yet many are an interesting read, and I personally learned a lot of exciting physics from them. Actually, I suspect that the fraction of useless papers among the 500 is lower than for regular daily topics. On the more sociological side, these papers exacerbate the problem with our citation culture (mass-grave references), which undermines the citation count as a means to evaluate the research impact. But that is a wider issue which I don't know how to address at the moment. <br />
<br />
Time to move on. The <a href="http://indico.cern.ch/event/432527/overview">ICHEP</a> conference is coming next week, with loads of brand new results based on up to 16 inverse femtobarns of 13 TeV LHC data. Although the rumor is that there is no new exciting anomaly at this point, it will be interesting to see how much room is left for new physics. The hope lingers on, at least until the end of this year.<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">In the comments section y</span><span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">ou're welcome to lash out on the entire BSM community - we made a wrong call so we deserve it. Please, however, avoid personal attacks (unless on me). Alternatively, you can also give us a hug :) </span>Jesterhttp://www.blogger.com/profile/08947218566941608850noreply@blogger.com145tag:blogger.com,1999:blog-2846514233477399562.post-75135116543083471682016-06-18T12:10:00.001+01:002016-09-03T11:44:15.124+01:00Game of Thrones: 750 GeV edition The 750 GeV diphoton resonance has made a big impact on theoretical particle physics. The number of papers on the topic is already legendary, and they keep coming at the rate of order 10 per week. Given that the <a href="https://arxiv.org/abs/1603.01204">Backović model</a> is falsified, there's no longer a theoretical upper limit. Does this mean we are not dealing with the classical ambulance chasing scenario? The answer may be known in the next days.<br />
<br />
So who's leading this race? What kind of question is that, you may shout, of course it's Strumia! And you would be wrong, independently of the metric. For this contest, I will consider two different metrics: the <i>King Beyond the Wall</i> that counts the number of papers on the topic, and the <i>Iron Throne</i> that counts how many times these papers have been cited.<br />
<br />
In the first category, the contest is much more fierce than one might expect: it takes 8 papers to be the leader, and 7 papers may not be enough to even get on the podium! Among the 3 authors with 7 papers the final classification is decided by <strike>trial by combat</strike> the citation count. The result is (<i>drums</i>):<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiTTxI87m6NCI4N5I8l6QfvID3sRzKhweX8Y7vn6kBYdqWV16TkQ5GBrn-qKV8RnPr0mFjl8i9Kk1NdJTfSTnhwZd5MyyTe7AY7eFFkSxJa7N6dvgIotqlXWj_OoDBv8E-_xsKRM5JSRPHM/s1600/kingbeyondthewallat750_june.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="300" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiTTxI87m6NCI4N5I8l6QfvID3sRzKhweX8Y7vn6kBYdqWV16TkQ5GBrn-qKV8RnPr0mFjl8i9Kk1NdJTfSTnhwZd5MyyTe7AY7eFFkSxJa7N6dvgIotqlXWj_OoDBv8E-_xsKRM5JSRPHM/s400/kingbeyondthewallat750_june.png" width="400" /></a></div>
<br />
Citations, tja... The social dynamics of our community encourages referencing all previous work on the topic, rather than just the relevant ones, which in this particular case triggered a period of inflation. One day soon citation numbers will mean as much as authorship in experimental particle physics. But for now the size of the h-index is still an important measure of virility for theorists. If the citation count rather than the number of papers is the main criterion, the iron throne is <a href="http://inspirehep.net/search?ln=en&ln=en&p=a+sanz%2Cv+and+%28t+750+or++diphoton+or+digamma+or+di-photon%29+and+date+after+2014&of=hb&action_search=Search&sf=earliestdate&so=d&rm=&rg=25&sc=0">taken</a> by a Targaryen contender (<i>trumpets</i>):<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgwUx5kIuJTYY7TLFLkFUF2pftFINWZ60-2LlwV9MyEP5y2XwtjK9pdU5Oq92y4QAGxvfVFgIBStB1t39QBsRZRakqj3mobzpOG0Nvswaz1Xfx-YjXDIwyIftVdwdDAXvV4KnIEmptctquO/s1600/gameofthrones750.002.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="300" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgwUx5kIuJTYY7TLFLkFUF2pftFINWZ60-2LlwV9MyEP5y2XwtjK9pdU5Oq92y4QAGxvfVFgIBStB1t39QBsRZRakqj3mobzpOG0Nvswaz1Xfx-YjXDIwyIftVdwdDAXvV4KnIEmptctquO/s400/gameofthrones750.002.jpg" width="400" /></a></div>
<br />
This explains why the resonance is usually denoted by the letter S.<br />
<br />
<i>Update 09.08.2016.</i> Now that the 750 GeV excess is officially dead, one can give the final classification. The race for the iron throne was tight till the end, but there could only be one winner:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEheDM2PLeGtyqwwBoe0j5NzC_03JfukhXhyANm04cF0Td6raadNCWn0xuLj9uf0rywM3v9FnhyhHI-PNod3RIvR29tlqg3BLFH1t4G8aNLM_-H8e35u7PUpIjF1WYeqwTIu1sfFVZ5dj_eK/s1600/ironthroneat750_2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="300" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEheDM2PLeGtyqwwBoe0j5NzC_03JfukhXhyANm04cF0Td6raadNCWn0xuLj9uf0rywM3v9FnhyhHI-PNod3RIvR29tlqg3BLFH1t4G8aNLM_-H8e35u7PUpIjF1WYeqwTIu1sfFVZ5dj_eK/s400/ironthroneat750_2.png" width="400" /></a></div>
As you can see, in this race long-term strategy and persistence proved to be more important than pulling off a few early victories. In the other category there have also been changes in the final stretch: the winner added 3 papers in the period between the unofficial and official announcements of the demise of the 750 GeV resonance. The final standings are: <br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjvAHSgncGplPsIVPb8f7_C3Kpd6NL1OjlRNiCZtCk5XBsSBJFYDClolgO5aReIjZgE1Cw0ICjbYVIDUJpksGeenRHPvaY1Grq5RI7bWgOR8ybloAc0bjgSiDB-O3YuxcrS90U8LA-Oyv8-/s1600/kingbeyondthewallat750.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="300" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjvAHSgncGplPsIVPb8f7_C3Kpd6NL1OjlRNiCZtCk5XBsSBJFYDClolgO5aReIjZgE1Cw0ICjbYVIDUJpksGeenRHPvaY1Grq5RI7bWgOR8ybloAc0bjgSiDB-O3YuxcrS90U8LA-Oyv8-/s400/kingbeyondthewallat750.png" width="400" /></a></div>
<br />
<br />
Congratulations to all the winners. To all the rest: wishing you more luck and persistence in the next edition, provided it takes place.<br />
<br />
<br />Jesterhttp://www.blogger.com/profile/08947218566941608850noreply@blogger.com141tag:blogger.com,1999:blog-2846514233477399562.post-73624960292748099912016-06-10T18:37:00.001+01:002016-09-03T11:44:27.879+01:00Black hole dark matterThe idea that dark matter is made of primordial black holes is very old but has always been in the backwater of particle physics. The WIMP or asymmetric dark matter paradigms are preferred for several reasons such as calculability, observational opportunities, and a more direct connection to cherished theories beyond the Standard Model. But in the recent months there has been more interest, triggered in part by the LIGO observations of black hole binary mergers. In the first observed event, the mass of each of the black holes was estimated at around 30 solar masses. While such a system may well be of boring astrophysical origin, it is somewhat unexpected because typical black holes we come across in everyday life are either a bit smaller (around one solar mass) or much larger (supermassive black hole in the galactic center). On the other hand, if the dark matter halo were made of black holes, scattering processes would sometimes create short-lived binary systems. Assuming a significant fraction of dark matter in the universe is made of primordial black holes, <a href="https://arxiv.org/abs/1603.00464">this paper</a> estimated that the rate of merger processes is in the right ballpark to explain the LIGO events.<br />
<br />
Primordial black holes can form from large density fluctuations in the early universe. On the largest observable scales the universe is incredibly homogeneous, as witnessed by the uniform temperature of the Cosmic Microwave Background over the entire sky. However on smaller scales the primordial inhomogeneities could be much larger without contradicting observations. From the fundamental point of view, large density fluctuations may be generated by several distinct mechanisms, for example during the final stages of inflation in the waterfall phase of the hybrid inflation scenario. While it is rather generic that this or a similar process may seed black hole formation in the radiation-dominated era, severe fine-tuning is required to produce the right amount of black holes and ensure that the resulting universe resembles the one we know.<br />
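For orientation, a black hole forming in the radiation era inherits roughly the horizon mass at that moment, which fixes when LIGO-sized black holes would have to form. A back-of-the-envelope Python version, with all O(1) factors and the collapse efficiency ignored:<br />
<pre>
# A PBH forming in the radiation era grabs roughly the mass inside the
# horizon at that moment, M ~ c^3 * t / G (up to O(1) factors).
G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30   # SI units

def horizon_mass(t):
    """Horizon mass, in solar masses, at cosmic time t (seconds)."""
    return c**3 * t / (G * M_sun)

for t in (1e-6, 1e-4, 1e-2):
    print(f"t = {t:.0e} s  ->  M ~ {horizon_mass(t):.3g} M_sun")
# t ~ 1e-4 s gives a few tens of solar masses, the LIGO-friendly ballpark
</pre>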
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhGpbw6TKAQrdfC-zA_WvbI6nA2fhIzxi7HbH2th33IRjBTgI0jLvfMe5ul1vQomp8jaunX9J3xfOWbk1KW_0sjUyl1P7Nw9KXjqCEXllBO6MuHT7-fEdcOsWgkN5bAXeOlUrBTnG7D-hH7/s1600/blackholedarkmatterconstraints.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="297" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhGpbw6TKAQrdfC-zA_WvbI6nA2fhIzxi7HbH2th33IRjBTgI0jLvfMe5ul1vQomp8jaunX9J3xfOWbk1KW_0sjUyl1P7Nw9KXjqCEXllBO6MuHT7-fEdcOsWgkN5bAXeOlUrBTnG7D-hH7/s400/blackholedarkmatterconstraints.png" width="400" /></a>All in all, it's fair to say that the scenario where all or a significant fraction of dark matter is made of primordial black holes is not completely absurd. Moreover, one typically expects the masses to span a fairly narrow range. Could it be that the LIGO events is the first indirect detection of dark matter made of O(10)-solar-mass black holes? One problem with this scenario is that it is excluded, as can be seen in the plot. Black holes sloshing through the early dense universe accrete the surrounding matter and produce X-rays which could ionize atoms and disrupt the Cosmic Microwave Background. In the 10-100 solar mass range relevant for LIGO this effect currently gives the strongest constraint on primordial black holes: according to <a href="http://arxiv.org/abs/0709.0524">this paper</a> they are allowed to constitute not more than 0.01% of the total dark matter abundance. In astrophysics, however, not only signals but also constraints should be taken with a grain of salt. In this particular case, the word in town is that the derivation contains a numerical error and that the corrected limit is 2 orders of magnitude less severe than what's shown in the plot. Moreover, this limit strongly depends on the model of accretion, and more favorable assumptions may buy another order of magnitude or two. All in all, the possibility of dark matter made of primordial black hole in the 10-100 solar mass range should not be completely discarded yet. Another possibility is that black holes make only a small fraction of dark matter, but the merger rate is faster, closer to the estimate of <a href="https://arxiv.org/abs/1603.08338">this paper</a>.<br />
<br />
Assuming this is the true scenario, how will we know? Direct detection of black holes is discouraged, while the usual cosmic ray signals are absent. Instead, in most of the mass range, the best probes of primordial black holes are various lensing observations. For LIGO black holes, progress may be made via observations of fast radio bursts. These are strong radio signals of (probably) extragalactic origin and millisecond duration. The radio signal passing near an O(10)-solar-mass black hole could be strongly lensed, <a href="https://arxiv.org/abs/1605.00008">leading</a> to repeated signals detected on Earth with an observable time delay. In the near future we should observe hundreds of such repeated bursts, or obtain new strong constraints on primordial black holes in the interesting mass ballpark. Gravitational wave astronomy may offer another way. When more statistics is accumulated, we will be able to say something about the spatial distribution of the merger events. Primordial black holes should be distributed like dark matter halos, whereas astrophysical black holes should be correlated with luminous galaxies. Also, the typical eccentricity of the astrophysical black hole binaries should be different. With some luck, the primordial black hole dark matter scenario may be vindicated or robustly excluded in the near future.<br />
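The characteristic time delay is set by the Schwarzschild scale 4GM/c^3, up to an O(1) geometric factor and the (1+z) of the lens, both of which I drop in this Python sketch:<br />
<pre>
# Point-mass lensing: the time-delay scale between the two images is
# Delta_t ~ 4*G*M/c^3 (geometric factor and lens redshift ignored).
G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30   # SI units

def delay_ms(mass_in_msun):
    """Schwarzschild time-delay scale in milliseconds."""
    return 4 * G * mass_in_msun * M_sun / c**3 * 1e3

for M in (1, 10, 30):
    print(f"M = {M:>2} M_sun  ->  ~{delay_ms(M):.2f} ms")
# ~0.02-0.6 ms: of order the millisecond burst duration for the
# interesting masses, hence a lensed burst can show a resolvable echo
</pre>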
<br />
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">See also <a href="https://indico.in2p3.fr/event/12773/session/5/contribution/7/material/slides/0.pdf">these</a> <a href="https://indico.in2p3.fr/event/12773/session/5/contribution/11/material/slides/0.pdf">slides</a> for more details. </span>Jesterhttp://www.blogger.com/profile/08947218566941608850noreply@blogger.com34tag:blogger.com,1999:blog-2846514233477399562.post-31157959180696168412016-05-27T16:24:00.000+01:002016-07-06T10:00:10.622+01:00CMS: Higgs to mu tau is going awayOne interesting anomaly in the LHC run-1 was a hint of Higgs boson decays to a muon and a tau lepton. Such process is forbidden in the Standard Model by the conservation of muon and tau lepton numbers. Neutrino masses violate individual lepton numbers, but their effect is far too small to affect the Higgs decays in practice. On the other hand, new particles do not have to respect global symmetries of the Standard Model, and they could induce lepton flavor violating Higgs decays at an observable level. Surprisingly, CMS <a href="http://arxiv.org/abs/1502.07400">found</a> a small excess in the Higgs to tau mu search in their 8 TeV data, with the measured branching fraction Br(h→τμ)=(0.84±0.37)%. The analogous <a href="https://arxiv.org/abs/1604.07730">measurement</a> in ATLAS is 1 sigma above the background-only hypothesis, Br(h→τμ)=(0.53±0.51)%. Together this merely corresponds to a 2.5 sigma excess, so it's not too exciting in itself. However, taken together with the B-meson anomalies in LHCb, it has raised hopes for lepton flavor violating new physics just around the corner. For this reason, the CMS excess inspired a few dozen of theory papers, with Z' bosons, leptoquarks, and additional Higgs doublets pointed out as possible culprits. <br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgDPLqhl4sFGPUmc0XP9wsCP0LhZYT6Rgz1fyeeQRvPSjQxHuGER32PYxgeGo5zgJdq2A2uRF-ev3dHKvV1frf0jDPizXWMsRmwhEirl-RkpkqBrH3QiLXxRI0yeAaA61OmLj5oM9EJKLZd/s1600/higgstomutaucombo.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="243" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgDPLqhl4sFGPUmc0XP9wsCP0LhZYT6Rgz1fyeeQRvPSjQxHuGER32PYxgeGo5zgJdq2A2uRF-ev3dHKvV1frf0jDPizXWMsRmwhEirl-RkpkqBrH3QiLXxRI0yeAaA61OmLj5oM9EJKLZd/s400/higgstomutaucombo.png" width="400" /></a>Alas, the wind is changing. CMS made a search for h→τμ in their small stash of 13 TeV data collected in 2015. This time they were hit by a <i>negative</i> background fluctuation, and they <a href="https://indico.cern.ch/event/527663/contributions/2168318/attachments/1274703/1893958/Cepeda.pdf">found</a> Br(h→τμ)=(-0.76±0.81)%. The accuracy of the new measurement is worse than that in run-1, but nevertheless it lowers the combined significance of the excess below 2 sigma. Statistically speaking, the situation hasn't changed much, but psychologically this is very discouraging. A true signal is expected to grow when more data is added, and when it's the other way around it's usually a sign that we are dealing with a statistical fluctuation... <br />
<br />
So, if you have a cool model explaining the h→τμ excess, be sure to post it on arXiv before more run-2 data is analyzed ;)Jesterhttp://www.blogger.com/profile/08947218566941608850noreply@blogger.com19tag:blogger.com,1999:blog-2846514233477399562.post-20838542517084293252016-05-14T12:30:00.000+01:002016-09-03T13:20:33.480+01:00Plot for Weekend: new limits on neutrino masses This weekend's plot shows the new <a href="http://arxiv.org/abs/1605.02889">limits</a> on neutrino masses from the KamLAND-Zen experiment:<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgBg_aZTLb4QYBT7eAiJE68aJSXxn-B6W_ytM29TgC6OMDB3wqn6UK3k14ie7mywbFYZQBSXXG9R3ZF0lclTn9TGdrEJCRGOQ7XPyRZe6cLxxNPo86lCaNP6U6aOebvvWAuZ4mM-tAB31Ws/s1600/KamLANDzen.jpg" imageanchor="1"><img border="0" height="370" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgBg_aZTLb4QYBT7eAiJE68aJSXxn-B6W_ytM29TgC6OMDB3wqn6UK3k14ie7mywbFYZQBSXXG9R3ZF0lclTn9TGdrEJCRGOQ7XPyRZe6cLxxNPo86lCaNP6U6aOebvvWAuZ4mM-tAB31Ws/s400/KamLANDzen.jpg" width="400" /></a><br />
KamLAND-Zen is a group of Buddhist monks studying a balloon filled with the xenon isotope Xe136. That isotope has a very long lifetime, of order 10^21 years, and undergoes the lepton-number-conserving double beta decay Xe136 → Ba136 + 2e- + 2νbar. What the monks hope to observe is the lepton-number-violating neutrinoless double beta decay Xe136 → Ba136 + 2e-, which would show up as a peak in the summed energy spectrum of the electron pairs near 2.5 MeV. No such signal has been observed, which sets the limit on the half-life for this decay at T>1.1*10^26 years.<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEha6H64QkMYOw8-qFbFWtu1F2EelW7sOZMlBozzbNMGUBBJDicSqMlgNTYbhMdQYZ27UEubJaMiZ389B_vIJV5NkM1cmZ318_UiLL4KH7u5qOBBCpE06ARsg_1N4McfQ8DaF3_ow5vEFH3m/s1600/neutrinohierarchy.jpeg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEha6H64QkMYOw8-qFbFWtu1F2EelW7sOZMlBozzbNMGUBBJDicSqMlgNTYbhMdQYZ27UEubJaMiZ389B_vIJV5NkM1cmZ318_UiLL4KH7u5qOBBCpE06ARsg_1N4McfQ8DaF3_ow5vEFH3m/s400/neutrinohierarchy.jpeg" /></a>The neutrinoless decay is predicted to occur if neutrino masses are of Majorana type, and the rate can be characterized by the effective mass Majorana mββ (y-axis in the plot). That parameter is a function of the masses and mixing angles of the neutrinos. In particular it depends on the mass of the lightest neutrino (x-axis in the plot) which is currently unknown. Neutrino oscillations experiments have precisely measured the mass^2 differences between neutrinos, which are roughly (0.05 eV)^2 and (0.01 eV)^2. But oscillations are not sensitive to the absolute mass scale; in particular, the lightest neutrino may well be massless for all we know. If the heaviest neutrino has a small electron flavor component, then we expect that the mββ parameter is below 0.01 eV. This so-called <i>normal hierarchy</i> case is shown as the red region in the plot, and is clearly out of experimental reach at the moment. On the other hand, in the <i>inverted hierarchy</i> scenario (green region in the plot), it is the two heaviest neutrinos that have a significant electron component. In this case, the effective Majorana mass mββ is around 0.05 eV. Finally, there is also the degenerate scenario (funnel region in the plot) where all 3 neutrinos have very similar masses with small splittings, however this scenario is now strongly disfavored by cosmological limits on the sum of the neutrino masses (e.g. the Planck limit Σmν < 0.16 eV).<br />
<br />
As can be seen in the plot, the results from KamLAND-Zen, when translated into limits on the effective Majorana mass, almost touch the inverted hierarchy region. The strength of this limit depends on some poorly known nuclear matrix elements (hence the width of the blue band). But even in the least favorable scenario, future more sensitive experiments should be able to probe that region. Thus, there is hope that within the next few years we may prove the Majorana nature of neutrinos, or at least disfavor the inverted hierarchy scenario. <br />
<div>
<br /></div>
Jesterhttp://www.blogger.com/profile/08947218566941608850noreply@blogger.com36tag:blogger.com,1999:blog-2846514233477399562.post-87577818727040508332016-05-09T23:37:00.002+01:002016-06-22T16:27:59.083+01:00Off we goThe LHC has been back in action since last weekend, again <a href="https://op-webtools.web.cern.ch/vistar/vistars.php?usr=LHC3">colliding</a> protons at 13 TeV. The weasels' conspiracy was foiled, and the perpetrators were exemplarily electrocuted. PhD students have been deployed around the LHC perimeter to counter any further sabotage attempts (stoats are <a href="https://en.wikipedia.org/wiki/The_Wind_in_the_Willows">known</a> to have been in league with weasels in the past). The period that begins now may prove to be the most exciting time for particle physics in this century. Or the most disappointing.<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEieAYZwe5Z-G3sAFMiPMGfC9cgLInuF5YLg8tfm3bOSpW6kAnSxfgnMMxsiFXn2tn0lS1uDQRqxoVrEpXuOuMOVqfiu1c5PGdvcl5EeHbyXsw1aRfOG0wbXaC85kzB_jrEznIAAqXzcG0Tz/s1600/int_lumi_per_day_cumulative_pp_2016.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEieAYZwe5Z-G3sAFMiPMGfC9cgLInuF5YLg8tfm3bOSpW6kAnSxfgnMMxsiFXn2tn0lS1uDQRqxoVrEpXuOuMOVqfiu1c5PGdvcl5EeHbyXsw1aRfOG0wbXaC85kzB_jrEznIAAqXzcG0Tz/s320/int_lumi_per_day_cumulative_pp_2016.png" width="320" /></a>The beam intensity is still a factor of 10 below the nominal one, so the harvest of last weekend is meager 40 inverse picobarns. But the number of proton bunches in the beam is quickly increasing, and once it reaches O(2000), the data will stream at a rate of a femtobarn per week or more. For the nearest future, the plan is to have a few inverse femtobarns on tape by mid-July, which would roughly double the current 13 TeV dataset. The first analyses of this chunk of data should be presented around the time of the ICHEP conference in early August. At that point we will know whether the 750 GeV particle is real. Celebrations will begin if the significance of the diphoton peak increases after adding the new data, even if the statistics is not enough to officially announce a discovery. In the best of all worlds, we may also get a hint of a matching 750 GeV peak in another decay channel (ZZ, Z-photon, dilepton, t-tbar,...) which would help focus our model building. On the other hand, if the significance of the diphoton peak drops in August, there will be a massive hangover...<br />
<br />
By the end of October, when the 2016 proton collisions are scheduled to end, the LHC hopes to collect some 20 inverse femtobarns of data. This should already give us a rough feeling for the new physics within reach of the LHC. If a hint of another resonance is seen at that point, one will surely be able to confirm or refute it with the data collected in the following years. If nothing is seen... then you should start telling yourself that condensed matter physics is also sort of fundamental, or that systematic uncertainties in astrophysics are not so bad after all... In any scenario, by December, when the first analyses of the full 2016 dataset are released, we will know infinitely more than we do today. <br />
<br />
So fasten your seat belts and get ready for a (hopefully) bumpy ride. Serious rumors should start showing up on blogs and Twitter starting from July.Jesterhttp://www.blogger.com/profile/08947218566941608850noreply@blogger.com32tag:blogger.com,1999:blog-2846514233477399562.post-66930401402652897362016-04-01T10:11:00.000+01:002016-05-19T16:42:01.832+01:00April Fools' 16: Was LIGO a hack? <a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg2gbgtA4EsBBzR_B73olvZTTXwLLVtpIf03xwbShlwa9JKWA-_k_2iINPzURNlje9HfE9D4nwoR1OiwfwUn6CnuJOyRGoC7Mdnn1Ys0YhL-Eaue7xsiAgzmGbLim7Sb-YcCBX-tW0v5-p5/s1600/ligohack.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg2gbgtA4EsBBzR_B73olvZTTXwLLVtpIf03xwbShlwa9JKWA-_k_2iINPzURNlje9HfE9D4nwoR1OiwfwUn6CnuJOyRGoC7Mdnn1Ys0YhL-Eaue7xsiAgzmGbLim7Sb-YcCBX-tW0v5-p5/s320/ligohack.png" width="320" /></a><br />
<i>This post is an April Fools' joke. LIGO's gravitational waves are for real. At least I hope so ;) </i><br />
<br />
We have recently had a few scientific embarrassments, where a big discovery announced with great fanfare was subsequently overturned by new evidence. We still remember OPERA's faster-than-light neutrinos, which turned out to be a loose cable, or BICEP's gravitational waves from inflation, which turned out to be galactic dust emission... It seems that another such embarrassment is coming our way: LIGO's recent discovery of gravitational waves emitted in a black hole merger may share a similar fate. There are reasons to believe that the experiment was hacked, and the signal was injected by a prankster. <br />
<br />
From the beginning, one reason to be skeptical about LIGO's discovery was that the signal seemed too beautiful to be true. Indeed, the experimental curve looked as if taken out of a textbook on general relativity, with a clearly visible chirp signal from the inspiral phase, followed by a ringdown signal when the merged black hole relaxes to the Kerr state. The reason may be that it *is* taken out of a textbook. This is at least what is strongly suggested by recent developments.<br />
<br />
On <a href="http://www.evilzone.org/">EvilZone</a>, a well-known hacker's forum, a hacker using a nickname <i>Madhatter</i> was boasting that it was possible to tamper with scientific instruments, including the LHC, the Fermi satellite, and the LIGO interferometer. When challenged, he or she uploaded a piece of code that allows one to access LIGO computers. Apparently, the hacker took advantage the same backdoor that allows the selected members of the LIGO team to inject a fake signal in order to test the analysis chain. This was brought to attention of the collaboration members, who decided to test the code. To everyone's bewilderment, the effect was to reproduce exactly the same signal in the LIGO apparatus as the one observed in September last year!<br />
<br />
Even though the traces of a hack cannot be discovered, there is little doubt now that there was foul play involved. It is not clear what the hacker's motive was: was it just a prank, or an elaborate plan to discredit the scientists? What is even more worrying is that the same thing could happen in other experiments. The rumor is that the ATLAS and CMS collaborations are already checking whether the 750 GeV diphoton resonance signal could also have been injected by a hacker. Jesterhttp://www.blogger.com/profile/08947218566941608850noreply@blogger.com15