Friday, 18 November 2011
Here is a brief report on the long-awaited combination of the ATLAS and CMS Higgs search results, based on about 2 inverse femtobarns of data collected last summer. Overshadowed by faster-than-light neutrinos, the result was presented today at the HCP conference in Paris. It had been awaited with as much thrill as the results of parliamentary elections in the former Soviet Union. Indeed, in this fast-moving world 2 months ago is the infinitely distant past. Today particle physicists are busy trading rumors about the results based on the entire LHC data set of 5 inverse femtobarns. Moreover, the combined limits were not difficult to guess, and a reasonable approximation of the official combination had long been available via viXra log.
Nevertheless, it's the chronicler's duty to report: the Standard Model Higgs is excluded at 95% confidence level for all masses between 141 GeV and 476 GeV.
Meanwhile, ATLAS and CMS have already had a first look at the full data set. Continuing the Soviet analogy, uneasy rumors are spreading among the working class and the lower-ranked party officials. Is the first secretary dead? Or on life support? Or, if he's all right, why isn't he showing up in public? We expect an official update at the 21st Congress of the Communist Party, sorry, the December CERN Council week. And wild speculations on Twitter well before that :-)
See the public note for more details about the combination.
Monday, 14 November 2011
LHCb has evidence of new physics! Maybe.
It finally happened: we have the first official claim of new physics at the LHC. Amusingly, it comes not from ATLAS or CMS but from LHCb, a smaller collaboration focused on studying processes involving hadrons formed from b- and c-quarks. The physics of heavy quark flavors is a subject for botanists and, if I only could, I would never mention it on this blog. Indeed, the mere thought of the humongous number of b- and c-hadrons and of their possible decay chains gives me migraines. Moreover, all this physics is customarily wrapped in a cryptic notation such that only the chosen few can decipher the message. Unfortunately, one cannot completely ignore flavor physics, because it may be sensitive to new particles beyond the Standard Model, even very heavy ones. This is especially true for CP-violating observables: compared to the small Standard Model contributions, new physics contributions may easily stand out.
So, the news of the day is that LHCb has observed direct CP violation in neutral D-meson decays. More precisely, using 0.58 fb-1 of data they measured the difference of the time-integrated CP asymmetries of the D → π+π- and D → K+K- decays. The result is 3.5 sigma away from the Standard Model prediction, which is approximately zero!
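For the record, the number quoted by LHCb (reproduced here from memory from the conference note, so treat the exact digits with a grain of salt) is
ΔA_CP ≡ A_CP(K+K-) - A_CP(π+π-) = (-0.82 ± 0.21 (stat) ± 0.11 (syst))%.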
Here is an explanation in a slightly more human language:
- Much like b-quarks, c-quarks can form relatively long-lived mesons (quark-antiquark bound states) with lighter quarks. Since mesons containing a b-quark are called B-mesons, those containing a c-quark are, logically, called D-mesons. Among these are 2 electrically neutral mesons: D0 = charm + anti-up (c ubar), and D0bar = anti-charm + up (cbar u). CP symmetry relates particles and antiparticles, in this case D0 and D0bar. Note that D0 and D0bar mix, that is, they can turn into one another; this is an important and experimentally established phenomenon which in general may be related to CP violation, however in the present story it plays a lesser role.
- D-mesons are produced at the LHC with a huge cross-section of a few millibarns. LHCb is especially well equipped to identify and study them; in particular, it can easily tell kaons from pions thanks to its Cherenkov sub-detectors.
- Here we are interested in D-meson decays to the CP eigenstate final states f+f-, where f = π, K. Thus the D0 → f+f- and D0bar → f+f- processes are related by a CP transformation, and we can define the CP asymmetry as A_CP(f+f-) = [Γ(D0 → f+f-) - Γ(D0bar → f+f-)] / [Γ(D0 → f+f-) + Γ(D0bar → f+f-)].
If CP were an exact symmetry of the universe, the asymmetries defined above would be zero: the decay probabilities of D0 and D0bar into pions/kaons would be the same. The Standard Model does violate CP, but its contribution is estimated to be very small in this case, as I explain below.
- At the Tevatron and the B-factories the asymmetries A_CP(π+π-) and A_CP(K+K-) were measured separately (with results consistent with zero). LHCb quotes only the difference A_CP(K+K-) - A_CP(π+π-) because, at a proton-proton collider, the D0 and D0bar mesons are produced at different rates. That introduces a spurious asymmetry at the detection level which, fortunately, cancels out in the difference. Besides, the mixing contribution to the asymmetry approximately cancels in the difference as well. Thus, the observable measured by LHCb is sensitive to so-called direct CP violation (as opposed to indirect CP violation, which proceeds via meson-antimeson mixing).
- LHCb has so far collected 1.1 inverse femtobarns (fb-1) of data, about 5 times less than ATLAS and CMS, because the LHCb detector cannot handle as large a luminosity. The present analysis uses about half of the available data set. The error of the measurement is still dominated by statistics, so analyzing the full data set should shrink the error by roughly a factor of Sqrt[2].
- What does the good old Standard Model have to say about these asymmetries? First of all, any CP asymmetry has to arise from interference between 2 different amplitudes entering with different complex phases (for a direct CP asymmetry one in fact needs both different weak phases and different strong phases). In the Standard Model the 2 dominant amplitudes are:
#1: The tree-level weak decay amplitude. For the K+K- final state it involves the CKM matrix elements V_us and V_cs, and is therefore suppressed by one power of the Cabibbo angle, the parameter whose approximate value is 0.2.
#2: The one-loop amplitude which, for reasons that should be kept secret from children, is called the penguin. Again it involves the CKM matrix elements V_us and V_cs, plus a loop suppression factor α_strong/π. However, as is well known, any CP violation in the Standard Model has to involve the 3rd-generation quarks, in this case a virtual b-quark in the loop entering via the V_cb and V_ub CKM matrix elements.
The corresponding D0 → π+π- amplitudes are of the same order of magnitude.
- All in all, the direct CP asymmetry in the D0 → π+π- and D0 → K+K- decays is parametrically proportional to (α_strong/π) (Vcb*Vub)/(Vus*Vcs), which is suppressed by the 4th power of the Cabibbo angle on top of a loop factor. This huge suppression leads to an estimate of the Standard Model contribution to the CP asymmetry at the level of 0.01-0.1% (see the numerical sketch at the end of this post). On the other hand, LHCb finds a much larger magnitude of the asymmetry, of order 1%.
- Is it obviously new physics? Experts are not sure, because D-mesons are filthy bastards. With masses around 2 GeV, they sit precisely in the no man's land between perturbative QCD (valid at energies >> 1 GeV) and low-energy chiral perturbation theory (valid below roughly 1 GeV). For this reason, making precise Standard Model predictions in the D-meson sector is notoriously difficult. It might well be that the above estimates are too naive; for example, the penguin amplitude may be enhanced by non-calculable QCD effects by a much-larger-than-expected factor.
- And what is it if it indeed is new physics beyond the Standard Model? This was definitely not the place where theorists most expected new physics to show up. Currently there are almost no models on the market that predict CP violation in D0 decays without violating other constraints. I'm aware of one that uses squark-gluino loops to enhance the penguin; let me know about other examples. This gap will surely be filled in the coming weeks, and I will provide an update once new interesting examples are out.
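Since I referred to it above, here is the back-of-the-envelope arithmetic behind the Standard Model estimate, written out as a few lines of Python. It is only a parametric sketch: the value of α_strong at the charm scale is rough, the CKM ratio is taken as λ^4, and O(1) hadronic factors are ignored.

import math

alpha_s = 0.35     # strong coupling at ~2 GeV (rough value)
lam = 0.22         # Cabibbo angle (Wolfenstein lambda)

loop_factor = alpha_s / math.pi   # penguin loop suppression
ckm_ratio = lam ** 4              # parametric size of |Vcb*Vub| / |Vus*Vcs|

asymmetry = loop_factor * ckm_ratio
print(f"naive SM estimate of the asymmetry ~ {asymmetry:.1e}")  # ~ 3e-4, i.e. the 0.01-0.1% ballpark

Compared with the roughly 1% magnitude measured by LHCb, this naive estimate falls short by more than an order of magnitude, which is the whole point.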
Friday, 11 November 2011
Double Dare
(There's nothing like a little rant on a holiday morning)
Last Wednesday I noticed this press release from Double Chooz which announced "the observation of the disappearance of (anti-)neutrinos in the expected flux observed from the nuclear reactor", which implies "complementary and important evidence of oscillation also involving the third angle". Wow, I thought, they've nailed down theta13! But it turned out to be much more exciting than just another fundamental parameter. A more careful reading reveals that, based on the first 100 days of data, Double Chooz found sin^2(2 theta13) = 0.085 ± 0.051. Clearly something interesting is going on. To an untrained eye, the Double Chooz result is... consistent with zero; moreover, it is similar to, though slightly less precise than, the null result from MINOS: sin^2(2 theta13) = 0.04 ± 0.04. However, now in the 21st century one needs a more inspired approach to statistics...
To better understand what's going on, let's go back a few months. In June this year the T2K experiment also issued a press release about theta13, announcing "an indication that muon neutrinos are able to transform into electron neutrino". T2K is an experiment in Japan where a beam of muon neutrinos with GeV energies is produced at J-PARC and sent over a 300 km tunnel ;-) to the SuperKamiokande detector. It is established that muon neutrinos can oscillate into tau neutrinos, the process being governed by the theta23 angle in the MNS neutrino mixing matrix, whose value is close to 45 degrees. If the angle theta13 in that same matrix is non-zero, then the process ν_μ → ν_e is also allowed. For this reason, T2K was searching for the appearance of electron neutrinos in the muon neutrino beam. The T2K announcement was based on the detection of 6 electron neutrino events, versus about 2 expected from background. At the time some of us wondered why they put such a spin on a merely 2.5 sigma excess, given that neutrino experiments had already produced many confusing results with similar or larger significance (LSND, MiniBooNE, later OPERA). After all, neutrino beam experiments are plagued by difficult systematic uncertainties due to our incomplete understanding of the dirty hadronic physics involved in the beam production. Indeed, the subsequent results from MINOS turned out to disfavor the T2K central value of sin^2(2 theta13) of about 0.11.
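For orientation, the leading-order appearance probability in vacuum (the standard formula, neglecting matter effects and the CP-violating phase) is
P(ν_μ → ν_e) ≈ sin^2(theta23) sin^2(2 theta13) sin^2(Δm^2_31 L/4E),
so with theta23 close to 45 degrees, any electron-neutrino appearance at this baseline and energy is a fairly direct measure of theta13.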
In hindsight, the T2K press release was a pioneering step in data interpretation, and the gauntlet was recently picked up by Double Chooz. The latter experiment targets the transformation of anti-electron neutrinos into the other types which, at short distances, is also controlled by the theta13 angle. More precisely, Double Chooz is looking for the disappearance of MeV anti-electron neutrinos at a distance of about 1 km from the French nuclear reactor Chooz B, where the antineutrinos are produced. They observe a small deficit of events in the energy range 2-5 MeV compared to the no-oscillation hypothesis. While T2K was spinning a less-than-3-sigma excess, the Double Chooz press release made a further bold step and presented a less-than-2-sigma one as evidence. There is still a long way to go before we adopt the standards used in psychology and the behavioral sciences. But, little by little, this approach could be applied to wider areas of physics, especially to high energy physics, which suffers from a dearth of discoveries. Just think of it: if we could call a 2 sigma excess an indication, then every week the LHC could deliver an indication of new physics!
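For comparison, the relevant quantity at a reactor is the survival probability, which to a good approximation (again the textbook formula, with matter effects negligible at these energies and distances) reads
P(ν̄_e → ν̄_e) ≈ 1 - sin^2(2 theta13) sin^2(Δm^2_31 L/4E),
so a deficit of anti-electron neutrinos at L ≈ 1 km and E of a few MeV measures theta13 essentially independently of the other mixing parameters.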
But, seriously... I also expect that the value of theta13 is non-zero and that the experiments may be seeing the first hint of it. One argument is that global fits to the neutrino oscillation data point to sin^2(2 theta13) ≈ 0.05, about 3 sigma away from zero. Besides, there is no compelling theoretical reason why theta13 should be zero (and if you believe in anarchy, there is a reason to the contrary). The smoke should clear up in the next few years thanks to Double Chooz, Daya Bay, NOvA, and others. However, the current experimental situation is far from conclusive, and the latest Double Chooz results did not change much in this respect, as can be seen in the global fit. I guess this could have been said without diminishing the importance of Double Chooz, and without treating the public as retarded...
See the web page of Double Chooz and this post on Quantum Diaries for more details.
Wednesday, 2 November 2011
Experimental success, theoretical debacle
(This post is an attempt to catch up with October subjects that were trendy while I was on leave from blogging, even though I suppose no one cares anymore)
This year's Nobel prizes were by all means exceptional. In blatant disregard of noble traditions, the prize in physics was given for a groundbreaking(!) and recent(!!) discovery, without omitting any of the key contributors(!!!). Indeed, the discovery of accelerated expansion is one of the greatest triumphs of modern science. The measurements of supernova brightness in the '90s and subsequent experiments have demonstrated that the universe is currently dominated by a form of energy characterized by negative pressure. In fact, this "dark energy" has the properties of vacuum energy, aka the cosmological constant, first introduced by Einstein for completely wrong reasons. In science, experimental progress usually brings better theoretical understanding. And that's another exceptional thing about the recent Nobel: almost 15 years later, our understanding of the cosmological constant in the context of particle physics models is as good as non-existent.
The cosmological constant problem has been haunting particle physicists for nearly a century now. We know for a fact that all forms of energy gravitate, including the energy contributed by quantum corrections. Thus, we know that diagrams with a graviton coupled to matter loops yield a non-vanishing contribution to scattering amplitudes. On the other hand, the sum of very similar diagrams with the graviton coupled to vacuum matter loops must be nearly zero, otherwise the approximate Minkowski vacuum in which we live would be destabilized. The contribution of the electron loop alone is about 50 orders of magnitude larger than the experimental limit. On top of that, there should be classical contributions to the vacuum energy, for example from the QCD condensate and from the Higgs potential, which are also naturally tens of orders of magnitude larger than the limit.
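To attach very rough numbers (orders of magnitude only, quoted from memory): the observed dark energy density corresponds to ρ_Λ ≈ (2×10^-3 eV)^4 ≈ 10^-47 GeV^4, while a generic quantum contribution from a particle of mass M, or from modes up to a cutoff Λ, scales like M^4/(16π^2) or Λ^4/(16π^2). Plugging in any particle physics scale overshoots the observed value by many tens of orders of magnitude, and by more than a hundred for a Planck-scale cutoff.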
The usual attitude in theory is that when something is predicted to be infinite, one assumes it must be zero, and that was a good enough approach before 1998. The discovery of accelerated expansion was a game-changer, because it experimentally proved that the vacuum energy is real and affects the cosmological evolution; therefore the problem can no longer be swept under the carpet. In fact, the problem is now twofold. Not only do we need to understand why the cosmological constant takes a highly unnatural value from the point of view of the effective low-energy theory (the old cosmological constant problem), but we also need to understand why it is of the same order as the matter energy density today (the coincidence problem).
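For concreteness (standard ΛCDM numbers, quoted roughly): today Ω_Λ ≈ 0.7 and Ω_matter ≈ 0.3, so the two energy densities agree to within a factor of a few, even though the matter density dilutes as 1/a^3 with the expansion while the vacuum energy stays constant.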
Neither the first nor the second problem has found a satisfactory solution to date, and not for lack of trying. People have attacked the problem via IR and/or UV modifications of gravity, quintessence fields, self-tuning or attractor solutions, fancy brane configurations in extra dimensions, elephants standing on turtles, space-time wormholes, etc.; see also the comment section for crazier examples. In vain: all these solutions either rely on theoretically uncontrollable assumptions, or they just shift the problem somewhere else. The situation is so dramatic that there remain only 2 solutions that are technically correct:
- The anthropic principle: the cosmological constant is an environmental quantity that takes different values in different patches of the universe; however, more-or-less intelligent observers can see only those tiny patches where it is unnaturally small.
- The misanthropic principle: the cosmological constant is being adjusted manually by seven invisible dwarfs wearing red hats.
Maybe theory needs another clue that may be provided by one of the future experiments. The Planck satellite will publish an update on the cosmological parameters in 2013, although the rumor is that there won't be any revolution. In the asymptotic future there is ESA's Euclid satellite, which will precisely measure the distribution of dark matter and dark energy in the universe. Will I live to see the day when the problem is solved? My bet is no, but I'd love to be proven wrong...
For the best summary of the cc problem, read Section 1 of Polchinski's review.