What a year it was! It started with an earthquake, followed by rising tension. In previous years I had to think hard to come up with a decent blogging subject; this year it was enough to just sit and wait for the next breaking news. Here's a pick of the most important events in the particle world in the year 2011.
January: CDF finds new physics in tops
Measurements of the forward-backward asymmetry of top pair production at the Tevatron have been showing interesting hints of new physics for a long time. CDF found that this asymmetry displays a very steep dependence on the energy of the colliding partons (that is, on the invariant mass of the top pair), which represents a 3.4σ departure from the Standard Model predictions. Unfortunately, D0 later did not confirm the steep mass dependence, and we need to wait longer for the matter to be clarified.
April: CDF gives us goosebumps
CDF studied the invariant mass spectrum of jet pairs produced in association with a W boson, and found a bump near a dijet mass of 150 GeV. After the subsequent update the significance of the bump exceeded 4σ. It would be clear evidence of new physics if not for the evil D0, who did not find any bump in their data. The origin of the bump will probably remain a mystery forever, much like the Roswell incident, because the Tevatron management is not very determined to resolve it.
April: Xenon100 does not find dark matter
Dark matter searches gain sensitivity at an impressive pace. Xenon100, currently the most sensitive detector for vanilla dark matter particles, published results based on the first 100 days of running. No luck so far. As a consolation, for each experiment that does not find dark matter there is one that does: this year CRESST joined the latter club, while CoGeNT announced an annual modulation of its detection rate. In any case, the hunt continues; it will take several more years before we can conclude we're looking in the wrong place.
April: Red Higgs Alarm
It was the first Higgs alarm this year: an internal ATLAS memo leaked to a blog claimed 4σ evidence for a 115 GeV Higgs in the γγ channel. The second time the alarm buzzed was after the EPS conference, when the unpublished combination of ATLAS and CMS was showing nearly 4σ evidence for a Higgs around 145 GeV. In both cases the evidence was quickly washed away by more data from the LHC. One positive thing we learned is that in the digital era collaboration secrecy is moot :-)
Summer: LHC bites deep into new territory
Around the EPS and Lepton-Photon conferences the LHC presented a number of new physics searches based on 1 inverse femtobarn of data. Alas, nothing exciting was found. SUSY, at least in its simplest incarnations, is being pushed above 1 TeV. And so are extra dimensions, technicolor, Z primes, dijet resonances, and any broader scenario that is not stealthy by design. Cold sweat is running down the backs of particle theorists...
September: OPERA goes superluminal
The OPERA collaboration stunned the world by announcing that neutrinos produced at CERN arrive at their detector in Gran Sasso about 60 nanoseconds earlier than expected if they traveled at the speed of light. However, the physics community remains unconvinced: there are a number of very strong phenomenological and theoretical arguments that the OPERA result cannot be right. What do I think? At first I thought the pulse shape was the culprit, but that criticism was subsequently addressed by repeating the measurement with 3-nanosecond pulses. Now I think they should patent their set-up as a GPS synchronization tool :-)
September: US pulls the plug on Tevatron
It was like seeing an old dog put to death: you know it's for the better, but still it feels so sad. Of course, the Tevatron has not said the final word yet; we're still waiting for the collaborations to analyze the full data set. But the number of searches where the Tevatron can compete with the LHC is shrinking rapidly...
November: LHCb finds CP violation in charm
LHCb found that decays of D mesons into π+π- and K+K- violate CP symmetry (at least one of them does). We have seen CP violation in the K and B meson sectors, but the LHCb result was a big surprise: there was a lore that discovering CP violation in the charm meson sector at the 1% level would be a clear sign of new physics (although that is no longer so clear). Whatever it is, we learned something new about the world...
December: LHC glimpses Higgs
Higgs has been finally cornered and we think we're seeing the tip of his hat. Both ATLAS and CMS observe an excess of events in the γγ, ZZ and WW channels consistent with a Higgs boson of mass around 125 GeV. To be continued and ultimately resolved in 2012.
So raise your glasses, this one's over. According to the Mayans, the year 2012 will be a bit shorter, but it shouldn't be less eventful ;-)
Saturday, 31 December 2011
Monday, 19 December 2011
Sleeping with the Higgs
All my life the Higgs was sort of a legend. Something like a unicorn: you know well what it's supposed to look like, but you don't seriously expect to see it. Since last week it feels different. Of course, everybody from the CERN DG to common bloggers justly calls for caution and not jumping to conclusions. Of course, we've seen much larger excesses go away. Of course, there are statistics, likelihoods, look-elsewhere effects, and other attributes of civilization. But there's also the good old hunch. The latter, after digesting the totality of ATLAS and CMS findings, tells me there's about a 90% chance that the Higgs exists in the mass range 124-126 GeV. This high estimate follows from the remarkable consistency of the signal between ATLAS and CMS, and between different search channels. Moreover, that mass neatly fits into the ballpark suggested by electroweak precision fits. Last but not least, most of the remaining Higgs mass range is either excluded or disfavored, except maybe around 119 GeV. In other words, I don't care much about the look-elsewhere correction because there isn't any elsewhere to speak of.
Thus, I'm slowly getting used to the idea of the Higgs becoming flesh and bones. On one hand this gives me thrills, as one of the greatest mysteries of the universe is being solved in front of our eyes. At the same time I can see dark clouds on the horizon. There has been a well-founded hope that the Higgs would show some unexpected properties, thus opening the door to new physics. In the end, it would not make sense if a theoretical concept put forward 40+ years ago showed up in Nature in precisely the predicted form, would it? Yet what ATLAS and CMS are seeing looks dangerously close to the Standard Model Higgs: the signal is showing up everywhere it should, and with roughly the size it should. Of course (now this is a serious of course) we're in no position yet to make any quantitative statements about the properties of the Higgs. Indeed, measuring the couplings of the Higgs to matter will be the centerpiece of the experimental particle physics program for the next 20 years. The more precisely we measure these couplings, the bigger the chance to catch a glimpse of new physics. Still, it is getting more likely than ever that the Standard Model is the correct description of physics at TeV energies. This is dubbed the nightmare scenario: in the first place a nightmare for particle theorists, who become expendable, but in a 30-year perspective also a nightmare for the entire particle physics program. Before the start of the LHC I was giving the nightmare scenario a 50% chance. It's more than that now. Once the Higgs is formally discovered and it fits the Standard Model one... I'm getting cold shivers just thinking about it...
PS. Regardless, isn't it a beauty ;-)
Tuesday, 13 December 2011
Visual on Higgs
Today CERN presented the latest Higgs search results. But, first things first. The live webcast of the seminar was stalling, choking, or breaking down completely. Well into the 21st century, a huge international lab doing cutting-edge science (and boasting of having invented the world wide web) is unable to provide a smooth transmission of its most important public presentation of the year. One may call it ironic, or repeat Didier Drogba's famous words after the 2009 Champions League semi-finals.
OK, after venting my anger I can talk some physics.
- The main news, correctly rumored on blogs before, is that there is a significant excess of Higgs-like events corresponding to a Higgs mass of ~125 GeV. More precisely, the local significance of the ATLAS excess is 3.6σ, or 2.5σ when the "look-elsewhere effect" in the 110-146 GeV mass range is taken into account. For CMS the significance is somewhat smaller: 2.6σ without, or 1.9σ with, the look-elsewhere effect (a back-of-the-envelope version of this correction is sketched after this list). Separately, these excesses would be shrugged off; combined, they are very suggestive that we're seeing the real thing at last.
- In ATLAS, only the H→γγ and H→ZZ*→2l+2l- channels have been updated with the full 2011 dataset. Those are the ones where statistics is limited due to the small Higgs branching fractions. However, in these final states the 4-momentum of the Higgs can be fully reconstructed to good precision, offering a very good mass resolution of order 2 GeV. CMS updated all the main channels, also those that do not provide a lot of mileage near 125 GeV at this point.
- The excess is seen by both experiments and in each of these channels. The excess in H→γγ peaks around 124 GeV in CMS, and around 126 GeV in ATLAS, which I guess is perfectly consistent within the resolution. In the 4-lepton channel, ATLAS has 3 events just below 125 GeV, while CMS has 2 events just above 125 GeV. On top of that there's the long-standing excess in the H→WW*→l+l-2ν channel, which however is not the driving force anymore. It is precisely this overall consistency that makes the signal so tantalizing.
- ATLAS observes slightly more events than expected from the Standard Model Higgs at that mass: the best fit to the Higgs cross section corresponds to roughly 1.5 times the Standard Model value. Thus, one way or another, the 125 GeV thing is partly a fluke. It may be good luck if these events are due to the Higgs, or bad luck if they are due to other Standard Model backgrounds...
- CMS now excludes the Higgs down to 127 GeV. The ATLAS limits are slightly worse, stopping around 130 GeV, but thanks to lucky background fluctuations ATLAS happened to exclude the low-mass region between 112.7 and 115.5 GeV. The latter is a spectacular confirmation that what the ALEPH experiment saw back in 2001 was a genuine fundon (fundons are elementary particles produced in high-energy colliders near the end of the budgetary cycle).
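For the statistically inclined, here is a minimal sketch of the look-elsewhere correction quoted in the first bullet. It treats the scanned mass window as a set of independent, resolution-sized bins; that is a crude assumption of my own, not the method the collaborations actually use:

```python
# A crude look-elsewhere estimate: treat the 110-146 GeV window as N
# independent bins of ~2 GeV width (the diphoton mass resolution quoted above).
# The collaborations use more careful methods; this only shows the mechanics.
from scipy.stats import norm

def global_significance(local_sigma, n_trials):
    p_local = norm.sf(local_sigma)            # one-sided local p-value
    p_global = 1 - (1 - p_local) ** n_trials  # chance of a fluke anywhere in the window
    return norm.isf(p_global)

n_bins = (146 - 110) // 2                     # ~18 quasi-independent windows
print(global_significance(3.6, n_bins))       # ~2.8 sigma, vs the quoted 2.5 for ATLAS
```

The naive number lands within a fraction of a sigma of the official one; the residual difference is the price of pretending the bins are independent.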
Monday, 12 December 2011
Tinker Tailor Soldier Higgs
Tomorrow the ATLAS and CMS experiments will reveal new results of the Higgs search based on the full 2011 dataset. The date of the presentation was not chosen accidentally: December 13 is the anniversary of the capture of Saddam Hussein. Is this suggesting that tomorrow we'll see another notorious fugitive dragged out of its hole?
Two things are 100% certain because they appeared in an official statement from CERN:
- Neither experiment will announce the discovery of the Higgs, in the sense of a signal with a significance of 5 sigma.
- Neither experiment will exclude the Standard Model Higgs over the whole low-mass range.
The rest below is less official, distilled from the rumors widely circulating over the past weeks:
- The Standard Model Higgs boson is excluded down to approximately 130 GeV, but not below.
- As already reported widely on blogs, both experiments have an excess of events consistent with the Higgs particle of mass around 125 GeV.
- The excess is larger at ATLAS, where it is driven by the H→γγ channel, and supported by 3 events reconstructed in the H→ZZ*→4l channel at that mass. The combined significance is around 3 sigma, the precise number depending on the statistical methods used, in particular on how one includes the look-elsewhere effect.
- CMS has a smaller excess at 125 GeV, mainly in the H→γγ channel. They have 3 events in H→4l as well, but they are oddly shifted to somewhat lower masses of order 119 GeV. All in all, the significance at 125 GeV in CMS is only around 2 sigma.
- With some good faith, one could cherish other 2-sigmish bumps in the γγ channel, notably around 140 GeV. Those definitely cannot be the signal of the Standard Model Higgs, but could well be due to Higgs-like particles in various extensions of the Standard Model.
You're of course welcome to fill in more details or paste an excerpt of the ATLAS or CMS draft into the comment section ;-)
Friday, 2 December 2011
Update on CP violation in charm
This post came out a bit cryptic. Don't even start if you're not a huge fan of flavor physics.
There have been some new developments since 2 weeks ago, when LHCb came out with the evidence of CP violation in the charm meson sector. Recall that LHCb studied D-meson decays to π+π- and K+K- meson pairs. The observable of interest was the asymmetry of D- and anti-D-meson decay rates to these final states, A(π+π-) and A(K+K-). A non-zero value of either of these two asymmetries signals CP violation (particle and anti-particle decay rates to the same CP eigenstate being different). LHCb found that the difference ΔA = A(K+K-) - A(π+π-) is (-0.82±0.24)%, which means that, with large confidence, at least one of these asymmetries is non-zero.
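For orientation, here is a naive significance check of my own, assuming a Gaussian error on the quoted total uncertainty:

```python
# How far (-0.82 +- 0.24)% is from zero, assuming a Gaussian error.
from scipy.stats import norm

delta_a, sigma = -0.82, 0.24        # LHCb central value and total error, in percent
z = abs(delta_a) / sigma            # ~3.4 standard deviations
p = 2 * norm.sf(z)                  # two-sided p-value, ~7e-4
print(f"{z:.1f} sigma, p = {p:.0e}")
```

which matches the ~3.5σ significance quoted by LHCb.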
The most pressing question is whether the LHCb result implies new physics. Experts emphatically agree that the answer is definitely maybe. From the existing literature one can learn that "...CP violation from new physics must be playing a role if an asymmetry is observed with present experimental sensitivities O(1%)", and "...observation of CP violation in the decay of D mesons will not necessarily be a signal of new physics...". Now, a paper from last week reassesses the Standard Model predictions and clarifies that the LHCb result may or may not signal new physics.
The problem is that we are lacking reliable methods to compute processes involving D mesons. Normally, when dealing with heavy flavored mesons, one employs an effective theory where the heavy Standard Model and possibly new physics particles have been integrated out, leaving effective 4-quark interactions. This allows one to compute observables at the leading order in the expansion in powers of (1 GeV/m_quark), the former being the typical scale of QCD effects, and the latter the suppression scale of higher-dimensional operators. That is a decent expansion parameter for bottom quarks, but not so much for charm quarks. Ignoring that convergence issue and taking the leading order predictions at face value, one arrives at the estimate ΔA∼0.1%, well below the value measured by LHCb. The new paper by Brod et al. attempts to estimate the contribution of the higher order operators to the asymmetry, using some guidance from other experimental data on D-mesons. For example, one finds that the branching fraction for D0 → K0 K0bar, which receives contributions only at next-to-leading order in 1/mc, is about five times smaller than the branching fraction of D0 → K+ K-, which receives leading order contributions. That means that the respective amplitudes differ only by a factor of two, which in turn proves that the higher order 1/mc contributions can be significant. Taking that into account, the paper concludes that the Standard Model can account for |ΔA| as large as 0.4%, uncomfortably close to the LHCb measurement.
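The branching-fraction-to-amplitude step in that argument is simple arithmetic, spelled out below (my own illustration, not a calculation from the paper):

```python
# Decay rates scale as |amplitude|^2, so a factor-5 difference in branching
# fractions corresponds to only a sqrt(5) ~ 2.2 difference in amplitudes --
# hence the formally subleading 1/mc terms are not numerically small.
from math import sqrt

br_ratio = 5.0                       # BR(D0 -> K+K-) / BR(D0 -> K0 K0bar)
amp_ratio = sqrt(br_ratio)
print(f"amplitude ratio ~ {amp_ratio:.1f}")
```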
Another paper from last week takes a less pessimistic approach. It simply assumes that the asymmetry measured by LHCb is dominated by new physics and attempts to understand the constraints on the underlying model. In the language of effective theory, to explain the LHCb result one needs to include a ΔC=1 four-quark operator (with 1 charm and 3 light quarks, [sbar Γ c][ubar Γ s]). There are many possibilities, differing by the structure of Lorentz and color indices, that lead to the same observable asymmetry. The scale suppressing this higher-dimensional operator should be of order 10 TeV to match the LHCb result. This is actually a very small suppression: from the non-observation of CP violation in D meson mixing we know that the scale suppressing generic ΔC=2 4-quark operators (with 2 charm and 2 light quarks) must be close to 10 000 TeV. On the other hand, once a ΔC=1 operator is present, ΔC=2 ones will necessarily be generated by loop corrections. This means that a generic new physics model explaining the LHCb result will be in conflict with the data on D-meson mixing. However, the paper by Isidori et al. concludes that a subset of the ΔC=1 operators explaining the LHCb asymmetry is consistent with all other experimental data. In particular, the ΔC=1 operators involving only right-handed quarks are not excluded by the data on D-meson mixing and on direct CP violation in kaon decays (ε'/ε in the flavor jargon).
Finally, a small experimental update. CDF posted on arXiv a new paper that is similar to the earlier public note but contains one bonus track. The known result is the separate CP asymmetries of D-meson decays to pions and kaons:
A(K+K-) = (-0.24 ± 0.24)%
A(π+π-) = (+0.22 ± 0.26)%
The bonus is that they did the subtraction and also present the difference of the asymmetries:
ΔA = (-0.46 ± 0.33)%
which can be directly compared to the LHCb result. The CDF asymmetry difference is perfectly consistent with the LHCb one, and it's also consistent with zero. This is a hint that the true asymmetry difference probably lies at the lower end of the range suggested by the LHCb measurement, and therefore closer to the Standard Model prediction. One more point for the Standard Model, but the game is not over yet.
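Both the subtraction and the consistency statement are easy to verify; here is a naive cross-check of my own, assuming uncorrelated Gaussian errors (the quoted CDF error of 0.33 is a bit smaller than the naive one because correlations partially cancel):

```python
# Naive check of the CDF numbers (all in percent) and their compatibility
# with LHCb, assuming uncorrelated Gaussian errors.
from math import hypot

a_kk, e_kk = -0.24, 0.24
a_pp, e_pp = +0.22, 0.26
delta_a = a_kk - a_pp                          # -0.46, as quoted
naive_err = hypot(e_kk, e_pp)                  # ~0.35, vs 0.33 after correlations

lhcb, e_lhcb = -0.82, 0.24
pull = (delta_a - lhcb) / hypot(0.33, e_lhcb)  # ~0.9 sigma apart: consistent
print(delta_a, round(naive_err, 2), round(pull, 2))
```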
Friday, 18 November 2011
New Higgs combination is out
Here is a brief report on the long-awaited combination of the ATLAS and CMS Higgs search results, using about 2 inverse femtobarns of data collected last summer. Overshadowed by faster-than-light neutrinos, that result was presented today at the HCP conference in Paris. It had been expected with as much thrill as the results of parliamentary elections in the former Soviet Union. Indeed, in this fast-moving world 2 months ago is the infinite past. Today particle physicists are rather busy trading rumors about the results based on the entire LHC data set of 5 inverse femtobarns. Moreover, the combined limits had not been difficult to guess, and a reasonable approximation of the official combination had long been available via viXra log.
Nevertheless, it's the chronicler's duty to report: the Standard Model Higgs is excluded at 95% confidence level for all masses between 141 GeV and 476 GeV.
Meanwhile, ATLAS and CMS have already had the first look at the full data set. Continuing the Soviet analogy, an uneasy rumor is spreading among the working class and the lower-ranked party officials. Is the first secretary dead? Or on life support? Or, if he's all right, why isn't he showing up in public? We expect an official update at the 21st Congress of the Communist Party, sorry, the December CERN Council week. And wild speculations on Twitter well before that :-)
See the public note for more details about the combination.
Monday, 14 November 2011
LHCb has evidence of new physics! Maybe.
It finally happened: we have the first official claim of new physics at the LHC. Amusingly, it comes not from ATLAS or CMS, but from LHCb, a smaller collaboration focused on studying processes with hadrons formed by b- and c-quarks. Physics of heavy quark flavors is a subject for botanists and, if I only could, I would never mention it on this blog. Indeed, a mere thought of the humongous number of b- and c-hadrons and of their possible decay chains gives me migraines. Moreover, all this physics is customarily wrapped up in a cryptic notation such that only the chosen few can decipher the message. Unfortunately, one cannot completely ignore flavor physics because it may be sensitive to new particles beyond the Standard Model, even very heavy ones. This is especially true for CP-violating observables because, compared to small Standard Model contributions, new physics contributions may easily stand out.
So, the news of the day is that LHCb observed direct CP violation in neutral D-meson decays. More precisely, using 0.58 fb-1 of data they measured the difference of time-integrated CP asymmetries of D→ π+π- and D→ K+K- decays. The result is
ΔA = A(K+K-) - A(π+π-) = (-0.82 ± 0.24)%,
3.5 sigma away from the Standard Model prediction, which is approximately zero!
Here is an explanation in a slightly more human language:
- Much like b-quarks, c-quarks can form relatively long-lived mesons (quark-antiquark bound states) with lighter quarks. Since mesons containing a b-quark are called B-mesons, those containing a c-quark are, logically, called D-mesons. Among these are 2 electrically neutral mesons: D0 = charm + anti-up quark, and D0bar = anti-charm + up quark. CP symmetry relates particles and anti-particles, in this case D0 and D0bar. Note that D0 and D0bar mix, that is, they can turn into one another; this is an important and experimentally established phenomenon which in general may be related to CP violation, however in the present story it plays a lesser role.
- D-mesons are produced at the LHC with a huge cross-section of a few millibarns. LHCb is especially well equipped to identify and study them. In particular, they can easily tell kaons from pions thanks to their Cherenkov sub-detector.
- Here we are interested in D-meson decays to a CP invariant final state f+f-, where f = π,K. Thus, the D0 → f+f- and D0bar → f+f- processes are related by a CP transformation, and we can define the CP asymmetry as
A_CP(f+f-) = [Γ(D0 → f+f-) - Γ(D0bar → f+f-)] / [Γ(D0 → f+f-) + Γ(D0bar → f+f-)]
If CP were an exact symmetry of the universe, the asymmetries defined above would be zero: the decay probabilities of D0 and D0bar into pions/kaons would be the same. The Standard Model does violate CP, however its contributions are estimated to be very small in this case, as I explain in the following.
- At the Tevatron and B-factories the asymmetries A_CP(π+π-) and A_CP(K+K-) were measured separately (with results consistent with zero). LHCb quotes only the difference A_CP(K+K-) - A_CP(π+π-) because, at a proton-proton collider, the D0 and D0bar mesons are produced at different rates. That introduces a spurious asymmetry at the detection level which, fortunately, cancels out in the difference. Besides, the mixing contribution to the asymmetry approximately cancels out in the difference as well. Thus, the observable measured by LHCb is sensitive to so-called direct CP violation (as opposed to indirect CP violation, which proceeds via meson-antimeson mixing).
- LHCb has collected 1.1 inverse femtobarns (fb-1) of data, 5 times less than ATLAS and CMS, because the LHCb detector cannot handle as large a luminosity. The present analysis uses half of the available data set. The error of the measurement is still dominated by statistics, so analyzing the full data set will shrink the error by at least a factor of Sqrt[2].
- What does the good old Standard Model have to say about these asymmetries? First of all, any CP asymmetry has to arise from interference between 2 different amplitudes entering with different complex phases. In the Standard Model the 2 dominant amplitudes are:
#1: The tree-level weak decay amplitude. For D0 → K+K- it involves the CKM matrix elements V_us and V_cs, therefore it is suppressed by one power of the Cabibbo angle, the parameter whose approximate value is 0.2.
#2: One-loop amplitude which, for reasons that should be kept secret from children, is called the penguin. Again it involves the CKM matrix elements V_us and V_cs, and also a loop suppression factor α_strong/π. However, as is well known, any CP violation in the Standard Model has to involve the 3rd generation quarks, in this case a virtual b-quark in the loop entering via V_cb and V_ub CKM matrix elements.
The corresponding D0 → π+π- amplitudes are of the same order of magnitude.
- All in all, the direct CP asymmetry in the D0 → π+π- and D0 → K+K- decays is parametrically proportional to (α_strong/π)(Vcb*Vub)/(Vus*Vcs), which is suppressed by the 4th power of the Cabibbo angle and a loop factor; see the numerical sketch after this list. This huge suppression leads to an estimate of the Standard Model contribution to the CP asymmetry at the level of 0.01-0.1%. On the other hand, LHCb finds a much larger magnitude of the asymmetry, of order 1%.
- Is it obviously new physics? Experts are not sure, because D-mesons are filthy bastards. With masses around 2 GeV, they sit precisely in the no man's land between perturbative QCD (valid at energies >> 1 GeV) and low-energy chiral perturbation theory (valid between 100 MeV and 1 GeV). For this reason, making precise Standard Model predictions in the D-meson sector is notoriously difficult. It might well be that the above estimates are too naive; for example, the penguin diagram may be enhanced by non-calculable QCD effects by a much-larger-than-expected factor.
- And what if it indeed is new physics beyond the Standard Model? This was definitely not the place where theorists most expected new physics to show up. Currently there are almost no models on the market that predict CP violation in D0 decays without violating other constraints. I'm aware of one that uses squark-gluino loops to enhance the penguin; let me know about other examples. This gap will surely be filled in the coming weeks, and I will provide an update once new interesting examples are out.
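To put numbers to the parametric estimate a couple of bullets up, here is a rough order-of-magnitude sketch of my own, assuming Wolfenstein-like magnitudes for the CKM elements (Vus ~ λ, Vcb ~ λ², Vub ~ λ³, Vcs ~ 1, with λ ≈ 0.2) and α_strong ≈ 0.3 at the charm scale:

```python
# Order-of-magnitude estimate of the SM direct CP asymmetry in D decays,
# following the parametric suppression quoted above: (alpha_s/pi) * lambda^4.
from math import pi

lam = 0.2                                # Cabibbo angle
alpha_s = 0.3                            # strong coupling near the charm mass (assumed)
ckm = (lam**2 * lam**3) / (lam * 1.0)    # (Vcb*Vub)/(Vus*Vcs) ~ lambda^4
asym = (alpha_s / pi) * ckm
print(f"A_CP ~ {asym:.1e}")              # ~1.5e-4, inside the quoted 0.01-0.1% range
```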
Friday, 11 November 2011
Double Dare
(There's nothing like a little rant on a holiday morning)
Last Wednesday I noticed this press release from Double Chooz, which announced "the observation of the disappearance of (anti-)neutrinos in the expected flux observed from the nuclear reactor", implying "complementary and important evidence of oscillation also involving the third angle". Wow, I thought, they've nailed down theta13! But it turned out to be much more exciting than just another fundamental parameter. A more careful reading reveals that, based on the first 100 days of data, Double Chooz found sin^2(2 theta13) = 0.085 ± 0.051. Clearly something interesting is going on. To an untrained eye, the Double Chooz result is... consistent with zero; moreover, it is similar, even if slightly less precise, to the null result from MINOS: sin^2(2 theta13) = 0.04 ± 0.04. However, now in the 21st century, one needs a more inspired approach to statistics...
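To quantify "consistent with zero", here is a naive check of my own, assuming Gaussian errors and ignoring any correlations:

```python
# How significant the quoted theta13 measurements are, taken at face value.
dc, e_dc = 0.085, 0.051        # Double Chooz sin^2(2 theta13)
mn, e_mn = 0.040, 0.040        # MINOS

print(dc / e_dc)               # ~1.7 sigma from zero
print(mn / e_mn)               # ~1.0 sigma from zero

# Naive inverse-variance average of the two:
w_dc, w_mn = 1 / e_dc**2, 1 / e_mn**2
avg = (w_dc * dc + w_mn * mn) / (w_dc + w_mn)
err = (w_dc + w_mn) ** -0.5
print(avg, err)                # ~0.057 +- 0.032: still below 2 sigma
```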
To better understand what's going on, go back a few months. In June this year the T2K experiment also issued a press release about theta13, announcing "an indication that muon neutrinos are able to transform into electron neutrino". T2K is an experiment in Japan where a beam of muon neutrinos with GeV energies is produced at J-PARC and sent over a 300 km tunnel ;-) to the SuperKamiokande detector. It is established that muon neutrinos can oscillate into tau neutrinos, the process being governed by the theta23 angle in the MNS neutrino mixing matrix, whose value is close to 45 degrees. If the angle theta13 in that same matrix is non-zero, then the process ν_μ → ν_e is also allowed. For this reason, T2K was searching for the appearance of electron neutrinos in the muon neutrino beam. The T2K announcement was based on the detection of 6 electron neutrino events, versus about 2 expected from background. At the time some of us wondered why they put such a spin on a merely 2.5 sigma excess, given that neutrino experiments had already produced many confusing results with similar or larger significance (LSND, MiniBooNE, later OPERA). After all, neutrino beam experiments are plagued by difficult systematic uncertainties, which are due to our incomplete understanding of the dirty hadronic physics involved in the beam production. Indeed, the subsequent results from MINOS turned out to disfavor the T2K central value of sin^2(2 theta13) of about 0.11.
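For reference, the bare-bones counting significance of those 6 events over 2 expected (my own naive Poisson estimate, ignoring the background uncertainty that the real analysis folds in):

```python
# P(N >= 6) for a Poisson background of 2 events, converted to a one-sided z.
from scipy.stats import norm, poisson

p = poisson.sf(5, mu=2)     # ~1.7e-2
z = norm.isf(p)             # ~2.1 sigma, in the ballpark of the quoted 2.5
print(p, z)
```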
In hindsight, the T2K press release was a pioneering step in data interpretation, and the gauntlet was recently picked up by Double Chooz. The latter experiment is targeting the transformation of anti-electron neutrinos into other types which, at short distances, is also controlled by the theta13 angle. More precisely, Double Chooz is looking for the disappearance of MeV anti-electron neutrinos at a distance of 1 km from the French nuclear reactor Chooz B, where the antineutrinos are produced. They observe a small deficit of events in the energy range 2-5 MeV compared to the no-oscillation hypothesis, see the picture. While T2K was spinning a less-than-3-sigma excess, the Double Chooz press release made a further bold step and presented a less-than-2-sigma one as evidence. There is still a long way to go to adopt the standards used in psychology and behavioral sciences. But, little by little, this approach could be applied to wider areas of physics, especially to high-energy physics, which suffers from a dearth of discoveries. Just think of it: if we could call a 2 sigma excess an indication, then every week the LHC could deliver an indication of new physics!
But, seriously... I also expect that the value of theta13 is non-zero, and the experiments may be seeing the first hint of it. One argument is that global fits to the neutrino oscillation data point to sin^2(2 theta13) ≈ 0.05, about 3 sigma away from zero. Besides, there is no compelling theoretical reason why theta13 should be zero (and if you believe in anarchy, there is a reason to the contrary). The smoke should clear up in the next few years thanks to Double Chooz, Daya Bay, NOvA, and others. However, the current experimental situation is far from conclusive, and the latest Double Chooz results did not change much in this respect, as can be seen in the fit to the right. I guess this could have been said without diminishing the importance of Double Chooz, and without treating the public as retarded...
See the web page of Double Chooz and this post on Quantum Diaries for more details.
Wednesday, 2 November 2011
Experimental success, theoretical debacle
(This post is an attempt to catch up with October subjects that were trendy when I was on leave from blogging, even though I suppose no one cares anymore)
This year's Nobel prizes were by all means exceptional. In blatant disregard of noble traditions, the prize in physics was given for a groundbreaking(!) and recent(!!) discovery, without omitting any of the key contributors(!!!). Indeed, the discovery of accelerated expansion is one of the greatest triumphs of modern science. The measurements of supernovae brightness in the 90s and subsequent experiments have demonstrated that the universe is currently dominated by a form of energy characterized by negative pressure. In fact, this "dark energy" has the properties of the vacuum energy, aka the cosmological constant, first introduced by Einstein for completely wrong reasons. In science, experimental progress usually brings better theoretical understanding. And that's another exceptional thing about the recent Nobel: almost 15 years on, the understanding of the cosmological constant in the context of particle physics models is as good as non-existent.
The cosmological constant problem has been haunting particle physicists for nearly a century now. We know for a fact that all forms of energy gravitate, including the energy contributed by quantum corrections. Thus, we know that diagrams with a graviton coupled to matter loops, like the one in the upper picture, yield a non-vanishing contribution to scattering amplitudes. On the other hand, the sum of very similar diagrams with a graviton coupled to matter loops in vacuum must be nearly zero, otherwise the approximate Minkowski vacuum in which we live would be destabilized. The contribution of the electron loop alone (the lower picture) is about 50 orders of magnitude larger than the experimental limit. On top of that, there should be classical contributions to the vacuum energy, for example from the QCD condensate and from the Higgs potential, which are also naturally tens of orders of magnitude larger than the limit.
The usual attitude in theory is that when something is predicted infinite one assumes it must be zero, and that was a good enough approach before 1998. The discovery of accelerated expansion was a game-changer, because it experimentally proved that the vacuum energy is real and affects the cosmological evolution, so the problem can no longer be swept under the carpet. In fact, the problem is now twofold. Not only do we need to understand why the cosmological constant takes a highly unnatural value from the point of view of the effective low-energy theory (the old cosmological constant problem), but we also need to understand why it is of the same order as the matter energy density today (the coincidence problem).
Neither the first nor the second problem has found a satisfactory solution to date. Not for lack of trying. People have attacked the problem via IR and/or UV modifications of gravity, quintessence fields, self-tuning or attractor solutions, fancy brane configurations in extra dimensions, elephants standing on turtles, space-time wormholes, etc; see also the comment section for crazier examples. In vain: all these solutions either rely on theoretically uncontrollable assumptions, or they just shift the problem somewhere else. The situation remains so dramatic that there are only 2 solutions that are technically correct:
- The anthropic principle: the cosmological constant is an environmental quantity that takes different values in different patches of the universe, however more-or-less intelligent observers can see only those tiny patches where it is unnaturally small.
- The misanthropic principle: the cosmological constant is being adjusted manually by seven invisible dwarfs wearing red hats.
Maybe theory needs another clue that may be provided by one of the future experiments. The Planck satellite will publish an update on cosmological parameters in 2013, although the rumor is that there won't be any revolution. In the asymptotic future there is ESA's Euclid satellite, which will precisely measure the distribution of dark matter and dark energy in the universe. Will I live to see the day when the problem is solved? My bet is no, but I'd love to be proven wrong...
For the best summary of the cc problem, read Section 1 of Polchinski's review.
Thursday, 27 October 2011
What if they don't find the Higgs?
(...No I didn't cut my wrists after the Tevatron shutdown, contrary to what you might have concluded from my blogging history...)
So, the 2011 run of the LHC is coming to a close, I mean the interesting part ;-). A 5 inverse femtobarn stash of data has been collected by each of ATLAS and CMS. These data will be fully analyzed and scrutinized by late winter 2012, while rumors should start popping up on blogs before the end of this year. One thing that is already clear is that new physics did not jump in our faces, which is hardly a surprise. And neither did the Higgs boson, which is more intriguing. Contrary to what I expected, the 2011 data may not yield a conclusive statement about the Higgs: neither a clear-cut discovery nor an exclusion of the entire low-mass range appears likely at this point. We can now at least entertain the option, which as recently as last summer was unthinkable, that the LHC will not find the Higgs particle with the properties predicted by the Standard Model. What then?
First of all, it will be fun to watch the CERN management explaining to the public that *not* discovering the Higgs is a success. For theorists, on the other hand, the best of all worlds will have been granted. In fact, we already have a deck of cards to play for that occasion, each very interesting, as each points to exciting new physics within our reach. Here are the 3 main broad scenarios (not mutually exclusive):
- Higgs exists but has a smaller production cross section.
In the Standard Model the Higgs is produced mostly in gluon fusion, via a loop diagram with top quarks. One can easily imagine new particles meddling in Higgs production via a similar loop process; all they need is a color charge and a significant coupling to the Higgs. Thus, in every major new physics scenario, modifying the Higgs production rate is possible without stretching the parameters too much. One interesting case is the composite Higgs, where the Higgs cross section is almost always suppressed, typically down to 70-90% of the Standard Model value. For experimentalists this is the simplest scenario: all they need to do is sit and wait a bit longer, and the Higgs will eventually show up. The matter should be sorted out after the 2012 data are analyzed.
- Higgs exists but has non-standard decays.
For a low-mass Higgs, around 120 GeV, the main discovery channel is the decay into 2 photons. Again, this is a loop process in the Standard Model, so it's very easy for new physics to modify the branching fraction for that decay. As in the previous case, one may just sit and wait for the Higgs to eventually show up. However, Higgs decays can be modified in a far more dramatic fashion than the production rate. For example, the Higgs may be invisible, that is, decaying into very weakly interacting particles whose only signature is the unbalanced momentum in the event. Or the Higgs may dominantly decay into multiparticle final states (some popular models predict decays to 4 tau leptons or 4 b-quarks) and we'll never see the bastard in the diphoton channel. That would be a very interesting scenario not only for theorists but also for experimentalists, as it would require clever new methods to spot the Higgs on top of the QCD background.
- There is no Higgs.
That would mean that the mechanism of electroweak symmetry breaking is inherently strongly coupled, somewhat resembling the breaking of chiral symmetry in QCD. This is the most challenging scenario for theorists and experimentalists, and the one that may require a lot of patience. In the optimistic case, in the 14 TeV LHC run we will spot a number of resonances analogous to QCD mesons, and little by little we'll understand the structure of the underlying gauge theory. But these resonances may well be too heavy or too wide to be efficiently studied at the LHC. Ultimately, we may need to probe the properties of the scattering amplitudes of W and Z bosons where, according to theory, these strong interactions must leave an imprint. The problem is that such a measurement is very non-trivial in the dirty LHC environment (the SSC or a linear collider would be a different story), so we may need some new theoretical or experimental ideas to make progress. It's probably too early to bet large amounts on this scenario (which is currently disfavored by electroweak precision data), but if no hint of the Higgs is seen by the end of 2012 it will become the most promising direction.
Friday, 30 September 2011
Live from Fermilab: Chronicle of a Death Foretold
2:38 pm: It's over. The heart stopped at 2:38 pm; the last store number was 9158. Good night.
2:37 pm: Helen Edwards all too eagerly pressed the big red button to dump the beam. Soon she will press the big green button to ramp down.
2:35 pm: Stop, Helen. I'm afraid.
2:35 pm: The heart is still beating but the brain is dead: the Tevatron no longer records data.
2:34 pm: ...though I must say that the CDF show was much more entertaining.
2:32 pm: D0 run terminated. They're ramping down.
2:29 pm: Somehow the whole ceremony reminds me of this scene.
2:28 pm: Time for D0, the better of the 2 Tevatron experiments ;-) Bill Lee from the D0 control room.
2:25 pm: The CDF run has been terminated, 2 million events collected. CDF no longer takes data.
2:22 pm: There is now a story of chickenpox children sacrificed at the altar of science. You don't want to know how it ends.
2:16 pm: Ben Kilminster, live from the CDF control room, says that back in 1985 there was only one monitor there. There were also no blogs, Twitter, or Facebook. Clearly there is some progress...
2:15 pm: Soon the detectors will start shutting down. They don't want to watch it...
2:10 pm: Tour of the control room. It looks like a space movie from the 70s, with lots of colored lights blinking.
2:04 pm: It started. Booooo. Pier Oddone, the director of Fermilab, speaking.
2:01 pm: Nothing's happening yet. The stream shows photos of serious faces staring at monitors or parts of the accelerator.
1:57 pm: I wonder what will happen to the buffaloes... Will they all be slaughtered and served at the funeral party in the Wilson Hall atrium?
1:50 pm: Except for the top quark, is the Tevatron going to be remembered for anything? In the coming years its measurements of the top quark and W boson masses will remain the most precise ones - the LHC will have to struggle hard to beat them. Moreover, a number of measurements - especially various production asymmetries - cannot be repeated at the LHC.
1:45 pm: The Tevatron will die today but the ghost will linger on a bit longer. Physics analyses based on the full dataset are expected only in about 5 months, for the winter 2012 Moriond conference. After that the trickle will slow down, but papers and analyses should keep coming out for several more years.
1:40 pm: Streaming of the execution will begin in about 5 minutes.
1:30 pm: Memorial photo of the D0 collaboration in the pit. Not much time left...
1:10 pm: Dismantling of the Tevatron will begin about a week after shutdown, as soon as the superconducting magnets are warmed up to room temperature. The CDF detector will also be shut down today, while D0 will keep operating for 3 more months to get a sample of cosmic events for calibration purposes. I'm not aware of any plans to reuse parts of these detectors for other experiments.
12:50 pm: With the shutdown of the Tevatron, Fermilab is losing its dearest child and its place at the forefront of high-energy physics, but for a while it will remain an important laboratory running smaller-scale experiments. The dark matter detector COUPP or the neutrino experiment MINOS will be producing important results that may even make it to blogs ;-) Construction of Mu2e, an interesting experiment to study lepton flavor violation, will begin in 2013. In the long run, however, the future of Fermilab looks bleak. Most likely it will share the fate of other once-great US labs, like BNL or SLAC: sliding slowly into insignificance.
12:10 pm: One more statistically significant departure from the Standard Model was reported by the Tevatron: the dijet mass bump in W+2j events at CDF. Unfortunately, the effect was not confirmed by D0. It's not clear if this will be sorted out anytime soon...
12:10 pm: The shutdown of the Tevatron should be viewed as part of a bigger program of shutting down fundamental research in the US. It makes sense: since manufacturing could be outsourced to China, there is no reason why research could not be too.
12:05 pm: Here you can see the current status of the accelerator. The luminosity is low but the old chap should make it all the way to the end.
11:45 am: Wonder how the execution will be carried out? In the state of Illinois they do it as follows:
...Helen Edwards, who was the lead scientist for the construction of the Tevatron in the 1980s, will terminate the final store in the Tevatron by pressing a button that will activate a set of magnets that will steer the beam into the metal target. Edwards will then push a second button to power off the magnets that have been guiding beams through the Tevatron ring for 28 years...
I think there should be 3 people, each pressing a button, only one of which is actually connected to the kicker...
11:40 am: It's a beautiful autumn day here at Fermilab, unusually beautiful. Nature refuses to mourn.
11:05 am: The Tevatron has 3-4 more hours to live.
11:00 am: Except for the top asymmetry, one more Tevatron measurement returned a result grossly inconsistent with the Standard Model, namely the dimuon charge asymmetry at D0. Although the interpretation of this result in terms of anomalous CP violation in the B-meson sector has been called into doubt by recent LHCb measurements of related processes, formally the D0 result still stands.
10:45 am: The gravestone is ready even before the actual death:
10:15 am: So, Tevatron Run I got the top quark. Run II, which started in 2001, had 2 major goals: find the Higgs and find new physics. From this perspective one must admit that, \begin{evenif} insert here what a great job was done \end{evenif}, Run II was a disappointment.
9:50 am: Except for the top quark, what were the most important findings of the Tevatron? See the list at Tommaso's blog.
9:30 am: The Tevatron's observation of the anomalous top-antitop forward-backward asymmetry is currently the strongest hint that there may be new physics. The fact that this is the strongest one is not really encouraging ;-)
9:15 am: A bit of nostalgia: a Particle Data Group page from 1996.
9:00 am: The LHC is leading the game in most of the Higgs search channels, but for the moment the Tevatron has a far better sensitivity to a light Higgs boson decaying to a pair of b-quarks. Interestingly, they see no excess in this channel (the excess in the combination comes mostly from the H to WW channel), even though they should if the Higgs is there...
8:40 am: The eulogies have begun. For the next 2 hours I'll listen to the summary of the most important results obtained by the D0 collaboration.
8:30 am: They're still accumulating antiprotons; a sort of life support in case the Tevatron trips before the scheduled time.
8:10 am: The last store of protons and antiprotons has been circulating in the ring since last evening. Current luminosity: 100 μb⁻¹/s, more than 3 times below the peak luminosity. Clearly, the Tevatron is already flatlining.
8:00 am: The Tevatron will go down in history as the place where back in 1995 they discovered the top quark - probably the heaviest elementary particle.
7:50 am: The Tevatron's first beam was in 1983, so he's dying at 28. One year more than Janis Joplin, Jimi Hendrix, Jim Morrison, Kurt Cobain and Amy Winehouse. What's similar is that death comes when the career is already in decline.
7:45 am: I'm wide awake, it's morning at Fermilab. Putting on my best suit and setting off to the funeral. In less than 7 hours the Tevatron will be no more...
Friday, 23 September 2011
The Phantom of OPERA
Those working in science are accustomed to receiving emails starting with "dear sir/madam, please look at the attached file where I'm proving einstein theory wrong". This time it's a tad more serious, because the message comes from a genuine scientific collaboration... As everyone knows by now, the OPERA collaboration announced that muon neutrinos produced at CERN arrive at a detector 700 kilometers away in Gran Sasso about 60 nanoseconds earlier than expected if they traveled at the speed of light (incidentally, trains traveling the same route always arrive late). The paper is available on arXiv, and the video from the CERN seminar is here.
OPERA is an experiment that has had some bad luck in the past. Its original goal was to study neutrino oscillations by detecting the appearance of tau neutrinos in a beam of muon neutrinos. However, due to construction delays its results arrived too late to have any impact on measuring the neutrino masses and mixing; other experiments have in the meantime achieved a much better sensitivity to these parameters. Moreover, the "atmospheric" neutrino mass difference, which enters the probability of a muon neutrino oscillating into a tau one, turned out to be at the lower end of the window allowed when OPERA was being planned. As a consequence, a fairly small number of oscillation events is predicted to occur on the way to Italy, leading to the expectation of about 1-2 tau events recorded during the experiment's lifetime (they were lucky to already get 1). However, they will not walk off the stage quietly. What was meant to be a little side analysis returned the result that neutrinos travel faster than light, confounding the physics community and wreaking havoc in the mainstream media.
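For orientation, the appearance probability behind those 1-2 expected tau events is, in the standard two-flavor approximation (the numbers below are the usual textbook ones, not OPERA's own):

$$P(\nu_\mu\to\nu_\tau)\simeq\sin^2(2\theta_{23})\,\sin^2\!\left(\frac{1.27\,\Delta m^2_{\rm atm}[{\rm eV}^2]\;L[{\rm km}]}{E[{\rm GeV}]}\right)$$

Plugging in Δm²_atm ≈ 2.4·10⁻³ eV², L ≈ 730 km and E ≈ 20 GeV gives a probability at the percent level, which is why only a handful of tau events were ever expected.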
I'm not very original in thinking that the result is almost certainly wrong. The main experimental reason, already discussed on blogs, is the observation of neutrinos from the supernova SN1987A. Back in 1987, three different experiments detected a burst of neutrinos, all arriving within 15 seconds and 2-3 hours before the visible light (which agrees with models of supernova explosions). On the other hand, if neutrinos traveled as fast as OPERA claims, they should have arrived years earlier. Note that the argument that OPERA is dealing with muon neutrinos while supernovae produce electron ones is not valid: electron neutrinos have enough time to oscillate into other flavors on the way from the Large Magellanic Cloud. One way to reconcile OPERA with SN1987A would be to invoke a strong energy dependence of the neutrino speed (it would have to be steeper than Energy^2), since the detected supernova neutrinos are in the 5-40 MeV range, while the energy of the CERN-to-Gran-Sasso beam is 20 GeV on average. However, OPERA does not observe any significant energy dependence of the neutrino speed, so that explanation is unlikely as well.
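Here is the back-of-the-envelope version of that argument, as a minimal sketch (the baseline and distance are round numbers, not the precise values used by the experiments):

```python
# Naive comparison of the OPERA speed excess with the SN1987A timing.
C = 299_792_458.0        # speed of light, m/s

early_s    = 60e-9       # OPERA: neutrinos arrive ~60 ns early
baseline_m = 730e3       # CERN -> Gran Sasso baseline, ~730 km

dv_over_c = early_s * C / baseline_m                  # fractional speed excess
print(f"OPERA implies (v - c)/c ~ {dv_over_c:.1e}")   # ~2.5e-5

# If the same excess applied to the ~168,000 light-year trip from the
# Large Magellanic Cloud, the SN1987A neutrinos would have beaten the
# light by years, rather than arriving a few hours before it:
distance_ly = 168_000
print(f"SN1987A neutrinos would be ~{dv_over_c * distance_ly:.0f} years early")
```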
From the point of view of theory, the chances of the OPERA result being true are no better, as there is no sensible model of tachyonic neutrinos. At the same time, we've been observing neutrinos in numerous experiments and in various settings, for example in beta decay, from terrestrial nuclear reactors, from the Sun, in colliders as missing energy, etc. Each time they seem to behave like ordinary fermions obeying all the rules of local Lorentz-invariant quantum field theory.
We should weigh this evidence against the analysis of OPERA, which does not appear rock solid. Recall that OPERA was conceived to observe tau neutrino appearance, not to measure the neutrino speed, and indeed there are certain aspects of the experimental set-up that call for caution. The most worrying is the fact that OPERA has no way to know the precise production time of a neutrino it detects, as it could have been produced anytime during the 10-microsecond-long proton pulse that creates the neutrinos at CERN. To get around this problem they need a statistical approach. Namely, they measure the time delay of the neutrino arrival in Gran Sasso with respect to the start of the proton pulse at CERN. Then they fit the time distribution to templates based on the measured shape of the proton pulse, assuming various hypotheses about the neutrino travel time. In this manner they find that the best-fit travel time is 60 nanoseconds smaller than what one would expect if the neutrinos traveled at the speed of light. However, one could easily imagine that the systematic errors of this procedure have been underestimated, for example if the shape of the rise and fall-off of the proton pulse has been inaccurately measured. OPERA does a very good job arguing that the distance from CERN to Gran Sasso can be determined to 20 cm precision, or that synchronizing the clocks in these two labs is possible to 1 nanosecond precision, but the systematic uncertainties on the shape of the proton pulse are not carefully addressed (and, during the seminar at CERN, the questions concerning this issue were the ones that confounded the speaker the most).
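To illustrate the statistical gymnastics, here is a toy version of such a template fit (a sketch only: the pulse shape, event count, and timing numbers below are invented, and the real analysis floats many systematic terms on top of this):

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented proton-pulse shape: ~10.5 microseconds long with linear
# 1-microsecond rise and fall (the real shape is measured at CERN).
step = 10.0                                   # time binning in ns
centers = np.arange(0.0, 12500.0, step) + step / 2
shape = np.interp(centers, [0, 1000, 11500, 12500], [0, 1, 1, 0])
pdf = shape / (shape.sum() * step)            # normalized template density

def log_likelihood(delay, arrivals):
    """Log-likelihood of the arrival times for a hypothesized delay (ns)."""
    dens = np.interp(arrivals - delay, centers, pdf, left=0.0, right=0.0)
    return np.sum(np.log(np.clip(dens, 1e-15, None)))

# Simulate events produced uniformly along the pulse but arriving 60 ns
# earlier than the light-speed expectation.
true_delay = -60.0
production = rng.choice(centers, size=16000, p=shape / shape.sum())
arrivals = production + true_delay

# Scan the delay hypothesis; only the pulse edges carry the information,
# which is why the rise and fall-off shapes are so critical.
scan = np.arange(-200.0, 201.0, 1.0)
best = scan[np.argmax([log_likelihood(d, arrivals) for d in scan])]
print(f"best-fit delay: {best:.0f} ns")       # comes out near -60
```

One can check in this toy that distorting the assumed rise and fall of the template biases the fitted delay, which is precisely the worry about the systematics.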
So what's next? Fortunately, OPERA appears to be open to discussion and scrutiny, so the issue of systematic uncertainties should be resolved in the near future. Simultaneously, the MINOS collaboration should be able to repeat the measurement with similar if not better precision, and I'm sure they're already sharpening their screwdrivers. On a longer timescale, OPERA could try to optimize the experimental setting for the velocity measurement. For example, they might install a near detector on the CERN site (where there should be no effect if the current observation is due to neutrinos traveling faster than light, or a similar effect if there is an unaccounted-for systematic error in the production time). Or they could use shorter proton pulses, so that the neutrino production time can be determined without statistical gymnastics (it appears feasible - the LHC currently works with 5 ns bunches). I bet, my private level of confidence being 6 sigma, that future checks will demonstrate that neutrinos are not superluminal... in the end, the Phantom in the original book turned out to be 100% human. But, of course, the ultimate verdict belongs not to our preconceptions but to experiment.
Wednesday, 14 September 2011
Summer's almost gone
The season of the year known as summer conferences is over now. What will follow is probably two quiet months when particle physicists make provisions for winter. The LHC has recently restarted at a higher-than-ever luminosity in a bid to double the 2011 data set. Interesting new results are therefore not expected before November, when the current run ends. All in all, this is a perfect moment for a post of the "where do we come from, where are we going" sort. Here's a summary of the most important events and cultural phenomena of the past summer.
- Higgs Chase
It was like one of those action movies where a fugitive surrounded by hundreds of police with guns and helicopters manages to escape disguised as a hostage. The odds of discovering the Higgs this summer were significant, but the bugger chose its parameters so as to maximize the difficulty of being found. Never mind, next time. A plot from this talk of Bill Murray, which is just an extrapolation of the current search sensitivity to larger data sets, shows that we're very close now to ultimate answers. With 5 inverse femtobarns of data CMS alone should be able to get a 3-4 sigma exclusion even in the worst possible case of a 115 GeV Higgs. ATLAS looks worse on that plot because their pT threshold for detecting leptons is set higher than that of CMS. This fact does not matter much for a moderately heavy Higgs, but for a light one it punishes them (it's then easier to miss the H → WW → 2l 2ν decay, which would produce rather soft leptons). If this can be improved we'll get an even better reach after combining ATLAS and CMS data, close to the CMSx2 line. So by the end of the year we should know much more than today: 5 sigma discovery probably not, 3 sigma hint or exclusion probably yes.
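For what it's worth, the extrapolation in such plots is essentially the statement that, with fixed cuts, the expected significance grows like the square root of the luminosity. A trivial sketch with made-up starting numbers:

```python
import math

# Assumed starting point (illustrative only): a 2 sigma expected
# sensitivity with 2/fb of data; the rest follows from Z ~ sqrt(L).
z_now, lumi_now = 2.0, 2.0

for lumi in (5.0, 10.0, 20.0):
    z = z_now * math.sqrt(lumi / lumi_now)
    print(f"{lumi:4.0f}/fb -> expected ~{z:.1f} sigma")
```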
- Apocryphal Combinations
That was definitely the hit of the summer, on par with James Blunt. Although experts warn about their quality, and although CERN authorities threaten corporal punishment for anyone caught watching one, bootleg combinations of ATLAS and CMS Higgs results are thriving on blogs and even in LHC experimenters' talks. Coming next are apocryphal data analyses and, who knows, maybe apocryphal colliders.
- Conference Revival
During the past decade particle physics conferences were a sad and boring display. With the LHC running at full speed things have changed a lot. At least in theory, a spectacular discovery may now occur at any time, which creates big expectations and excitement around the major conferences. Of course, in the 21st century there is no logical reason to present results at conferences: experimental results could be released as soon as they're ready, and presentations delivered more efficiently via the internet. Nevertheless, one should not neglect the important convivial aspects of conferences, which play a similar role to maypole festivities in pagan societies. Not to mention that conference deadlines provide an efficient whip for PhD students to finish their analyses in time.
- SUSY Scorned
They say you don't kick a man when he's down. Another approach, more popular where I come from, is that it is precisely the best moment, as he has limited options to retaliate. I'm somewhat torn between these two approaches. On one hand, watching someone once noble being tarred and feathered is always delightful. On the other hand, I sympathize with the view that the backlash against SUSY currently unfolding in the mainstream media has no logical grounds. Before the LHC one had to believe in a hundred new particles just around the corner who conspire not to break any of the accidental or approximate symmetries of the Standard Model, such as the baryon, flavor, or CP symmetry, and whose contributions to the electroweak scale accidentally cancel at the 1% level. Now one has to believe in a hundred new particles just around the corner who conspire not to break any of the accidental symmetries of the Standard Model, whose contributions to the electroweak scale accidentally cancel at the 1% level, and who do not produce spectacular signals in the early LHC data. In this sense, the summer 2011 LHC results only infinitesimally changed the situation of supersymmetry.
- Nihil Novi
Unfortunately, it's not only SUSY that is missing; new physics in whatever form obstinately refuses to show up. Not even a rumor these days... Especially disappointing is that the LHC, unlike the Tevatron, does not see any non-standard effects in top physics. Before, I estimated the chances of the LHC discovering new physics at about 50%. Now it is closer to 33%. The moment we discover a Higgs looking roughly like the Standard Model one, these chances will drop face down...
- Sunset of Tevatron
This summer we have watched the Tevatron falling helter-skelter into obsolescence. In most analyses the sensitivity of the LHC is now far superior. There are some notable exceptions though, such as the precision measurements of the top and W masses, light Higgs decays to b-quarks, and the forward-backward asymmetry of top quark production. For these and other reasons, even though the Tevatron's life support will be switched off at the end of this month, the ghost will haunt us a little longer.
To finish with important events of this summer, Resonaances now has a Twitter account. Well, these days every celebrity has one ;-) Disappointingly, you won't learn from it what I had for breakfast, or about my views on the political tensions in southern Uzbekistan. It is going to be a low-traffic twitter limited to announcing new posts on Resonaances, pointing to interesting papers, blog posts and articles elsewhere, and spreading lesser rumors and gossips.
Wednesday, 7 September 2011
Cresstfallen
Until yesterday CRESST was a sort of legend: everybody had heard of them but nobody ever saw them. That is to say, the CRESST excess has been informally discussed for a long time. Moreover, the events from one of the modules displaying an excess in the oxygen band have been shown at conferences for more than a year. However, we were in the dark about the significance of the excess, the backgrounds, and systematic effects. Now CRESST has finally come out with a paper that spells out the excess and provides interesting details.
CRESST is in a way the fanciest of all dark matter experiments. Located in the Gran Sasso underground laboratory, it uses a target of CaWO4 crystals cooled down to 10 millikelvin. When a particle scatters inside the target the deposited energy is converted into phonons and scintillation light, both of which can be detected. The light-to-phonon ratio helps discriminate the dark matter signal from backgrounds; for example, electrons and photons produce mostly light. Furthermore, that ratio depends on the atom of the crystal molecule on which the scattering occurred: it is largest for oxygen, intermediate for calcium, and smallest for tungsten. This leads to characteristic bands in the light yield vs. recoil energy plane that you can see in the plot above, showing events from one of the eight CRESST modules used in this analysis. These bands provide another handle on the signal, as heavy dark matter would show up mostly via scattering on tungsten, while light dark matter would pop up in the oxygen band.
At the same time, CRESST is paying a price for their innovative technology, as they have to deal with incalculable and sometimes unexpected backgrounds. Apart from the usual neutron background and the leakage of e/γ events into the signal region, they had to face α particles and Pb atoms emitted from the clamps holding the crystals, not to mention the exhaust fumes from the nearby DAMA detector. Some of these backgrounds will be reduced in future runs, but for the moment CRESST needs to estimate their contribution to the signal region using a sideband analysis. Having done so, CRESST finds that a fraction (slightly less than half) of the 67 events in the signal region cannot be understood in terms of the known backgrounds. Therefore they study the likelihood of the background-plus-dark-matter-signal hypothesis, assuming vanilla elastic scattering of dark matter on the target. Here is their result for the preferred mass and cross section of the dark matter particle:
The likelihood function has 2 minima, corresponding to 4.7 and 4.2 sigma rejection of the background-only hypothesis. We can safely forget about the deeper one: for those parameters Xenon100, CDMS and Edelweiss would see an elephant in their data. The shallower minimum, where the preferred dark matter mass is 9-15 GeV, also seems excluded by orders of magnitude. It lies, however, in tantalizing proximity to the CoGeNT and DAMA preferred regions; actually, the mass region (though not the cross section) agrees perfectly with the DAMA low-mass region. Some argue that the CDMS and Xenon collaborations grossly overestimate their sensitivity near threshold. This may be imaginable in the case of 5-7 GeV dark matter, where combining experimental and astrophysical uncertainties with some good will and the presumption of innocence one can try to argue that the CoGeNT signal is marginally consistent with the Xenon and CDMS exclusion limits. On the other hand, 10 GeV dark matter would produce observable signals further away from the threshold of these 2 experiments, and it's unlikely it could escape their attention. Therefore, given that CRESST is facing pesky backgrounds very similar to the suspected signal (both in spectral shape and in order of magnitude), the hypothesis of unknown and/or underestimated backgrounds faking the signal is currently the most probable one.
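As a crude cross-check of the quoted significances, one can do counting-only arithmetic (a sketch with an assumed background total; the paper's numbers come from a maximum-likelihood fit over the eight modules, not from counting):

```python
from scipy.stats import norm, poisson

n_obs = 67     # observed events in the acceptance region (from the text)
b_exp = 37.0   # assumed total background expectation, illustrative only
               # ("slightly less than half" of 67 unexplained suggests
               #  a background estimate of very roughly this size)

p_value = poisson.sf(n_obs - 1, b_exp)   # P(N >= 67 | background only)
z_score = norm.isf(p_value)              # one-sided gaussian equivalent
print(f"p = {p_value:.1e}  ->  ~{z_score:.1f} sigma")
```

The result lands in the same ballpark as the quoted 4.2-4.7 sigma, though the agreement is partly accidental given how much the toy leaves out.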
Summarizing, the new CRESST results are welcome and illuminating, but they do not significantly change the landscape of dark matter searches. Clearly, experiment is closing in on IDM; what is not clear is whether that stands for Inelastic or Italian Dark Matter ;-)
See also Lubos, Matt, and again Matt.