Friday, 31 December 2010

2010 Highlights

It's the end of the year, when the blogosphere and the old-fashioned press alike indulge in a nostalgic mood. Here is my list of the most exciting events of the passing year in the field of particle physics. From the year 2010 I remember (in chronological order):
  • CoGeNT, for making us drunk with light dark matter.
    This experiment created the largest stir in theory this year. CoGeNT, a dark matter detection experiment, announced that it could be seeing dark matter with a relatively light mass, around 10 GeV. The dominant paradigm is dark matter at the weak scale, 100 GeV to 1 TeV, but the CoGeNT result made us stop and think about a wider range of theoretical possibilities. Unfortunately, recent exclusion limits from Xenon10, Xenon100, and CDMS make it highly unlikely that CoGeNT is really observing dark matter. Nevertheless, the lesson we have learned is that dark matter does not have to be where everyone is looking.
  • D0, for keeping the hopes for new physics alive.
    Good old Tevatron gave us one very intriguing result this year. The D0 collaboration looked into same-sign di-muon events, and found that events with two negative muons occur 1 percent more often than those with two positive muons. This result can be interpreted as CP violation in the B-meson system: the Bbar-mesons oscillate into B-mesons a bit more often than the other way around. The Standard Model predicts such an effect, but the asymmetry should be 100 times smaller than what is observed. Are new particles contributing to the B-meson mixing? Or did D0 screw up? The jury is still out.
  • PSI, for extending the new physics battlefield into atom spectroscopy.
    The surprise of the year, no doubt about it. A laser spectroscopy experiment at PSI measured the Lamb shift in muonic hydrogen, and found it to be 5 sigma away from the prediction based on theory and earlier experiments with ordinary hydrogen. Given that simple new physics models cannot provide a consistent explanation, and that QED is doing shamelessly well everywhere else, we all expect that some theoretical or experimental error is at the root of this anomaly. But the possibility that some quirky new physics manifests itself here is still hanging in the air.
  • Tevatron, for its tireless Higgs chase.
    The Tevatron also gave us a completely expected yet very cute result. Ten years ago LEP excluded Higgs masses below 115 GeV; now the Tevatron tells us that a Higgs between 156 and 175 GeV is not the right answer either. Combining that with precision electroweak tests, we deduce that the Higgs is cowardly hiding somewhere between 115 and 155 GeV. The poor bastard is thus cornered, and with the LHC joining in the chase he should surrender in no time. Unless he is not there after all...
  • LHC, for the overall impression.
    After a series of setbacks and delays, this year the LHC surprised us, for a change, with a stream of good news. We had been told that the first year would be a total mess, as it should take a long time to understand the detectors well enough to produce meaningful results. Instead, physics results have been delivered basically from day 1, even in difficult channels like jets + missing energy. The LHC has already published several important limits, e.g. on 4-quark operators (gracefully called "bounds on compositeness"), or on high-energy high-multiplicity events (under the sexy name of "limits on black hole production"). And much more is due to arrive for the winter conferences. It's easy to predict that the LHC will make it to the 2011 highlights on Resonaances; the only question is whether I will remember it for "important limits" again, or for crazy new discoveries...

Thursday, 23 December 2010

Is the CKM matrix going to crack?

During the last decade the Standard Model description of flavor transitions has been put to multiple tests, especially in the B-meson sector. The overall agreement between theory and experiment is excellent, much better than what we should expect assuming exotic particles lurking just around the corner. Here and there, however, one finds a few glitches - most likely experimental flukes or underestimated theory errors, but intriguing enough to keep a flicker of hope alive. This year there has been a lot of commotion about the D0 observation of the same-sign di-muon asymmetry, since the Standard Model predicts this effect should be well below the current experimental precision. If the D0 result is confirmed, it would be a clear indication of a new physics contribution to CP violation in the mixing of neutral B-mesons. Another, less publicized 3-sigma blip is the tension between:
  • the CP asymmetry in the Bd meson decay into J/ψ + kaon,
  • the branching fraction of the decay of a charged B meson into a tau lepton and a tau neutrino.
This tension has been around for a while, but below I'll follow a more recent presentation by Lunghi and Soni, who put a slightly different twist on it.

All this fuss is about measuring the entries of the CKM matrix - a 3x3 unitary matrix that is the source of all flavor violation in the Standard Model. See the usual parametrization pasted on the right. The parameters λ and A are well measured in several different ways that yield consistent results. Therefore one is more interested in constraints on the remaining two parameters, called ρ and η. The 2 processes mentioned in the previous paragraph are sensitive to slightly different combinations of these parameters. The B → τν decay proceeds at tree level via an off-shell W boson, so the branching fraction is proportional to $|V_{ub}|^2$, where $V_{ub}$ is the (13) element. Thus, the measurement of this branching fraction carves out a circle in the ρ,η plane. On the other hand, the CP asymmetry in Bd → J/ψ K is due to an interference of tree-level decays and one-loop B-meson mixing, and the final result depends on $\sin(2\beta)$, where $\beta = {\rm Arg}\left[-V_{cd}V_{cb}^*/V_{td}V_{tb}^*\right]$ is one of the angles of the unitarity triangle. This measurement appears as a diagonal line in the ρ,η plane. Now let us see how these two processes combine with several other measurements of ρ and η:
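(For reference, the usual Wolfenstein parametrization reads, in its standard textbook form and glossing over the barred-parameter subtleties:

$$V_{CKM} \simeq \begin{pmatrix} 1-\frac{\lambda^2}{2} & \lambda & A\lambda^3(\rho - i\eta) \\ -\lambda & 1-\frac{\lambda^2}{2} & A\lambda^2 \\ A\lambda^3(1-\rho-i\eta) & -A\lambda^2 & 1 \end{pmatrix} + O(\lambda^4), \qquad \lambda \simeq 0.23 .$$

All of the CP violation sits in the imaginary parts proportional to η, which is why the two measurements above end up as a circle and a line in the same ρ,η plane.)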
The point is that one can reconcile either of the two measurements with the other constraints on ρ,η but accommodating both is difficult. For example, in the upper plot B → τν is included in the fit to ρ and η. That best fit value uniquely predicts Sin(2β), but the result is off from the experimental value by more than 3 sigma. Conversely, if one uses Bd → J/ψ K in the fit, then B → τν is off by almost 3 sigma. The authors prefer the former interpretation because it provides a better overall consistency of the fit. This interpretation is also more plausible from the new physics point of view, since in general it is easier for new physics to compete with Standard Model loop processes than with tree-level processes. Moreover, this way it may go along better with the D0 di-muon anomaly as the latter is also related to B-meson mixing...

Now, how large the tension is clearly depends on the choice of observables going into the fit, as well as on your personal belief in the errors quoted by the various theoretical, experimental, and lattice groups whose results enter the fit. For example, in the similar plots presented by the CKMfitter collaboration the errors are more conservative and the tension is not apparent. Clearly, on tabloid blogs such as Resonaances the aggressive approach is promoted, but one should remember that the cautious approach to flavor anomalies is usually right, at least historically. Asymptotically in the future, the new generation of B-factories (which should go online in the late twenty-tens) will shrink the errors and wipe the floor clean. On a shorter timescale, updates from the Tevatron may clarify or further blur the situation. And then we're dying to see LHCb joining the game sometime next year. But that last one is a perfect subject for a separate post...

Friday, 3 December 2010

Update on Muonic Hydrogen

5 months ago an experimental group at PSI announced a measurement of the Lamb shift in muonic hydrogen. Since the muon is about 200 times heavier than the electron, the muonic hydrogen atom is about 200 times smaller than ordinary hydrogen. Therefore, finite proton size effects are far more pronounced in the former, and end up contributing as much as 2 percent to the Lamb shift. Assuming that the system is adequately described by QED, the PSI result can be interpreted as a new measurement of the size (charge radius) of the proton. The surprise was that the deduced value of the charge radius turned out to be inconsistent at the 5 sigma level with the previous determinations based on the spectroscopy of ordinary hydrogen and electron-proton scattering data. Something is wrong. Either there is an experimental error, or there is an error in the theoretical computations of the Lamb shift, or maybe some new forces are in the game.

Of course, it is the last of the above possibilities that makes the anomaly attractive to hordes of hungry-eyed particle theorists. In fact, it's not the first mysterious result related to the muon: a 3-point-something-sigma anomaly in the muon anomalous magnetic moment has been nudging us for years. It is tempting to speculate that both these muon anomalies have a common explanation in terms of yet unknown fundamental forces. Furthermore, as I explained here, new hidden forces have recently become very popular in particle circles for other, completely unrelated reasons. Yet the arXiv has not been flooded with theory papers on muonic hydrogen, so far. The reason is that it's difficult to write down a new physics model that explains the measured Lamb shift without violating constraints from atomic precision physics. The most painful constraints come from
  • Ordinary hydrogen spectroscopy,
  • Anomalous magnetic moment of the electron,
  • Low energy neutron scattering experiments,
  • Interactions of neutrinos with matter.
These constraints exclude popular models, such as the hidden photon or a new Higgs-like scalar, as the explanation of the anomaly. One could thus conclude that there is nothing interesting here and move on. Or one could adopt a positive attitude, like in last week's paper by David Tucker-Smith and Itay Yavin who, apart from spelling out the difficulties, also propose a solution.

The paper proposes how to shift the energy levels of muonic hydrogen without violating other experimental constraints. The first part is easy: a scalar or vector particle could provide the new attractive force that does the job. One possibility is to take the mass of the new particle to be of order an MeV, and the coupling to muons and protons of order $10^{-4}$ (the contribution to the Lamb shift scales as $g_\mu g_p/m^2$ for m above 1 MeV and as $g_\mu g_p m^2/m_\mu^4$ for m below 1 MeV; thus other choices of the parameters are possible, for example, for a larger mass one would need correspondingly larger couplings). With the couplings and the mass in this ballpark one could also obtain a new contribution to the muon anomalous magnetic moment that resolves the tension with experiment, see the blue band in the plot.
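To get a feel for those numbers, here is a minimal back-of-the-envelope sketch (mine, not the authors'): treat the new boson exchange as a contact interaction that shifts the 2S level, and plug in the benchmark coupling and mass quoted above. The paper's actual computation is of course more careful.

```python
# Rough order-of-magnitude estimate (my own sketch, not from the Tucker-Smith/Yavin paper):
# for m >~ 1/a_Bohr the Yukawa exchange acts like a contact interaction, so the 2S level
# shifts by roughly  dE ~ (g_mu * g_p / m^2) * |psi_2S(0)|^2,  with |psi_2S(0)|^2 = 1/(8*pi*a^3).

import math

alpha = 1 / 137.036                  # fine-structure constant
m_mu, m_p = 105.66, 938.27           # muon and proton masses in MeV
m_red = m_mu * m_p / (m_mu + m_p)    # reduced mass of muonic hydrogen, ~95 MeV

a = 1.0 / (alpha * m_red)            # Bohr radius of muonic hydrogen, in MeV^-1
psi2S_sq = 1.0 / (8 * math.pi * a**3)   # |psi_2S(0)|^2 in MeV^3

g = 1e-4     # coupling to muons and to protons (the benchmark quoted above)
m = 1.0      # new boson mass in MeV (the benchmark quoted above)

dE = (g * g / m**2) * psi2S_sq       # 2S level shift in MeV
print(f"2S shift ~ {dE * 1e9:.2f} meV")   # ~0.1 meV, the right ballpark for a Lamb shift anomaly
```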

Now comes the tricky part, that is addressing other experimental constraints. There are some older muonic-atom experiments, for example the one with Mg and Si, which constrain the couplings of new force carriers to muons and protons. However, they are not inconsistent with the coupling strength needed to explain the muonic hydrogen anomaly. But it seems the new force carrier has to couple to muons and protons and to virtually nothing else. For example, the coupling to electrons has to be at least an order of magnitude smaller than that to muons in order to avoid excessive contributions to the anomalous magnetic moment of the electron. The coupling to neutrons is even more strongly constrained by some prehistoric experiments (from 1966!!! back when England last won the World Cup!!! ;-) involving low energy neutrons scattering on lead atoms. Furthermore, B-factories strongly constrain the couplings to b-quarks, neutrino experiments strongly constrain the couplings to neutrinos, and so on.

It is simple to cook up a model where the coupling of the new force carrier to electrons is suppressed (a particle coupled to mass), or where the coupling to neutrons is suppressed (a particle coupled to charge), but to achieve both at the same time is a model-building challenge. However, this possibility cannot be excluded in a model-independent manner, so it is open to experimental verification. If a new force carrier is the reason for the muonic anomalies, there should be shifts in the spectra of other muonic systems, such as muonic helium or true muonium (a bound state of a muon and an antimuon). Those systems have not been investigated yet, but with present technology they seem to be within reach. So, if you have some free time this weekend you could try to make true muonium and measure its energy levels. Depending on the result, life could get very interesting, or it could go on as usual...

See also here and here to better appreciate the problems with model building. For a fresh review and reevaluation of the standard QED contributions to the muonic hydrogen energy levels, see here.

Monday, 15 November 2010

Wooden Stake for CoGeNT

There is a new interesting paper from CDMS that excludes an important region of the parameter space of dark matter models.

First, a short summary of previous episodes. Earlier this year CoGeNT made a claim of possible detection of dark matter. CoGeNT is a relatively small dark matter experiment using a germanium detector located in the Soudan mine. The spectrum of events they registered during the first months of operation is consistent with scattering of dark matter particles with a mass of order 10 GeV and a cross section on nucleons of order 10^-40 cm2. Dark matter in this mass ballpark could also fit 1) the long-standing DAMA modulation signal, 2) the 2 events observed by CDMS last year, and 3) the oxygen band excess reported by the CRESST experiment. These developments came somewhat unexpectedly to most of us, as the dominant theoretical prejudice would place dark matter at a somewhat heavier scale, 100 GeV or so. Following this prejudice, the majority of dark matter experiments were optimizing their search strategies for weak-scale dark matter, neglecting the light mass region. The typical recoil resulting from a 10 GeV particle scattering in a detector would be too small to pass the threshold set by most experiments. The advantage of CoGeNT is a very low energy threshold, 0.4 keV in ionization energy, translating to about 2 keV true recoil energy. This is the key reason why they could achieve a better sensitivity to light dark matter than the big guys in the detection business, such as the CDMS and Xenon collaborations, whose analysis thresholds had been higher.
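A quick kinematics estimate (my own sketch, using the standard elastic-scattering formula) shows why the threshold matters so much for light dark matter:

```python
# Elastic scattering kinematics (illustration only): a dark matter particle of mass m_chi
# hitting a nucleus of mass m_N deposits a recoil energy E_R <= 2 mu^2 v^2 / m_N,
# where mu is the reduced mass and v the dark matter velocity.

def max_recoil_keV(m_chi_GeV, m_N_GeV, v_km_s):
    """Maximum nuclear recoil energy in keV."""
    c = 3.0e5                                            # speed of light in km/s
    mu = m_chi_GeV * m_N_GeV / (m_chi_GeV + m_N_GeV)     # reduced mass in GeV
    return 2 * mu**2 * (v_km_s / c)**2 / m_N_GeV * 1e6   # GeV -> keV

m_Ge = 67.6   # germanium nucleus mass in GeV (roughly A ~ 73 times the nucleon mass)

print(max_recoil_keV(10, m_Ge, 220))    # 10 GeV particle, typical velocity: ~1 keV recoil
print(max_recoil_keV(10, m_Ge, 750))    # 10 GeV particle, tail of the velocity distribution: ~14 keV
print(max_recoil_keV(100, m_Ge, 220))   # 100 GeV particle, typical velocity: ~25 keV recoil
```

So a 10 GeV particle typically deposits only a keV or two in germanium, right around CoGeNT's threshold and below the analysis thresholds of most other experiments.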

Nevertheless, the big guys didn't despair, but have worked toward improving their sensitivity in the CoGeNT region. First came Xenon100. Using their early data they were able to exclude the region of the parameter space consistent with the CoGeNT signal. The precise extent of their exclusion region is however controversial, because it strongly depends on the poorly measured scintillation efficiency in xenon at low recoil energies, the so-called Leff parameter. Using more conservative assumptions about Leff, some of the CoGeNT parameter space remains allowed. Furthermore, the limits on light dark matter critically depend on certain unknown properties of dark matter, such as its velocity distribution in our galaxy; changing some assumptions could result in an enhanced event rate in a germanium detector as compared to a xenon detector. For all these reasons, the Xenon100 exclusion was not considered conclusive.

Now the situation has been clarified: the CDMS collaboration has recycled their old data so as to improve their sensitivity to light dark matter. They lowered their recoil energy threshold down to 2 keV (as compared to 10 keV in their previous analysis). Lowering the threshold comes at a price, as at such low recoils CDMS cannot use the phonon timing cuts to better differentiate nuclear recoils (expected from dark matter scattering events) from electron recoils produced by all sorts of pesky backgrounds. The discriminating variable that remains available is the ionization yield (nuclear recoils typically produce small ionization, in a well-defined band), but that is not enough to get rid of all the background events, see the plot above. Thus, whereas previous CDMS searches expected less than 1 background event, the new analysis has to deal with hundreds. Still, a dark matter cross section on nucleons consistent with the CoGeNT signal would produce many more events than CDMS has observed. Assuming that all observed events come from dark matter (this is very conservative, as they are able to assign these events to known sources, such as surface events or noise) allows them to set pretty tight limits on the cross section in the low-mass region, see the solid black line in the upper plot. The CoGeNT region (shaded blue) is now comfortably excluded.

CDMS uses the same germanium target as CoGeNT, so even theorists may find it hard to come up with an explanation of how dark matter could produce a signal in one and not in the other. Therefore it seems safe to pronounce the CoGeNT signal dead. Too bad. However, the dark matter detection business regularly produces new entertainment; maybe the much-expected, soon-to-come one-year Xenon100 results will provide us with some?

Monday, 25 October 2010

The die has been RECAST

RECAST is an idea toward a more efficient use of experimental data collected by particle physics experiments. A paper outlining the proposal appeared on ArXiv 2 weeks ago. In order to explain what RECAST is and why it is good I need to make a small detour.

In the best of all worlds, all experimental data acquired by humanity would be stored in a convenient format and could be freely accessed by everyone. Believe it or not, the field of astrophysics is not so far from this utopia. The policy of the biggest sponsors in that field - NASA and ESA - is to require that more or less all data (sometimes in a pre-processed form) are posted some time, typically 1-2 years, after the experiment starts. This policy is followed by such cutting-edge experiments as WMAP, FERMI, or, in the near future, Planck. And it is not a futile gesture: quite a few people from outside these collaborations have made good use of these publicly available data, and more than once maverick researchers have made important contributions to physics.

Although the above open-access approach appears successful, it is not being extended to other areas of fundamental research. There is a general consensus that in particle physics an open-access approach could not work because:
  • bla bla bla,
  • tra ta ta tra ta ta,
  • chirp chirp,
  • no way.
Consequently, data acquired by particle physics collaborations are classified and never become available outside the collaboration. However, our past experience suggests that some policy shift might be in order. Take for example the case of the LEP experiments. Back in the 90s the bulk of experimental analyses was narrowly focused on a limited set of models, and it is often difficult or impossible to deduce how these analyses constrain more general models. One disturbing consequence is that up to this day we don't know for sure whether the Higgs boson was beyond LEP's reach or whether it was missed because it has unexpected properties. After LEP's shutdown, new theoretical developments suggested new possible Higgs signatures that were never analyzed by the LEP collaborations. But now, after 10 years, accessing the old LEP data requires extensive archeological excavations that few are willing to undertake, and in consequence a wealth of valuable information is rotting in the CERN basements. The situation does not appear to be much better at the Tevatron, where the full potential of the collected data has not been explored, and it may never be, either because of theoretical prejudices or simply because of a lack of manpower within the collaborations. Now, what will happen at the LHC? It may well be that new physics will hit us straight in the face, and there will never be any doubt what the underlying model is and what signals we should analyze. But it may not... Therefore, it would be wise to organize the data such that they could be easily accessed and tested against multiple theoretical interpretations. Since open access is not realistic at the moment, we would welcome another idea.

Enter RECAST, a semi-automated framework for recycling existing analyses so as to test them against alternative signals. The idea goes as follows. Imagine that a collaboration performs a search for a fancy new physics model. In practice, what is searched for is a set of final-state particles, say, a pair of muons, jets with unbalanced transverse energy, etc. The same final state may arise in a large class of models, many of which the experimenters would not think of, or which might not even exist at the time the analysis is done. The idea of RECAST is to provide an interface via which theorists or other experimentalists could submit a new signal (simply at the partonic level, in some common Les Houches format). RECAST would run the new signal through the analysis chain, including hadronization, detector simulation and exactly the same kinematical cuts as in the original analysis. Typically, most experimental effort goes into simulating the standard model background, which has already been done by the original analysis. Thus, simulating the new signal and producing limits on the production cross section of the new model would be a matter of seconds. At the same time, the impact of the original analyses could be tremendously expanded.
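As a toy illustration of why recasting is cheap (my own sketch with made-up numbers, not the actual RECAST machinery): once the background estimate and the observed event count of the original analysis are archived, a new model only changes the signal efficiency, and an updated cross section limit follows from a one-line counting experiment.

```python
# Toy version of the final step of a recast (illustrative only; all numbers are invented):
# the background estimate and observed count are frozen from the original analysis,
# only the signal efficiency is recomputed for the newly submitted model.

from scipy.stats import poisson

def cross_section_limit_pb(observed, background, signal_eff, lumi_pb, cl=0.95):
    """Crude Poisson upper limit on a signal cross section (no systematics)."""
    s = 0.0
    # raise the signal mean until observing <= 'observed' events becomes unlikely at the given CL
    while poisson.cdf(observed, background + s) > 1 - cl:
        s += 0.01
    return s / (signal_eff * lumi_pb)    # limit on sigma = s_95 / (efficiency x luminosity)

# invented inputs: 4 events observed, 3 expected from background, 20% efficiency, 0.07 pb^-1
print(cross_section_limit_pb(observed=4, background=3.0, signal_eff=0.2, lumi_pb=0.07), "pb")
```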

There is some hope that RECAST may click with experimentalists. First of all, it does not put a lot of additional burden on the collaborations. For a given analysis, it only requires a one-time effort of interfacing it into RECAST (and one could imagine that at some point this step could be automated too). The return for this additional work would be a higher exposure of the analysis, which means more citations, which means more fame, more job offers, more money, more women... At the same time, RECAST ensures that no infidel hands ever touch the raw data. Finally, RECAST is not designed as a discovery tool, so the collaborations would keep the monopoly on that most profitable part of the business. All in all, lots of profit for a small price. Will it be enough to overcome the inertia? For the moment the only analysis available in the RECAST format is the search for a Higgs decaying into 4 tau leptons performed recently by the ALEPH collaboration. For the program to kick off, more analyses have to be incorporated. That depends on you...

Come visit the RECAST web page and tell the authors what you think about their proposal. See also another report, more in a this-will-never-happen vein.

Monday, 18 October 2010

Maybe all that exists is the standard model...or even less

Throughout the previous decade Gia Dvali argued that there are $10^{32}$ copies of the standard model out there. Now he has made a U-turn and says that there is only one. Or even less. Let me explain.

The reason why we are pretty sure that we are going to observe new phenomena at the LHC goes under the nickname of unitarity of WW scattering. What hides behind this is, technically speaking, that the tree-level scattering amplitude of longitudinally polarized W bosons computed in the standard model without the Higgs particle grows as the square of the scattering energy, and at some point around 1 TeV it becomes inconsistent with unitarity, that is with conservation of probability. In the full standard model this problem is cured: the contribution from Higgs exchange cancels the dangerously growing terms and the full amplitude is well behaved for arbitrarily high energies. A slightly different mechanism is realized in technicolor theories, where the consistent UV behavior of the amplitude is ensured by the exchange of spin-1 resonances.
In spite of 40 years of intensive research we are only aware of these 2 ways of unitarizing the WW amplitude. Thus the LHC should see either the Higgs or new spin-1 resonances. Time will tell which of the 2 possibilities is realized in nature.
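Schematically, with standard textbook numbers (not taken from the Dvali et al. paper discussed below): in the Higgsless standard model the leading partial wave of longitudinal WW scattering grows like

$$a_0(W_L W_L \to W_L W_L) \sim \frac{s}{16\pi v^2}, \qquad v \simeq 246\ {\rm GeV},$$

so the unitarity condition $|a_0| \lesssim 1/2$ fails once $\sqrt{s} \gtrsim \sqrt{8\pi}\, v \approx 1.2$ TeV, which is why something new has to show up around that scale.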

A paper last week by Dvali and co. suggests that there may be a 3rd possibility. The authors conjecture that the standard model without a Higgs and without any other embellishments could be a fully consistent theory, even though it appears to be in conflict with unitarity. They argue that the uncontrolled growth of the WW scattering amplitude is just an artifact of the perturbative approximation, while at the non-perturbative level the theory could be completely sane. The idea is that, as the scattering energy increases above a TeV, the theory defends itself by producing "large" classical configurations during the scattering process. The higher the energy, the larger and more classical the objects we get, which then decay preferentially into many-body (rather than 2-body) final states. This way the 2-to-2 WW scattering remains unitary at energies above a TeV. The authors, somewhat dully, call this mechanism classicalization. To put it differently, as we increase the scattering energy at some point we stop probing the physics at short distance scales; these small distances are screened from external observers, similar in spirit to black holes screening the short distance physics in transplanckian scattering when gravity is in the game.

If this is the case, what would it mean in practice, that is in experiment? Much as in technicolor, at TeV energies the LHC should observe resonances in WW scattering which ensure the unitarity of the perturbative amplitude in the low-energy effective theory. However, as the scattering energy is increased the resonances become more and more classical and spectacularly decay into many-particle final states. There are no new fundamental degrees of freedom at high energies, no new fundamental forces to discover, just the standard model and its non-perturbative classical dynamics.

Now, can this be true? The paper is rather cryptic, and provides few technical details. In this sense it feels like another emergent gravity. What it demonstrates is that in a class of theories that includes the standard model there exist classical solutions whose large distance behavior depends only on how much energy is sourcing them, and whose size grows in a universal way with the energy. The rest seems to be just words, and there is a long way to proving that classicalization can indeed lead to a fully consistent quantum theory. Nevertheless, given the scarcity of ideas concerning electroweak symmetry breaking, there is definitely some philosophical potential in the paper. We'll see whether it leads to something more concrete...

Update: See also Lubos' stance.

Saturday, 16 October 2010

Back in Town

More than 2 months have passed since my last post. Sorry for these perturbations related to changing continents. I'm about to resume blogging after a few changes and adaptations due to my new environment:
  • One is the cute new banner you must have seen already.
  • Furthermore, the name of this blog has been changed from Resonaances to Résonaances.
  • Seriously ;-)
  • All you fellow bloggers, you should update the name in your blog roll, otherwise you risk being sued by La Commission de la Protection de la Langue Française.
  • Mhm, I actually found out that La Commission does not exist anymore, but one never knows...
  • And all you readers, mind that the pronunciation has changed :-)
  • The subsequent posts will have an abstract in French.
  • Joking, of course. French is perfect for flirting, but not as much for talking science.
  • Consequently, the author's name remains Jester; it has *NOT* been changed to Le Bouffon ;-)
The last two months when I was out have been quiet anyway. Dark matter was discovered, again. The Higgs was rumored to be seen at the Tevatron, again. Some unexplained events have been seen at the LHC. Just business as usual. Those latter rumors should be growing exponentially with each picobarn acquired by the LHC; just in case, you know where to ask ;-)

Monday, 9 August 2010

His First Inverse Picobarn

The LHC, more precisely ATLAS, just passed the 1pb-1 milestone:

One inverse picobarn of integrated luminosity is 1/1000 of what is planned for LHC Run I. At the 7 TeV center-of-mass energy this luminosity translates to:
  • 200 000 W bosons,
  • 60 000 Z bosons,
  • 200 top quark pairs,
  • 10-20 Higgs bosons, if the bastard's mass is around 120 GeV,
  • A couple of gluino pairs (Poisson permitting) in a parallel universe where gluinos exist and weigh 500 GeV.

Thursday, 5 August 2010

It's...

A new paper entitled It's on is now out on hep-ph. When particle theorists refer to "it" they don't mean sex, unlike ordinary people. Here "it" stands for the LHC, which is not only "on" but already producing interesting constraints on new physics. In particular, the latest jets + missing energy search performed by ATLAS excludes a new region of the susy parameter space with a light, 150-300 GeV gluino. One can learn two interesting things from the paper:
  1. that with just 70 nb-1 of LHC data one can obtain non-trivial constraints on vanilla susy models that in some cases are more stringent than the existing Tevatron constraints,
  2. and that the susy combat group in ATLAS missed point 1.
A gluino is the fermionic partner of the QCD gluon, as predicted by supersymmetry. A pair of gluinos can be produced for example by colliding 2 gluons. Since the protons circulating in the LHC ring are filled to the brim with gluons, gluinos would pop out like popcorn if only they existed and were light enough. For example, a 200 GeV gluino would be produced at the LHC7 with the stunning cross section of 0.6 nb. Thus, even the small amount of LHC data collected so far could contain a few tens of gluino events. Once produced, gluinos immediately decay to standard model and other susy particles (if they don't then the whole story is completely different, and is not covered by the latest ATLAS search). When the gluino is the next-to-lightest susy particle, with a neutralino being the lightest, it decays via an off-shell squark into 2 quarks and the neutralino. The signature at the LHC is thus a number of high-pT QCD jets (from the quarks) and large missing energy (from the neutralinos, which escape the detector).

There are lots of jet events at the LHC, but fortunately only a small fraction of them is accompanied by large missing energy. In the 70 nb-1 of data, after requiring 40 GeV of missing pT and with some additional cuts on the jets, one finds only four such dijet events, zero 3-jet events, and one 4-jet event. Thus, even a small number of gluinos would have stood out in this sample. The resulting constraints on the gluino vs. neutralino masses are plotted below (the solid black line).
In the region where the mass difference between the gluino and the neutralino is not too large, the LHC constraints beat those from the Tevatron, even though the latter are based on 100,000 times more luminosity! Obviously, the constraints will get much better soon, as the LHC has already collected almost 10 times more luminosity and doubles the data sample every week.
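The logic of the exclusion is simple counting; here is a minimal sketch (the selection efficiency is my own illustrative guess, not a number from the ATLAS note or the paper):

```python
# Back-of-the-envelope gluino yield: expected events = cross section x luminosity x efficiency.

sigma_nb   = 0.6    # pair production cross section for a ~200 GeV gluino at 7 TeV (quoted above)
lumi_nb    = 70.0   # integrated luminosity of the ATLAS search, in nb^-1
efficiency = 0.3    # fraction passing the jets + missing energy cuts (an assumed, illustrative value)

expected = sigma_nb * lumi_nb * efficiency
print(f"expected signal events ~ {expected:.0f}")   # ~13, versus the handful of events seen in data
```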

These interesting constraints were not derived in the original experimental note from ATLAS. Paradoxically, many experimentalists are not enthusiastic about the idea of interpreting the results of collider searches in terms of directly observable parameters such as masses and cross sections. Instead, they prefer dealing with abstract parameters of poorly motivated theoretical constructions such as mSUGRA. In mSUGRA one makes a guess about the masses of supersymmetric particles at a scale $10^{14}$ times higher than the scale at which the experiment is performed, and from that input one computes the masses at low energies. The particular mSUGRA assumptions imply a large mass difference between the gluino and the lightest neutralino at the weak scale. In that narrow strip of parameter space the existing Tevatron searches happen to be more sensitive for the time being.

Saturday, 31 July 2010

CoGeNT dark matter excluded

They say that il n'y a que Paris (there's nothing like Paris). This is roughly true; however, Paris last week was not the best place in France to learn about the latest dark matter news. Simultaneously with ICHEP'10 in Paris, down south in Montpellier there was the IDM conference, where most of the dark matter community was present. One especially interesting result presented there concerns the hunt for light dark matter particles.

Some time ago the CoGeNT experiment noted that the events observed in their detector are consistent with scattering of dark matter particles of mass 5-10 GeV. Although CoGeNT could not exclude that these events are background, the dark matter interpretation was tantalizing because the same dark matter particle could also fit (with a bit of stretching) the DAMA modulation signal and the oxygen band excess from CRESST.

The possibility that dark matter particles could be so light caught experimenters with their trousers down. Most current experiments are designed to achieve the best sensitivity in the 100 GeV - 1 TeV ballpark, because of prejudices (weak scale supersymmetry) and some theoretical arguments (the WIMP miracle), even though certain theoretical frameworks (e.g. asymmetric dark matter) predict dark matter sitting at a few GeV. In the low mass region the sensitivity of current techniques rapidly decreases. For example, experiments with xenon targets detect scintillation (S1) and ionization (S2) signals generated by particles scattering in the detector. Measuring both S1 and S2 ensures very good background rejection; however, the scintillation signal is the main showstopper to lowering the detection threshold. Light dark matter particles can give only a tiny push to the much heavier xenon atoms, and the experiment is able to collect only a few, if any, of the resulting scintillation photons. On top of that, the precise number of photons produced at low recoils (described by the notorious Leff parameter) is poorly known, and the subject is currently fiercely debated with knives, guns, and replies-to-comments-on-rebuttals.

It turns out that this debate may soon be obsolete. Peter Sorensen in his talk at IDM argues that xenon experiments can be far more sensitive to light dark matter than previously thought. The idea is to drop the S1 discrimination and use only the ionization signal. This allows one to lower the detection threshold down to ~1 keVr (it is of order 10 times higher when S1 is included) and gain sensitivity to light dark matter. Of course, dropping S1 also increases the background. Nevertheless, thanks to self-shielding, the number of events in the center of the detector (blue triangles on the plot above) is small enough to allow for setting strong limits. Indeed, using just 12.5 days of aged Xenon10 data, a preliminary analysis shows that one can improve on existing limits on the dark-matter-nucleon scattering cross section in the 5-10 GeV mass interval. Most interestingly, the region explaining the CoGeNT signal (within red boundaries) seems to be excluded by far. Hopefully, the bigger and more powerful Xenon100 experiment will soon be able to set even more stringent limits. Unless, of course, they find something there...

Monday, 26 July 2010

Higgs still at large

Finally, the picture we were dying to see:
The Tevatron now excludes the standard model Higgs for masses between 156 and 175 GeV. The exclusion window has widened considerably since the last combination. Together with the input from direct Higgs searches at LEP and from electroweak precision observables, it means that the Higgs is most likely hiding somewhere between 115 and 155 GeV (assuming the Higgs exists and has standard model properties). We'll get you, bastard, sooner or later.

One interesting detail: the Tevatron can now exclude a very light standard model Higgs, below 110 GeV. Just in case LEP screwed up ;-) Hopefully, the Tevatron will soon start tightening the window from the low mass side.

Another potentially interesting detail: there is some excess of events in the $b \bar b$ channel where a light Higgs could possibly show up. The distribution of the signal-to-background likelihood variable (which is some inexplicably complicated function that mortals cannot interpret) has 5 events in one of the higher s/b bins, whereas only 0.8 are expected. This cannot be readily interpreted as the standard model Higgs signal, as that should also produce events at higher s/b, where there are none. Most likely the excess is a fluke, or maybe some problem with background modeling. But it could also be an indication that something weird is going on that does not quite fit the standard model Higgs paradigm. Maybe the upcoming Tevatron publications will provide us with more information.

More details in the slides of the ICHEP'10 talk by Ben Kilminster.

Sunday, 25 July 2010

Monday at ICHEP

This Monday at ICHEP there will be a plenary talk by Nicolas Sarkozy. Like all theorists I'm looking forward to it, as he knows much more about models than we do. You can watch the webcast here, at high noon Paris time.

Saturday, 24 July 2010

D0 says: neither dead nor alive

This year CP violation in the Bs meson system has made the news, including BBC News and American Gardener. The D0 measurement of the same-sign dimuon asymmetry in B decays got by far the largest publicity. Recall that Tevatron's D0 reported a 1 percent asymmetry at the 3.1 sigma confidence level, whereas the standard model predicts a much smaller value. The result suggests a new source of CP violation, perhaps new heavy particles that we could later discover at the LHC.

The dimuon asymmetry is not the only observable sensitive to CP violation in the Bs system. Another accessible observable is the CP violating phase in time-dependent Bs decays into the J/ψ φ final state. In principle, the dimuons and J/ψ φ are 2 different measurements that do not have to be correlated. But there are theoretical arguments (though not completely bullet-proof) that a large deviation from the standard model in one should imply a large deviation in the other. This is the case, in particular, if new physics enters via a phase in the dispersive part of the Bs-Bsbar mixing amplitude ($M_{12}$, as opposed to the absorptive part $\Gamma_{12}$), which is theoretically expected if the new particles contributing to that amplitude are heavy. The previous, 2-year-old combination of the CDF and D0 measurements displayed an intriguing 2.1 sigma discrepancy with the standard model. CDF updated their result 2 months ago and, disappointingly, the new result is perfectly consistent with the standard model. D0 revealed their update today in an overcrowded room at ICHEP. Here is their new fit to the CP violating phase vs. the width difference of the 2 Bs mass eigenstates
Basically, D0 sees the same 1.5-sigma-ish discrepancy with the standard model as before. Despite twice the statistics, the discrepancy is neither going away nor decreasing, leaving us children in the dark. Time will tell whether D0 has found hints of new sources of CP violation in nature, or merely hints of complicated systematic effects in their detector.

Friday, 23 July 2010

European Tops at Last!

Today at ICHEP CMS and ATLAS showed their first top candidate events. They see events in both the semileptonic and dileptonic channels, with muons and electrons in all combinations. Here is one event display in the mu+jets+missing energy channel provided by CMS:

The reconstructed top mass from this event is around 210 GeV, while the latest measurement of the top quark mass from the Tevatron is 173.1 GeV. This is very surprising - naively, one would expect the American top quarks to be heavier ;-)

See more events from Atlas and CMS.

Wednesday, 21 July 2010

Working for a Paycheck

ICHEP'10 is starting tomorrow in Paris. As I told you the other day, I was hired to blog on the highlights of the conference. So for the entire next week I'm planning to scribble a couple of posts per day - an unusual and probably lethal frequency for a lazy blogger accustomed to writing once a month. I guess I will copy&paste the most interesting posts here to Resonaances, but if you're interested in my entire discography you should check out the official ICHEP blog. A bunch of other good fellows are writing there, so it should be fun.

Friday, 16 July 2010

Muonic Hydrogen and Dark Forces

The measurement of the Lamb shift in muonic hydrogen has echoed on blogs and elsewhere. Briefly, an experiment at the Paul Scherrer Institute (PSI) measured the energy difference between the 2S(1/2) and 2P(3/2) energy levels of an atom consisting of a muon orbiting a proton. Originally, this exercise was intended as a precise determination of the charge radius (that is, the size) of the proton: in muonic hydrogen the finite proton size effect can shift certain energy levels by order one percent, much more than in ordinary hydrogen, while other contributions to the energy levels are quite precisely known from theory. Indeed, the PSI measurement of the proton charge radius is 10 times more precise than previous measurements based on the Lamb shift in ordinary hydrogen and on low-energy electron-proton scattering data. Intriguingly, the new result is inconsistent with the previous average at the 5 sigma level.

As usual, when an experimental result is inconsistent with the standard model prediction the most likely explanation is an experimental error or a wrong theoretical calculation. In this particular case the previous experimental data on the proton charge radius do not seem to be rock-solid, at least to a casual observer. For example, if the charge radius is extracted from electron–proton scattering the discrepancy with the PSI measurement becomes only 3.1 sigma;
the PSI paper also quotes another recent measurement that is completely consistent with their result within error bars.

In any case, whenever a discrepancy with the standard model pops up, particle theorists cannot help thinking about new physics explanations. Our folk is notorious for ambulance chasing, but actually this is one of those cases when the ambulance is coming straight at us. Recently the particle community has invested a lot of interest in studies of light, hidden particles very weakly coupled to ordinary matter. One example is the so-called dark photon: an MeV-GeV mass particle with milli-charge couplings to electrons and muons. This idea is pretty old, but in the past 2 years the interest in dark photons has been boosted because their existence could explain certain astrophysical anomalies (PAMELA). The signals of dark photons and other hidden particles are now being searched for at the Tevatron, the LHC, B-factories, and in dedicated experiments such as ALPS at DESY, or APEX, which is just kicking off at JLab. No signal has been found in these experiments yet, but there is still a lot of room for the dark photon as long as its coupling to electrons and muons is a factor $\epsilon \lesssim 10^{-3}$ smaller than that of the ordinary photon, see the picture borrowed from this paper. The news of the muonic Lamb shift came somewhat unexpectedly... but not to everyone: here is a passage from a 2-year-old paper:
For example, the dark photon contribution to the electron-proton scattering amplitude at low momenta is equivalent to the $6 \epsilon^2 /m_A^2$ correction to the proton charge radius (...) It remains to be seen whether other precision QED tests (e.g. involving muonic atoms) would be able to improve on the current constraints.
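To put rough numbers on the correction quoted above (my own back-of-the-envelope, with illustrative parameter values and signs/conventions glossed over):

```python
# Interpret 6*eps^2/m_A^2 as a shift of the proton charge radius squared and compare it
# to the gap between the muonic-hydrogen and the older determinations of r_p.

hbar_c_fm = 0.1973      # GeV*fm, to convert GeV^-1 into fm

eps = 1e-3              # dark photon coupling, roughly the largest value still allowed
m_A = 0.1               # dark photon mass in GeV (an illustrative value in the MeV-GeV range)

delta_r2 = 6 * eps**2 / m_A**2 * hbar_c_fm**2     # in fm^2
print(f"shift of r_p^2 ~ {delta_r2:.1e} fm^2")    # ~2e-5 fm^2

# the discrepancy itself: r_p ~ 0.88 fm (older average) vs ~0.84 fm (PSI)
print(f"observed gap  ~ {0.88**2 - 0.84**2:.2f} fm^2")   # ~0.07 fm^2, orders of magnitude larger
```

For these parameter values the vanilla dark photon falls far short of the observed gap, which is one way of seeing why the model building here is not straightforward.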
So here we are. In the coming weeks we should see whether there exist concrete models capable of fitting all data. In any case, a new front in the battle against dark forces has just been opened. Now, could someone make us a muonium?

Wednesday, 26 May 2010

CDF says: calm down everybody

Physics beyond the standard model has its ups and downs. Ups like mountains in the Netherlands, and downs like the Marianas Trench. Whenever something exciting seems to happen it's the telltale sign that a really big hammer is about to come down.

Last week the D0 experiment at the Tevatron presented a new measurement of the same-sign dimuon charge asymmetry in B-meson decays. This asymmetry probes CP violation in B-mesons, including the $B_s$ mesons that have been less precisely studied than their $B_d$ friends and may still hold surprises in store. D0 claimed that their measurement is inconsistent with the standard model at the 3.2 sigma level and hints at a new physics contribution to $B_s \bar B_s$ mixing. 3-sigma anomalies in flavor physics are not unheard of, but in this case there were reasons to get excited. One was that the $B_s$ system is a natural place for new physics to show up, because the standard model contribution to the CP-violating mixing phase is tiny, and theoretical predictions are fairly clean. The other reason was that the D0 anomaly seemed to go along well with earlier measurements of CP violation in the $B_s$ system. Namely, the measurement of the $B_s$ decay to $J/\psi \phi$ displayed a 2.1 sigma discrepancy with the standard model, and some claimed the discrepancy is even higher when combined with all other flavor data. In other words, all measurements (except for $B_s \to D_s \mu X$, which however has a larger error) of the phase in $B_s \bar B_s$ mixing consistently pointed toward new physics.

Not any more. Two days ago D0's rival experiment CDF presented crucial new results at the FPCP conference - a major sabbath of the flavor community. CDF repeated the measurement of CP violation in $B_s \to J/\psi \phi$ on a larger data sample of 5.2 inverse femtobarns, that is with twice the statistics of the previous measurement. And they see nothing: the result is consistent with the standard model at the 0.8 sigma level.
So at this moment only one experiment claims to see an anomaly in the $B_s$ system, while another measurement of the $B_s \bar B_s$ mixing phase is perfectly consistent with the evil, corrupted standard model. The most likely hypothesis is that D0's result is a fluke and/or systematic uncertainties have been underestimated. Of course, further measurements of the mixing phase may bring another twist to the story... well, I don't sound convincing, do I ;-)

Saturday, 22 May 2010

Meanwhile at the LHC

For the time being the most interesting physics results arrive from the Tevatron, as we were reminded this week by D0's announcement. The LHC cannot compete yet, but it's steadily working its way toward becoming the leader sometime next year. According to the latest report, things are going pretty smoothly. So far the peak luminosity is $6\times 10^{28}\,{\rm cm^{-2}s^{-1}}$ (corresponding to roughly an inverse picobarn per year), and the goal for the present run is to increase it by a factor of a thousand. Currently the machine people are working on increasing the number of protons in the bunches up to the nominal value of $\sim 10^{11}$. This step alone should allow them to reach $2\times 10^{29}\,{\rm cm^{-2}s^{-1}}$ with just 2 bunches circulating in the LHC ring. After that, they will progressively add more and more bunches to the beam.
For the moment, the acquired luminosity is around 10 inverse nanobarns per experiment. This means that CMS and ATLAS have already collected almost 1000 W bosons (85 nanobarn cross section), hundreds of Z bosons (25 nanobarn cross section), and a few top quark pairs, Poisson permitting (0.2 nanobarn cross section). ATLAS now shows on its public pages the first event displays with leptonically decaying Z bosons. The one reproduced above features a beautiful Z decaying into electrons (the two blobs in the electromagnetic calorimeter). Meanwhile, CMS has had no new events on its public pages since the first collisions on March 30. The only logical explanation is that a giant octopus has eaten the detector together with the entire collaboration. Otherwise, if they had anything to share they would share it... or wouldn't they ;-)
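For the record, the arithmetic behind those counts is just N = σ × L plus Poisson statistics; a minimal sketch (my own, using the cross sections quoted above):

```python
import math

lumi_nb = 10.0                                     # ~10 inverse nanobarns per experiment
xsec_nb = {"W": 85.0, "Z": 25.0, "ttbar": 0.2}     # cross sections quoted above, in nb

for name, sigma in xsec_nb.items():
    mean = sigma * lumi_nb                         # expected number of events
    p_one = 1 - math.exp(-mean)                    # Poisson probability of seeing at least one
    print(f"{name}: expect ~{mean:.0f} events, P(>=1) = {p_one:.2f}")

# and the machine luminosity quoted above: 6e28 cm^-2 s^-1 over ~1e7 seconds of running
print(6e28 * 1e7 / 1e36, "pb^-1, i.e. roughly an inverse picobarn per year")
```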

Monday, 17 May 2010

New Physics Claim from D0!

Tevatron not dead, or so it seems. Although these days all eyes are turned to the LHC, the old Tevatron is still capable of sending the HEP community into an excited state. Last Friday the D0 collaboration presented results of a measurement suggesting the standard model is not a complete description of physics at colliders. The paper is out on arXiv now.

The measurement in question concerns CP violation in B-meson systems, that is quark-antiquark bound states containing one b quark. Neutral B-mesons can oscillate into their own antiparticles, and the oscillation probability can violate CP (much as it happens with kaons, although the numbers and the observables are different). There are two classes of neutral B-mesons: $B_d$ and its antiparticle $\bar B_d$, where one bottom quark (antiquark) marries one down antiquark (quark), and $B_s,\bar B_s$ with the down quark replaced by the strange quark. Both these classes are routinely produced in Tevatron's proton-antiproton collisions in roughly fifty-fifty proportions, unlike at B-factories where mostly $B_d,\bar B_d$ have been produced. Thus, the Tevatron provides us with complementary information about CP violation in nature.

There are many final states where one can study B-mesons (far too many, that's why B-physics gives stomach contractions). The D0 collaboration focused on the final states with 2 muons of the same sign. This final state can arise in the following situation. A collision produces a $b \bar b$ quark pair which hadronizes to B and $\bar B$ mesons. Bottom quarks can decay via charged currents (with a virtual W boson), and one possible decay channel is $b \to c \mu^- \bar \nu_\mu$. Thanks to this channel, the B meson sometimes (with roughly 10 percent probability) decays to a negatively charged muon, $B \to \mu^- X$, and analogously, the $\bar B$ meson can decay to a positively charged antimuon. However, due to $B \bar B$ oscillations B-mesons can also decay to a "wrong sign" muon: $B \to \mu^+ X$, $\bar B \to \mu^- X$. Thus oscillations allow the $B, \bar B$ pair to decay into two same-sign muons a fraction of the time.
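Schematically (standard mixing combinatorics, not a formula from the D0 paper): if χ denotes the probability that a neutral B meson has oscillated into its antiparticle by the time it decays, and both B's decay semimuonically, then

$$P(\mu^\pm \mu^\pm) \propto 2\,\chi(1-\chi), \qquad P(\mu^+ \mu^-) \propto \chi^2 + (1-\chi)^2,$$

so a non-negligible fraction of semimuonic $B \bar B$ pairs ends up with same-sign muons; the difference between the $++$ and $--$ rates is then the CP-sensitive part.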

Now, in the presence of CP violation the $B \to \bar B$ and $\bar B \to B$ oscillation processes occur with different probabilities. Thus, even though at the Tevatron we start with the CP symmetric initial state, at the end of the day there can be slightly more -- than ++ dimuon final states. To study this effect, the D0 collaboration measured the asymmetry
$A_{sl}^b = \frac{N_b^{++} - N_b^{--}}{N_b^{++} + N_b^{--}}$.
The standard model predicts a very tiny value for this asymmetry, of order $10^{-4}$, which is below the sensitivity of the experiment. This is cool, because a mere observation of the asymmetry provides evidence for contributions of new physics beyond the standard model.

The measurement is not as easy as it seems because there are pesky backgrounds that have to be carefully taken into account. The dominant background comes from ubiquitous kaons or pions that can sometimes be mistaken for muons. These particles may contribute to the asymmetry because the D0 detector itself violates CP (due to budget cuts the D0bar detector made of antimatter was never constructed). In particular, the kaon K+ happens to travel further than K- in the detector material and may fake a positive value of asymmetry. We have to cross our fingers that D0 got all these effects right and carefully subtracted them away. At the end of the day D0 quotes the measured asymmetry to be
$A_{sl}^b = -0.00957 \pm 0.00251(stat) \pm 0.00146 (syst)$,
that is, the number of produced muons is larger than the number of produced antimuons, with the significance estimated to be 3.2 sigma. The asymmetry is some 100 times larger than the value predicted by the standard model!
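As a sanity check of the quoted significance, here is the simplest version of the arithmetic (my own, adding the errors in quadrature; the collaboration's estimate is of course more careful):

```python
a, stat, syst = -0.00957, 0.00251, 0.00146

total_err = (stat**2 + syst**2) ** 0.5
print(f"total error   = {total_err:.5f}")            # ~0.0029
print(f"|A| / error   = {abs(a) / total_err:.1f}")   # ~3.3, in the ballpark of the quoted 3.2 sigma
```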

Of course, it's too early to start dancing and celebrating the downfall of the standard model, as in the past the bastard has recovered from similar blows. Yet there are reasons to get excited. The most important one is that the latest D0 result goes hand in hand with the anomaly in the $B_s$ system reported by the Tevatron 2 years ago. The asymmetry measured by D0 receives contributions from both $B_s$ and $B_d$ mesons. The $B_d$ mesons are much better studied because they were produced by the ton at BaBar and Belle, and to everyone's disappointment they were shown to behave according to the standard model predictions. However, BaBar and Belle didn't produce too many $B_s$ mesons (their beams were tuned to the Upsilon(4s) resonance, which is a tad too light to decay into $B_s$ mesons), and so the $B_s$ sector can still hold surprises. Two years ago CDF and D0 measured CP violation in $B_s$ decays into $J/\psi \phi$, and they both saw a small, 2-sigma level discrepancy from the standard model. When these 2 results are combined with all other flavor physics data, it was argued, the discrepancy becomes more than 3 sigma. The latest D0 result is another strong hint that something fishy is going on in the $B_s$ sector.

Both the old and the new anomaly prompt introducing into the fundamental Lagrangian a new effective four-fermion operator that contributes to the amplitude of $B_s \bar B_s$ oscillations:
$\mathcal{L}_{\rm new\ physics} \sim \frac{c}{\Lambda^2}(\bar b s)^2 + {\rm h.c.},$
with a complex coefficient $c$ and the scale $\Lambda$ in the denominator of order 100 TeV. At this point there are no hints from experiment as to what could be the source of this new operator, and the answer may even lie beyond the reach of the LHC. In any case, in the coming weeks theorists will derive this operator using extra dimensions, little Higgs, fat Higgs, unhiggs, supersymmetry, bricks, golf balls, and old tires. Yet the most important question is whether the asymmetry is real, and we're dying to hear from CDF and Belle. There will be more soon, I hope...

Thursday, 13 May 2010

Official ICHEP what???

Yes, what they say is true: ICHEP2010 has launched an official blog to cover the conference and signed up the cream of the blogosphere (including John Conway, Tommaso Dorigo, Michael Schmitt). This is going to be an interesting experiment. ICHEP is a biennial series of conferences with a long tradition, probably the largest event in the field of high-energy physics. Blogging, on the other hand, is by many considered a subversive activity to which the most appropriate response is malleus maleficarum. ICHEP's initiative might be the first attempt on this scale to bring together these old and new channels of scientific communication. We'll see what happens...

So, I will be a part of it too (even if one might have expected they would pay me for *not* blogging about ICHEP, given my reputation ;-) July is going to be fun.

Saturday, 1 May 2010

More dark entries

I have another bucketful of dark matter news and gossip, some market fresh, some long overdue. Let me bullet it out, even if each item may deserve a separate post.
  • The Xenon100 experiment in Gran Sasso - currently the most sensitive dark matter detection experiment on Earth - is up and running. The results from a short 11-day run in November last year were presented at the WONDER2010 conference a month ago. The signal region where nuclear recoils are supposed to appear is below the blue line. As you can see, the bastards really have zero background events. Even this small amount of data allows them to set limits on the dark matter-nucleon cross section comparable to those obtained by CDMS after many months of running. The experiment has been continuously taking data since January and the plan is to run for an entire year. As of today they have roughly 10 times more data on tape, but it's not yet clear when the new chunk will be unblinded and analyzed. Can't wait.
  • Xenon100 can take their time because their direct competitors are falling like flies. LUX, a US-based experiment that relies on practically the same technology, is stranded until at least next year waiting for its underground cavern to be ready. WARP, a similar experiment next door in Gran Sasso but filled with argon rather than xenon as the target, was aborted last year due to an electrical failure. The latest (unconfirmed) rumor is that XMASS - a 1-ton xenon dark matter experiment in Japan - has been downed due to a simple engineering error. New York City psychics whisper in terror about dark ectoplasm currents sourced somewhere in northern Manhattan.
  • Back to Gran Sasso. CRESST's presentation at WONDER2010 devoted 1 slide to wild speculations about their latest unpublished results on dark matter detection. CRESST uses CaWO4 crystals as the target, and detects scintillation light and phonons to sort out the signal of dark matter recoiling on the nuclei making up the crystal. The cool thing about the experiment is that using the light-to-phonon ratio they can to some extent tell whether a nuclear recoil occurred on tungsten or on oxygen. In the tungsten (blue) band, where weak scale dark matter is expected to show up first, there are almost no events. But in the oxygen band (reddish) there is something weird going on. Of course, most likely this is some sort of background that the collaboration has not pinned down yet. But another possible interpretation is that the dark matter particle is very light, so that it barely budges the heavy tungsten nuclei but can still give a kick to the much lighter oxygen nuclei. Furthermore, the slide mentions that the event rate in the oxygen band displays a hint of the annual modulation expected from dark matter scattering. Curiouser and curiouser...
  • ...especially if the CRESST data are viewed from a somewhat different angle. Juan Collar, apart from being a guest-blogger, has a daytime job at CoGeNT - another dark matter experiment that has recently seen hints of light dark matter particles. A few weeks ago, during a workshop in New York, Juan flashed the following plot (Content Warning: the plot below makes respectable physicists shout obscenities):
    These are the CRESST data from the tungsten band plotted as a differential recoil spectrum. Naively, the spectrum fits the one expected from light dark matter particles of mass approximately 10 GeV, which is the same ballpark that also fits the CoGeNT data!
  • The situation could be clarified by the CDMS experiment. Although they have finished data-taking, they are sitting on a large amount of data collected by their silicon detectors, of which only a part has been analyzed and made public (their most recently published limits are based on data from the germanium detectors). Silicon is a fairly light element (A=28) and therefore more suitable than germanium for studying light dark matter. Thus CDMS has the potential to exclude the light dark matter interpretation of the CoGeNT and CRESST signals; unfortunately, this does not seem to be their priority right now. CRESST itself should release a full-fledged analysis of their data soon, which should provide us with more solid information. However, CRESST at this point is not a background-free experiment, so in the near future we should expect a wilderness of mirrors rather than clear-cut answers. In other words, more rumors ahead :-)
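To see why a very light particle could populate the oxygen band while leaving the tungsten band empty, here is a minimal kinematics sketch in Python. The 10 GeV mass and the ~780 km/s maximum WIMP speed in the lab frame are round illustrative numbers of my own, not the collaboration's parameters: the fastest halo WIMPs can transfer some 30 keV to an oxygen nucleus, but well under 10 keV to tungsten.

```python
# A minimal kinematics sketch: the largest recoil energy an elastic WIMP of
# mass m_chi can deposit on a nucleus of mass number A, assuming the fastest
# halo WIMPs move at ~780 km/s in the lab frame (escape speed plus the Earth's
# motion). All numbers are illustrative assumptions, not experimental values.

AMU_GEV = 0.9315          # nucleon mass unit in GeV
C_KM_S  = 3.0e5           # speed of light in km/s

def e_recoil_max_keV(m_chi_GeV, A, v_max_km_s=780.0):
    """Maximum recoil energy (keV) in elastic WIMP-nucleus scattering."""
    m_N  = A * AMU_GEV                             # nuclear mass in GeV
    mu   = m_chi_GeV * m_N / (m_chi_GeV + m_N)     # reduced mass in GeV
    beta = v_max_km_s / C_KM_S                     # WIMP speed in units of c
    return 2.0 * mu**2 * beta**2 / m_N * 1e6       # GeV -> keV

for name, A in [("oxygen", 16), ("calcium", 40), ("tungsten", 184)]:
    print(f"{name:8s} (A={A:3d}): E_R^max ~ {e_recoil_max_keV(10.0, A):5.1f} keV")
```

With a detection threshold of order 10 keV, such a particle would show up on oxygen (and marginally on calcium) while producing no visible tungsten recoils, which is exactly the pattern the CRESST slide hints at.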
Update: The paper with the first Xenon100 results is now out on arXiv. The analysis challenges the dark matter interpretation of the CoGeNT data. As you can see on the plot, the region of parameter space favored by CoGeNT is excluded by Xenon100 at 90% confidence level. One should note, however, that these limits strongly depend on the quenching factor in xenon (that is, how much of the recoil energy gets converted into light). Different experimental measurements of that quenching factor point to different trends at low recoil energies (see fig.1 in the Xenon100 paper), which leaves some wiggle room.

Update #2: Just 2 days later Xenon100 gets a smackdown. A new paper by Collar and McKinsey casts doubt on whether Xenon100 has any sensitivity at all to light dark matter particles consistent with the CoGeNT signal. As already hinted, Xenon100's assumptions about the quenching factor at low energies are controversial. Another assumption that is questioned concerns the distribution of the number of photoelectrons near threshold (a toy illustration of this point follows the quote):
...limits depend critically (...) on the assumption of a Poisson tail in the modest number of photoelectrons that would be generated by a light-mass WIMP above detection threshold (...). We question the wisdom of this approach when the mechanisms behind the generation of any significant amount of scintillation are still unknown and may simply be absent at the few keVr level. To put it bluntly, this is the equivalent of expecting something out of nothing.
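To put toy numbers behind this criticism: near threshold, the claimed acceptance for light WIMPs rests entirely on upward Poisson fluctuations of a mean light yield that is itself below the cut. The ~1 photoelectron mean and the 4-photoelectron cut below are my own illustrative guesses, not Xenon100's actual analysis parameters.

```python
# How much of the light-WIMP sensitivity lives in the Poisson tail: the
# probability that a recoil whose average light yield is below the analysis
# cut nevertheless fluctuates up and passes it. Illustrative numbers only.
from math import exp, factorial

def prob_at_least(k_min, mean):
    """Poisson probability of observing >= k_min counts given the mean."""
    return 1.0 - sum(exp(-mean) * mean**k / factorial(k) for k in range(k_min))

threshold_pe = 4                      # assumed photoelectron cut
for mean_pe in [0.0, 0.5, 1.0, 2.0]:  # assumed mean yields of a few-keVr recoil
    print(f"mean = {mean_pe:3.1f} PE -> P(pass {threshold_pe} PE cut) = "
          f"{prob_at_least(threshold_pe, mean_pe):.3f}")
```

With a mean of one photoelectron, only about 2 percent of recoils sneak above the cut; and if the true scintillation yield at a few keVr is essentially zero, there is no tail left to fluctuate up, which is Collar and McKinsey's point.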

Tuesday, 27 April 2010

More Trouble with DAMA

I haven't blogged about dark matter for almost 2 months, and already there is a pile of long overdue dark news. This post is about a couple of recent unpublished results that mean trouble for theorists trying to interpret the DAMA signal.

Recall that the DAMA experiment has observed a few-percent annual modulation of the recoil rate registered by their sodium-iodide crystal detector. This modulation could be due to a change of the dark matter flux as the Earth moves around the Sun. However, other dark matter detection experiments (except maybe for CoGeNT) do not observe any signal, which puts strong constraints on the properties of a dark matter particle that could explain all available data. Vanilla-flavor models are excluded by a wide margin; however, until recently two slightly more involved yet still plausible scenarios appeared marginally allowed:
  1. Weak-scale inelastic dark matter. In this scenario a dark matter particle with mass of order 100 GeV scatters into an excited state with a mass splitting of order 100 keV. The inelastic scenario favors heavy targets (such as DAMA's iodine, A = 127), and enhances the modulation rate (only dark matter particles from the tail of the velocity distribution can scatter, so that small changes of the Earth's velocity can significantly change the available phase space).
  2. Light (5-10 GeV) elastic dark matter. This scenario favors very light targets (such as DAMA's sodium, A = 23) and experiments with low detection thresholds (such as DAMA's 2 keV), as light dark matter particles cannot give a large push to heavier target nuclei. A short kinematics sketch of both scenarios follows this list.
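To make the two scenarios a bit more concrete, here is a back-of-the-envelope sketch of the minimum WIMP velocity needed to produce a given recoil. The masses, recoil energies and quenching factors below are round illustrative numbers, not values lifted from the experimental papers.

```python
# Minimal kinematics: the smallest WIMP speed able to produce a nuclear recoil
# of energy E_R on a nucleus of mass number A, optionally with an inelastic
# mass splitting delta. All inputs are illustrative round numbers.
from math import sqrt

AMU_GEV = 0.9315     # nucleon mass unit in GeV
C_KM_S  = 3.0e5      # speed of light in km/s

def v_min_km_s(m_chi_GeV, A, E_R_keV, delta_keV=0.0):
    """Minimum WIMP speed (km/s) to produce recoil E_R, with splitting delta."""
    m_N = A * AMU_GEV
    mu  = m_chi_GeV * m_N / (m_chi_GeV + m_N)
    E_R, delta = E_R_keV * 1e-6, delta_keV * 1e-6    # keV -> GeV
    return (m_N * E_R / mu + delta) / sqrt(2.0 * m_N * E_R) * C_KM_S

# For reference: the fastest halo WIMPs in the lab frame move at ~750-800 km/s.
# 1. Inelastic 100 GeV WIMP with a 100 keV splitting, 30 keV recoil:
for name, A in [("germanium", 73), ("iodine", 127), ("tungsten", 184)]:
    print(f"inelastic on {name:9s}: v_min ~ {v_min_km_s(100, A, 30, 100):5.0f} km/s")

# 2. Elastic 8 GeV WIMP at DAMA's ~2 keVee threshold; quenching factors of
#    roughly 0.3 (sodium) and 0.09 (iodine) translate that into ~7 and ~22 keVr:
print(f"elastic   on sodium   : v_min ~ {v_min_km_s(8, 23, 7):5.0f} km/s")
print(f"elastic   on iodine   : v_min ~ {v_min_km_s(8, 127, 22):5.0f} km/s")
```

In the inelastic case heavier targets lower v_min, so iodine (and CRESST's tungsten) are good probes and only the fastest WIMPs can scatter at all, which amplifies the annual modulation; in the light elastic case sodium is essentially the only nucleus in DAMA's crystal that can be kicked above threshold.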
A few weeks ago, the former possibility was blasted by CRESST - yet another dark matter experiment under the Gran Sasso mountain. CRESST uses CaWO4 crystals as the target, and detects scintillation and phonons to discriminate nuclear recoils (expected from dark matter particles) from alpha, beta, and gamma recoils (induced by ubiquitous backgrounds). The presence of tungsten (A = 184) in their crystal makes it very sensitive to the inelastic scenario. But the latest preliminary results presented at the WONDER2010 conference do not show a clear signal. Although CRESST has a handful of (most likely background) events in the signal band, the number of hits is much smaller than that predicted by the inelastic scenario consistent with the DAMA signal. The collaboration claims that the DAMA region is excluded by more than 3 sigma. This should be treated with a grain of sodium chloride, as the CRESST data are not yet public and the assumptions entering the derivation of the limits are not clearly spelled out in the slides. But most likely, the inelastic window is closing.

The explanation of DAMA via a 5-10 GeV dark matter particle is also facing problems. The (marginal) consistency of this scenario with null results from other experiments hinges on the so-called channeling effect in sodium-iodide crystals. Normally, a particle recoiling against the crystal nuclei deposits most of the recoil energy in the form of lattice excitations (not observed by DAMA), while only a small fraction goes into scintillation (observed by DAMA). Channeling refers to the situation when a recoiling particle gets caught along a symmetry plane of the crystal, undergoing a series of small-angle scatterings and losing most of its energy via scintillation. Since a fraction of the less energetic recoils can be detected thanks to channeling, the detection threshold of the experiment is effectively lowered. The effect is especially important for light dark matter, because in that case the recoil spectrum is sharply peaked toward low energies. The channeling probability reported by DAMA is very large, of order 30 percent in the interesting range of recoil energies, which would greatly increase their sensitivity to light dark matter.
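A back-of-the-envelope illustration of how much channeling buys: an unchanneled sodium recoil deposits only a fraction Q ≈ 0.3 of its energy as scintillation, so DAMA's ~2 keVee threshold corresponds to roughly 7 keV of recoil energy, whereas a channeled recoil is seen at its full energy. The exponential toy spectrum and its slope below are illustrative assumptions, not DAMA's actual response model.

```python
# Effect of channeling on the rate above DAMA's threshold, for a steeply
# falling light-WIMP recoil spectrum modelled as a simple exponential.
# The quenching factor, threshold and spectral slopes are illustrative only.
from math import exp

Q_NA           = 0.3    # approximate sodium quenching factor
E_THRESHOLD_EE = 2.0    # DAMA threshold in keV electron-equivalent

def rate_above_threshold(E0_keV, channeled_fraction):
    """Relative rate above threshold for a spectrum dR/dE ~ exp(-E_R/E0)."""
    unchanneled = exp(-(E_THRESHOLD_EE / Q_NA) / E0_keV)   # threshold ~6.7 keVr
    channeled   = exp(-E_THRESHOLD_EE / E0_keV)            # threshold  2.0 keVr
    return (1 - channeled_fraction) * unchanneled + channeled_fraction * channeled

for E0 in (3.0, 1.5):   # illustrative spectral slopes for a ~10 GeV WIMP
    boost = rate_above_threshold(E0, 0.3) / rate_above_threshold(E0, 0.0)
    print(f"E0 = {E0} keV: 30% channeling boosts the rate above threshold x{boost:.1f}")
```

With these toy numbers, a 30 percent channeled fraction enhances the rate above threshold by a factor of a few; take channeling away and the cross section needed to explain DAMA grows by the same factor, pushing it into territory excluded by other experiments.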

Given its importance, you might expect that channeling in sodium-iodide crystals has been carefully studied by the DAMA collaboration. However, DAMA would not be herself if she dwelled on such trivialities. Instead, the collaboration estimated the channeling probability using Monte Carlo simulations based on a theoretical model not applicable to the actual problem. Recently I came across slides from the Snowpac2010 workshop describing an independent attempt to estimate the channeling fraction using more reliable theoretical assumptions. The preliminary results contradict the conclusion of the DAMA collaboration: the channeling probability in sodium-iodide is negligible. If this is right, simple models of light dark matter cannot consistently explain the DAMA modulation results.

Assuming that both of these preliminary results hold up, we are confronted with an embarrassing situation: there is no plausible theoretical interpretation of the DAMA results. What remains on the market are rather exotic models (e.g. resonant dark matter) or Frankenstein models that patch together several non-trivial effects (inelastic + form factor, inelastic + streams, and so on). So theorists need to think harder. At the same time, the need to independently verify the DAMA experimental results becomes even more acute. Maybe a socially sensitive hacker could upload DAMA's raw data to WikiLeaks ;-)

Monday, 12 April 2010

Another Anomaly from CDF

The CDF multimuon anomaly that hit the news 18 months ago is now almost forgotten (though not quite explained away). It appears that Tevatron's CDF results harbor yet another disturbing anomaly. The story goes back to an innocuous measurement of the transverse momentum of charged particles in minimum bias events. In particle physics slang, minimum bias stands for boring, routine measurements. Minimum bias events typically feature soft (low momentum transfer) QCD interactions between the colliding hadrons, and normally would not even be recorded on tape because they happen too often and do not contain anything we consider interesting (like hard jets, electrons, muons, or missing energy). Only a small random subset of minimum bias events is kept to provide a control sample for tuning Monte Carlo simulations of hadronic collisions.

The main result of that CDF study is plotted on the right. The dashed red line is the prediction of Monte Carlo simulations, while the solid green line is just a line drawn over the data points to make us feel secure. As you can see, the data are well described by the simulations up to a transverse momentum of order 20 GeV. However, around 100 GeV there is a huge discrepancy, some 3 orders of magnitude! The study concludes:
...A comparison with a pythia prediction at the hadron level is performed. The inclusive charged particle differential production cross section is fairly well reproduced only in the transverse momentum range available from previous measurements. At higher momentum the agreement is poor. The dependence of the charged particle transverse momentum on the particle multiplicity needs the introduction of more sophisticated particle production mechanisms, such as multiple parton interactions, in order to be better explained...
that is to say, it's strange but who cares...

Fast forward. A year later, a number of theorists began to ponder the CDF result. Last month, Albino et al. concluded that the discrepancy is too large to be swept under the carpet of theoretical errors, and went as far as suggesting a violation of the QCD factorization theorem. Somewhat later, however, Cacciari et al. argued that even such a radical proposal is not a viable explanation. They observed that for pT of order 100 GeV the charged-particle cross section measured by CDF becomes comparable to the jet cross section measured elsewhere. This means that the excess cannot have anything to do with QCD-like events (unless one assumes that jets in this momentum regime contain on average one charged particle carrying essentially the whole jet pT, which is both absurd and inconsistent with measured particle distributions within jets). If the effect is real, the culprit events must be very different from QCD events, so that they do not affect the measured jet distributions.
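Here is a toy version of that argument; the spectral slope and the fragmentation function below are my own illustrative choices, not numbers from Cacciari et al. Folding a steeply falling jet spectrum with any reasonable charged-hadron fragmentation function suppresses the charged-particle spectrum at a given pT by roughly two orders of magnitude relative to the jet spectrum at the same pT, so an O(1) ratio is very hard to swallow.

```python
# Toy argument: if jets fall as dsigma/dpT ~ pT^-n and each charged hadron
# carries a fraction z of the jet pT with distribution D(z), then the
# charged-particle spectrum at a given pT equals the jet spectrum at that pT
# times the moment integral of z^(n-1) D(z). The slope n and the toy D(z)
# below are illustrative guesses, not fitted values.
import numpy as np

n = 7.0                                   # assumed slope of the jet spectrum
z = np.linspace(1e-3, 1.0, 100_000)
D = (1.0 - z)**3 / z                      # toy fragmentation function ~ (1-z)^3 / z
D /= np.trapz(z * D, z)                   # normalize: hadrons carry all the jet pT
suppression = np.trapz(z**(n - 1.0) * D, z)
print(f"charged-particle / jet spectrum at equal pT ~ {suppression:.3f}")
```

In other words, for the charged-particle and jet cross sections to be comparable at 100 GeV, essentially every jet would have to hand all of its momentum to a single track.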

Could this be new physics then? High-pT tracks could be left by heavy long-lived particles (the likes of charginos or staus in some versions of gauge mediation, or R-hadrons in split supersymmetry). The problem, much as in the case of the CDF multimuon anomaly, is the huge cross section of order tens of nanobarns required to fit the data. Recall that typical models of new physics at the weak scale predict cross sections at least ten thousand times smaller - of order picobarns or less. For example, a 10 nanobarn cross section would correspond to a 20 GeV gluino. It is hard to understand how new physics produced in such large quantities could have escaped detection by multiple searches at the Tevatron. So far, no one has come up with an even remotely viable new physics model explaining the high-pT anomaly from CDF.

At this stage, the most likely explanation is that the anomaly is an experimental error. Maybe a small subset of tracks was misreconstructed so that they appear to have larger momenta than they really do, or maybe a grad student accidentally spilled coffee on the data. Nevertheless, it is mind-boggling that such a large chunk of intriguing data could pass completely unnoticed for so long, just because the discrepancy showed up in a different place than everybody was looking. What else is hiding in the 7 inverse femtobarns of data acquired so far by the Tevatron experiments?

Meanwhile at the Tevatron: ...only the wind is blowing through deserted corridors full of rubble, broken glass and bird droppings. The humans who used to work here have vanished inside the CERN black hole; only a few survivors cower in the Higgs search office. The accelerator is running on sheer inertia, spitting out rolls of paper filled with data which pile up in the basements where rats feed on them...