Experimental collaborations display vastly different attitudes toward sharing their data. In my previous post I described an extreme approach bordering on schizophrenia. At the other end of the spectrum is the Fermi collaboration (hail to Fermi). After one year of taking and analyzing data, they posted on a public website the energy and direction of every gamma-ray photon they had detected. This is of course the standard procedure for all missions funded by NASA (hail to NASA). Now everybody, from a farmer in Guangxi province to a professor at Harvard, has a chance to search for dark matter using real data.
The release of the Fermi data has already spawned two independent analyses by theorists. One is being widely discussed on blogs (here and here) and in popular magazines, whereas the other has passed rather unnoticed. Both papers claim to have discovered an effect overlooked by the Fermi collaboration, and both hint at dark matter as its origin.
The first (chronologically, the second) of the two papers provides a new piece of evidence that the center of our galaxy hosts the so-called haze: a population of hard electrons (and/or positrons) whose spectrum is difficult to explain by conventional astrophysical processes. The haze was first observed by Jimi Hendrix ('Scuse me while I kiss the sky). Later, Doug Finkbeiner came across the haze while analyzing the maps of cosmic microwave radiation provided by WMAP; in fact, that was also an independent analysis of publicly released data (hail to WMAP). The WMAP haze is supposedly produced by synchrotron radiation from these electrons. But the same electrons should also produce gamma rays when interacting with interstellar light, in the process known as inverse Compton scattering (Inverse Compton was Arthur's younger brother), ICS for short. The claim is that Fermi has detected these ICS photons. You can even see it yourself if you stare long enough at the picture.
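To see why GeV photons are the natural ICS signature of such electrons, here is a rough Thomson-limit estimate with illustrative numbers of my own choosing (not taken from the paper). An electron with Lorentz factor $\gamma$ upscatters an ambient photon of energy $\epsilon$ to
$$E_\gamma \sim \frac{4}{3}\,\gamma^2\, \epsilon .$$
For a haze electron of energy 10 GeV, $\gamma \simeq 2\times 10^4$, scattering on starlight with $\epsilon \simeq 1$ eV, this gives $E_\gamma \sim 0.5$ GeV, squarely in the Fermi band.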
The second paper also takes a look at the gamma rays arriving from the galactic center, but uncovers a completely different signature. There seems to be a bumpy feature around a few GeV that does not fit the simple power-law spectrum expected from the background. The paper argues that a dark matter particle with mass around 30 GeV annihilating into b-quark pairs can fit the bump. The required annihilation cross section is fairly low, of order $10^{-25}\ \mathrm{cm^3/s}$, only a factor of 3 larger than the one needed to explain the observed abundance of dark matter as a thermal relic. That would make this dark matter particle closer to a standard WIMP, as opposed to the recently popular dark matter particles designed to explain the PAMELA positron excess, which need a much larger mass and cross section.
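For orientation, the factor of 3 follows from comparing with the canonical thermal-relic annihilation cross section, $\langle\sigma v\rangle_{\rm thermal} \simeq 3\times 10^{-26}\ \mathrm{cm^3/s}$:
$$\frac{\langle\sigma v\rangle_{\rm fit}}{\langle\sigma v\rangle_{\rm thermal}} \sim \frac{10^{-25}}{3\times 10^{-26}} \approx 3,$$
whereas the PAMELA-motivated models typically need cross sections boosted a hundred or a thousand times above the thermal value.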
Sadly, collider physics has a long way to go before reaching the same level of openness. Although collider experiments are 100% financed by public funds, and although the acquired data have no commercial value, the data remain the property of the collaboration without ever being publicly released, not even after the collaboration has dissolved into nothingness. The only logical reason for that is inertia: quick and easy access to data and analysis tools has become available to everybody only quite recently. Another argument raised on such occasions is that only the collaboration that produced the data is able to understand and properly handle them. That argument misses the point. Surely, the collaboration can do any given analysis ten times better and more reliably. However, some analyses are simply never done, whether for lack of manpower or out of laziness, and others are marred by theoretical prejudice. LEP is a perfect example here. Several important searches were never done because, at the time, there was no motivation from popular theories. In particular, it is not excluded that the Higgs boson exists with a mass accessible to LEP (that is, less than 115 GeV) but was missed because some possible decay channels were never studied. It may well be that groundbreaking discoveries are stored on the LEP tapes rotting on dusty shelves in the CERN catacombs. That danger could easily be avoided if the LEP data were publicly available in an accessible form.
In the end, what do we have to lose? In the worst-case scenario, unrestricted access to the data will just lead to more entries on this blog ;-)
Update: At the Fermi Symposium this week in Washington, the collaboration trashed both of the above dark matter claims.
Tuesday, 27 October 2009
What's really behind DAMA
More than once I have written in this blog about crazy theoretical ideas to explain the DAMA modulation signal. There is a good excuse: in this decade, DAMA has been the main source of inspiration for extending dark matter model building beyond the simple WIMP paradigm; in particular, inelastic dark matter was conceived that way. This in turn has prompted experimenters to tighten the net of searches to include signals from non-WIMP dark matter particles. More importantly, blog readers always demand sensation, scandal and blood (I know, I'm a reader myself), so spectacular new physics explanations are always preferred. Nevertheless, prompted by a commenter, I thought it might be useful to restore some balance and describe a more trivial explanation of the DAMA signal, one that involves a systematic effect rather than dark matter particles.
Unlike most dark matter detection experiments, the DAMA instrument has no sophisticated background rejection (they only reject coincident hits in multiple crystals). That might be an asset, because they are a priori sensitive to a variety of dark matter particles, whether scattering elastically or inelastically, whether scattering on nucleons or electrons, and so on. But at the same time, most of their hits come from mundane and poorly controlled sources such as natural radioactivity, which makes them vulnerable to unknown or underestimated backgrounds.
One important source of background is the contamination of DAMA's sodium iodide crystals with radioactive elements like uranium-238, iodine-129 and potassium-40. The last one is the main culprit, because some of its decay products have the same energy as the putative DAMA signal. Potassium, sitting in the same column of Mendeleev's table as sodium, can easily sneak into the crystal lattice. The radioactive isotope 40K is present with roughly 0.01 percent abundance in natural potassium. About ten percent of the time, 40K decays (by electron capture) to an excited state of argon-40, which is followed by a 1.4 MeV de-excitation photon and the emission of Auger electrons with energy 3.2 keV. This process is known to occur in the DAMA detector at a sizable rate; in fact, DAMA itself measured this background by looking for coincidences of the MeV photons with 3 keV scintillation signals, see the plot above. The same background is also responsible for the little peak at 3 keV in the single-hit spectrum measured by DAMA, see below (note that this is not the modulated spectrum on which the DAMA claim is based!). The peak here is due precisely to the Auger radiation.
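As a back-of-the-envelope cross-check, here is a minimal Python sketch of how one would estimate the rate of these decays in a crystal. The nuclear inputs (isotopic abundance, half-life, branching ratio) are standard numbers; the crystal mass and the potassium contamination level in the example are made-up illustrative inputs, not DAMA's actual values.

```python
# Back-of-the-envelope estimate of the 40K background rate in a NaI(Tl)
# crystal. Nuclear data are standard; the contamination level passed in
# below is a made-up illustrative number, not DAMA's measured value.
import math

AVOGADRO = 6.022e23                  # atoms per mole
K40_ABUNDANCE = 1.17e-4              # 40K fraction of natural potassium
K_MOLAR_MASS = 39.1                  # g/mol for natural potassium
K40_HALF_LIFE_S = 1.25e9 * 3.156e7   # 1.25 billion years, in seconds
EC_BRANCHING = 0.107                 # electron capture to excited 40Ar,
                                     # the branch producing 3 keV Augers

def k40_auger_rate(crystal_mass_kg, potassium_ppb):
    """Rate (per day) of 40K decays producing the 3 keV Auger signal."""
    potassium_grams = crystal_mass_kg * 1e3 * potassium_ppb * 1e-9
    n_k40_atoms = potassium_grams / K_MOLAR_MASS * AVOGADRO * K40_ABUNDANCE
    decays_per_second = n_k40_atoms * math.log(2) / K40_HALF_LIFE_S
    return decays_per_second * EC_BRANCHING * 86400

# Example: a 10 kg crystal contaminated at 20 ppb natural potassium.
print(f"{k40_auger_rate(10, 20):.0f} Auger-producing decays per day")
```

Even ppb-level contamination thus yields tens of 3 keV events per day per crystal, which is why this background matters.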
Now, look at the spectrum of the time-dependent component of the signal, where DAMA claims to have found evidence for dark matter. The peak of the annual modulation signal occurs precisely at 3 keV. The fact that the putative signal sits right on top of a known background is VERY suspicious.
One should admit that it is not entirely clear what could cause the background to modulate, although some subtle annual effect in the efficiency for detecting the Auger radiation is not implausible. So far, DAMA has not presented any convincing argument that would exclude 40K as the origin of their modulation signal.
Actually, it would be easy to check whether 40K is the culprit: just put one of the DAMA crystals in an environment where the efficiency for detecting the decay products of 40K is nearly 100 percent, for example inside the Borexino balloon waiting next door in the Gran Sasso laboratory. In fact, the Borexino collaboration has made this very proposal to DAMA. The answer was a resounding no.
There is another way Borexino could quickly refute or confirm the DAMA claim. Why not buy sodium iodide crystals directly from Saint-Gobain, the company that provided the crystals for DAMA? Not so fast. In its contract, DAMA has secured exclusive, eternal rights to the use of sodium iodide crystals produced by Saint-Gobain. At this point it comes as no surprise that DAMA threatens legal action if the company attempts to breach this "intellectual" property.
There are more stories that make the hair on your chest stand on end. One often hears the phrase "a very specific collaboration" used about DAMA, which is a roundabout way of saying "a bunch of assholes". Indeed, DAMA has worked very hard to earn its bad reputation, and it is sometimes difficult to tell whether the root of it is mere paranoia or also bad will. The problem, however, is that the history of physics offers a few examples of technologically or intellectually less sophisticated experiments beating better competitors; take Penzias and Wilson, for example.
So we will not know for sure whether the DAMA signal is real until it is definitively refuted or confirmed by another experiment. Fortunately, it seems that the people from Borexino have not given up yet. I recently heard a talk by Cristiano Galbiati, who said that the Princeton group is planning to grow their own sodium iodide crystals. That will take time, but the advantage is that they should be able to obtain better, more radio-pure crystals, and thus reduce the potassium-40 background by orders of magnitude. So maybe two years from now the DAMA puzzle will finally be cleared up...
Monday, 5 October 2009
Early LHC Discoveries
It seems that the LHC restart will not be significantly delayed beyond this November. The moment the first protons collide at 7 TeV will send particle theorists into an excited state. From day one we will start harassing our CMS and ATLAS colleagues, begging for a hint of an excess in the data or offering sex for a glimpse of an invariant mass distribution. That will be the case despite the very small odds of seeing any new physics during the first months. Indeed, the results acquired so far by the Tevatron make it very unlikely that spectacular phenomena will show up in the early LHC data. Although the LHC will operate at roughly 3.5 times the Tevatron's energy in its first year, the Tevatron has the advantage of some 100 times more integrated luminosity, not to mention a much better understanding of its detectors.
Nevertheless, it's fun to play the following game of imagination: what kind of new physics could show up at the early LHC without having already been discovered at the Tevatron? Two general conditions have to be satisfied:
- There has to be a resonance coupled to the light quarks (so that it can be produced at the LHC with a large enough cross section) whose mass is just above the Tevatron reach, say 700-1000 GeV (so that the cross section at the Tevatron, but not at the LHC, is kinematically suppressed).
- The resonance has to decay to electrons or muons with a sizable branching fraction (so that the decay products can be seen in relatively clean, nearly background-free channels).
An obvious candidate, analyzed in a recent paper, is a Z' boson: the gauge boson of a new U(1) symmetry under which the standard model fermions are charged. The possible couplings of such a Z' to quarks and leptons can be theoretically constrained: imposing anomaly cancellation, flavor independence, and the absence of exotic fermions at the TeV scale implies that the charge of the new U(1) acts on the standard model fermions as a linear combination of the familiar hypercharge and the B-L number. Thus, one can describe the parameter space of these Z' models with just three parameters: the two couplings $g_Y$ and $g_{B-L}$ and the Z' mass. This simple parametrization allows us to quickly scan through all the possibilities. An example slice of the parameter space for a Z' mass of 700 GeV is shown in the picture on the right. The region allowed by the Tevatron searches is painted blue, while the region allowed by electroweak precision tests is pink (the coupling of the Z' to electrons induces effective four-fermion operators that were constrained at LEP-II). As you can see, these two constraints imply that both couplings have to be smallish, of order 0.2 at most, which is even less than the hypercharge coupling g' of the standard model. That in turn implies that the production cross section at the LHC will be suppressed. Indeed, the region where a discovery at the LHC with 7 TeV and 100 inverse picobarns is impossible, marked in yellow, almost fully overlaps with the allowed parameter space. Only a tiny region (red arrow) is left for that particular mass, and even that pathetic scrap is likely to be wiped out once the Tevatron updates its Z' analyses.
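To make the three-parameter scan concrete, here is a schematic Python sketch of how such a grid scan could be organized. The three constraint functions below are crude placeholders that I made up for illustration; the real analysis requires computing $\sigma \times BR$ against the Tevatron limits, the LEP-II four-fermion operator bounds, and the LHC discovery reach.

```python
# Schematic grid scan of the (gY, gB-L) plane for a fixed Z' mass.
# The three "allowed/discoverable" tests are made-up placeholders;
# only the structure of the scan is meant to be illustrative.
import numpy as np

MZP = 700.0  # GeV, the example Z' mass discussed in the text

def tevatron_allowed(gy, gbl):
    # Placeholder: the real test compares sigma x BR(Z' -> ll)
    # against the CDF/D0 dilepton resonance limits.
    return gy**2 + gbl**2 < 0.2**2

def lep2_allowed(gy, gbl):
    # Placeholder: the real test bounds the four-fermion operators
    # of size ~ g_e * g_f / MZP^2 generated by Z' exchange.
    return abs(gy * gbl) * (1000.0 / MZP)**2 < 0.05

def lhc_discoverable(gy, gbl):
    # Placeholder for the 7 TeV / 100 pb^-1 discovery reach.
    return gy**2 + gbl**2 > 0.15**2

grid = np.linspace(-0.5, 0.5, 201)
viable = [(gy, gbl) for gy in grid for gbl in grid
          if tevatron_allowed(gy, gbl)
          and lep2_allowed(gy, gbl)
          and lhc_discoverable(gy, gbl)]
print(f"{len(viable)} grid points both allowed and discoverable "
      f"for MZ' = {MZP:.0f} GeV")
```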
The above example illustrates how difficult it is to cook up a model suitable for an early discovery at the LHC. Part of the reason why the Z' is not a good candidate is that it is produced in quark-antiquark collisions. Those are a frequent occurrence at a proton-antiproton collider like the Tevatron, whereas at the LHC, which is a proton-proton collider, one has to pay the PDF price of finding an antiquark inside the proton. An interesting way out, going under the name of a diquark resonance, was proposed in another recent paper. If the new resonance carries the quantum numbers of two quarks (rather than a quark-antiquark pair), then the LHC has a tremendous advantage over the Tevatron, as the resonance can be produced in quark-quark collisions, which are far more frequent at the LHC. Because of that, a large number of diquark events may be produced at the LHC in spite of the Tevatron constraints. The remaining piece of model building is to ensure that the diquark resonance decays to leptons often enough.
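To put a toy number on the PDF price, the sketch below compares the quark-quark and quark-antiquark parton luminosities for producing a 700 GeV resonance at a 7 TeV proton-proton machine. The parton densities are made-up illustrative shapes (a hard valence piece and a soft, small sea piece), not a real PDF fit; only the qualitative conclusion, that quark-quark production wins by a sizable factor, is the point.

```python
# Toy illustration of the "PDF price" at a proton-proton collider:
# compare quark-quark vs quark-antiquark parton luminosities for a
# heavy resonance. The densities are made-up shapes, not a real PDF set.
from scipy.integrate import quad

def valence(x):
    # Toy valence quark density: hard, sizable at large x.
    return 2.0 * x**-0.5 * (1.0 - x)**3

def sea(x):
    # Toy sea antiquark density: softer and much smaller at large x.
    return 0.2 * x**-1.1 * (1.0 - x)**7

def luminosity(f1, f2, tau):
    """Parton luminosity dL/dtau = int_tau^1 dx/x f1(x) f2(tau/x)."""
    integrand = lambda x: f1(x) * f2(tau / x) / x
    return quad(integrand, tau, 1.0)[0]

tau = (700.0 / 7000.0)**2               # M^2/s for 700 GeV at 7 TeV
qq = luminosity(valence, valence, tau)  # diquark-like production
qqbar = luminosity(valence, sea, tau)   # Z'-like production
print(f"toy quark-quark / quark-antiquark luminosity ratio: {qq/qqbar:.1f}")
```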
Diquarks are absent from the most popular extensions of the standard model, so they might appear to be artificial constructs. However, they can be found in somewhat more exotic models, for example the MSSM with broken R-parity. That model allows couplings like $u^c d^c \tilde b^c$, where $u^c, d^c$ are the right-handed up and down quarks, while $\tilde b^c$ is the scalar partner of the right-handed bottom quark, called the (right) sbottom. Obviously, this coupling violates R-parity, because it contains only one superparticle (in the standard MSSM, supersymmetric particles always couple in pairs). The sbottom can then be produced in collisions of an up and a down quark, both of which are easy to find inside protons.
Decays of the sbottom are very model dependent: the parameter space of supersymmetric theories is as good as infinite and can accommodate numerous possibilities. Typically, the sbottom undergoes a complicated cascade decay that may or may not involve leptons. For example, if the lightest supersymmetric particle is the scalar partner of the electron (the selectron), the sbottom can decay into a bottom quark plus a neutralino, which decays into an electron plus a selectron, which finally decays into an electron and 3 quarks:
$\tilde b^c \to b\, \chi_1^0 \to b\, e\, \tilde e \to b\, e\, e\, jjj$
As a result, the LHC would observe two hard electrons plus a number of jets in the final state, something that should not be missed.
To wrap up: the first year at the LHC will likely end up being an "engineering run" in which the standard model is "rediscovered", to the important end of calibrating the detectors. However, if the new physics is exotic enough, and the stars are lucky enough, there might be some real excitement in store.