Thursday, 23 April 2009

As of today, only the DAMA experiment in Gran Sasso claims to have detected dark matter particles. The claim is based on observing an annual modulation of the number of scattering events in DAMA's sodium iodide detector. Such an effect could arise due to the motion of the Earth around the Sun, which implies an annual variation of the Earth's velocity with respect to the sea of dark matter pervading our galaxy.
The experimental community is divided about DAMA. One half considers them ignorant, with no idea what they're doing, whereas the other half thinks that they deliberately rigged their results. Theorists, on the other hand, are by construction more open-minded (or maybe just bored), and they sometimes entertain the possibility that the DAMA signal might actually be dark matter. The challenge is then to explain why other, in principle more sensitive, detection techniques have yielded null results. There have been several more or less contrived proposals to reconcile DAMA with the stringent limits from other direct detection experiments like CDMS, XENON, CRESST, ZEPLIN and KIMS. The DAMA signal can be explained by standard WIMP dark matter scattering off sodium atoms if the dark matter particle has a fairly small mass, of order 5 GeV (although there is some controversy about this interpretation). This post is about another scenario called inelastic dark matter, iDM in short. It was originally proposed quite some time ago, but lately it has become more and more fashionable.
A typical WIMP particle scatters elastically off the target nuclei, that is to say, it retains its identity in the process. In the iDM scenario, on the other hand, the cross section for elastic scattering is assumed to be suppressed. Instead, the dark matter particle scatters inelastically into a slightly heavier partner. If the mass splitting between the two dark matter states is of order 100 keV - the typical kinetic energy in the dark matter sea - the DAMA signal can, with a bit of luck, be reconciled with the bounds from other experiments.
The way it works is the following. All direct detection experiments attempt to measure the recoil energy of a nucleus that has been hit by a passing dark matter particle. In the iDM scenario, the minimal velocity of the incoming dark matter particle needed to produce the recoil energy $E_R$ is given by the formula
$v_{min} = \frac{\delta+ m_N E_R/\mu_N }{\sqrt{2 m_N E_R}}$,
where $\mu_N$ is the reduced mass of the dark matter-nucleus system and $\delta$ is the mass splitting between the two dark matter states. As long as the splitting term dominates, heavier targets require a lower velocity to give them a kick. DAMA's target contains pretty heavy iodine (A=127), as compared to CDMS's germanium with A=73. The sea of dark matter is expected to have a Maxwellian velocity distribution that falls off rapidly above the peak velocity, which is of order $v \sim 0.001$ in units of the speed of light, so even a small change of the minimal velocity may significantly affect the number of events. For the same reason, the modulation signal studied by DAMA is enhanced, because the small summer/winter variation of the dark matter velocity distribution (in the Earth's reference frame) may lead to a large variation of the signal. All in all, there remains some allowed parameter space, as can be seen in the example plot borrowed from this paper. For a fixed dark matter mass, the DAMA region in the mass splitting - cross section plane is marked in magenta, while the black lines are the current bounds, the most stringent coming from CDMS (solid) and CRESST (dashed).
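Coming back to the kinematics, here is a minimal numerical sketch with illustrative inputs of my own choosing (a 100 GeV dark matter particle, a 100 keV splitting, a 30 keV nuclear recoil - none of these numbers are taken from the plot above) comparing the minimal velocity for the two targets:

```python
# A minimal sketch (illustrative inputs, not from the quoted paper):
# m_DM = 100 GeV, delta = 100 keV, nuclear recoil E_R = 30 keV.
# Everything is in keV with c = 1, so v_min comes out as a fraction of c.
import math

C_KM_S = 2.998e5       # speed of light in km/s
AMU_KEV = 0.9315e6     # atomic mass unit in keV

def v_min(A, m_dm, E_R, delta):
    """v_min = (delta + m_N E_R / mu_N) / sqrt(2 m_N E_R), in units of c."""
    m_N = A * AMU_KEV                    # target nucleus mass
    mu_N = m_dm * m_N / (m_dm + m_N)     # dark matter-nucleus reduced mass
    return (delta + m_N * E_R / mu_N) / math.sqrt(2.0 * m_N * E_R)

m_dm = 100e6  # 100 GeV in keV
for name, A in [("iodine (A=127)", 127), ("germanium (A=73)", 73)]:
    print(f"{name:17s} v_min ~ {v_min(A, m_dm, 30.0, 100.0) * C_KM_S:.0f} km/s")
```

With these inputs iodine needs $v_{min}$ of roughly 590 km/s while germanium needs roughly 710 km/s, so the heavier target probes a larger and less suppressed part of the velocity tail.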
There is also a purely sociological reason why the bounds from other experiments get relaxed: iDM has not really been searched for... The nature of iDM leads to a very peculiar nuclear recoil spectrum. Whereas for the standard WIMP the number of events grows exponentially at low recoil energies, the recoil spectrum in the iDM scenario is suppressed at low energies and displays a "resonant" shape. Most experiments derive their bounds assuming the standard recoil spectrum and do not optimize their search strategies to probe non-standard scenarios. For this reason, the idea of iDM is relevant for dark matter searches irrespective of DAMA. It is a phenomenologically distinct possibility that should be taken into account, and one may easily miss the Nobel prize by restricting oneself to the standard WIMP paradigm.
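As an illustration of that spectral shape, here is a rough sketch under the standard halo model with simplifications of my own (no nuclear form factor, no escape-velocity cutoff, halo parameters $v_0 = 220$ km/s and $v_E = 232$ km/s); in this approximation the differential rate at a given recoil energy is proportional to the mean inverse speed above $v_{min}(E_R)$:

```python
# Rough sketch of the recoil spectrum shape on an iodine target: elastic
# (delta = 0) vs inelastic (delta = 120 keV) scattering of a 100 GeV particle.
# Simplifications: standard Maxwellian halo, no form factor, no escape cutoff.
import math

C_KM_S = 2.998e5                  # speed of light in km/s
AMU_KEV = 0.9315e6                # atomic mass unit in keV
V0, VE = 220.0, 232.0             # halo velocity dispersion and Earth speed, km/s

def v_min(A, m_dm, E_R, delta):
    m_N = A * AMU_KEV
    mu_N = m_dm * m_N / (m_dm + m_N)
    return C_KM_S * (delta + m_N * E_R / mu_N) / math.sqrt(2.0 * m_N * E_R)

def eta(vmin):
    """Mean inverse speed over a boosted Maxwellian, in (km/s)^-1."""
    x, y = vmin / V0, VE / V0
    return (math.erf(x + y) - math.erf(x - y)) / (2.0 * V0 * y)

m_dm = 100e6  # 100 GeV in keV
for E_R in [10, 30, 50, 70, 90]:
    elastic = eta(v_min(127, m_dm, E_R, 0.0))
    inelastic = eta(v_min(127, m_dm, E_R, 120.0))
    print(f"E_R = {E_R:2d} keV: elastic ~ {elastic:.1e}, inelastic ~ {inelastic:.1e}")
```

The elastic rate falls monotonically with $E_R$, while the inelastic one is strongly suppressed at low recoil energies and peaks at intermediate $E_R$ - the "resonant" shape mentioned above - so a search optimized for low-energy recoils can simply miss it.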
From the theoretical point of view, models of iDM are not difficult to write down. One simple possibility is the dark matter particle being a Dirac fermion with a large mass of order 100 GeV, spiced up by a small 100 keV Majorana mass. The latter leads to the required splitting between the two Majorana mass eigenstates. Furthermore, if the Dirac fermion has vector interactions, the vector boson couples off-diagonally in the mass eigenstate basis, and elastic scattering is suppressed with respect to inelastic scattering. Another simple realization of iDM is a complex scalar whose two real components are split by a small "holomorphic" mass term. There are no obstacles to embedding iDM into mainstream theories beyond the Standard Model. For example, in the MSSM, the Standard Model neutrino is partnered by a sneutrino, which is a complex scalar, and the mass splitting could originate from a small lepton-number-violating term $(L H)^2$ in the superpotential.
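To flesh out the first example with some standard algebra (a generic sketch, not tied to any particular paper): take a Dirac fermion $\Psi$ with mass $M \sim 100$ GeV, add a small Majorana mass $m \sim 100$ keV, and decompose $\Psi$ into two Majorana components $\chi_{1,2}$,

$\mathcal{L} \supset -M \bar{\Psi}\Psi - \frac{m}{2}\left(\overline{\Psi^c}\Psi + {\rm h.c.}\right), \qquad \Psi = \frac{1}{\sqrt{2}}\left(\chi_1 + i \chi_2\right),$

$\mathcal{L} \supset -\frac{1}{2}(M+m)\,\bar{\chi}_1 \chi_1 - \frac{1}{2}(M-m)\,\bar{\chi}_2 \chi_2, \qquad \bar{\Psi}\gamma^\mu \Psi = i\, \bar{\chi}_1 \gamma^\mu \chi_2 .$

The two mass eigenstates are split by $\delta = 2m$, and the vector current is purely off-diagonal (the diagonal vector current of a Majorana fermion vanishes identically), so vector-mediated scattering is automatically inelastic.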
So, let's just keep our fingers crossed while waiting for new results from CRESST, XENON-100, LUX, KIMS and many others.
See also this post on Dirac Sea.
Friday, 10 April 2009
Inconstant
The funniest April Fools prank was definitely the one about the time variation of $\pi$. That idea is of course absurd, because the Bible unambiguously sets the value of $\pi$ to be equal to three. But physical constants like the QCD scale or the Fermi constant are not mentioned in the Bible, which suggests that they might not be constants. Recently, Harald Fritzsch posted on arXiv a neat status report of the various theoretical and experimental pursuits of varying fundamental constants.
For almost a century the idea of varying fundamental constants has been attracting the most brilliant minds and complete crackpots alike. At the theoretical level the mechanism is easy to imagine: the physical constants can be set by the vacuum expectation value of a scalar field that evolves on cosmological timescales. In high-energy theory we already have one evolving scalar field for inflation, and sometimes another one for quintessence, so introducing yet another one for varying constants is not that difficult to swallow.
At the beginning of this century the idea received renewed attention due to experimental claims that the electromagnetic constant $\alpha$ may vary in time. A group of astrophysicists studying absorption spectra of very distant quasars concluded that 10 billion years ago $\alpha$ was smaller than today by $\Delta \alpha/\alpha \sim 10^{-5}$, corresponding to a time variation of order $10^{-15}$ per year. This claim is very controversial because of the various assumptions involved in the determination of $\alpha$ and, most of all, because other groups did not confirm the result. A more recent claim that the proton-to-electron mass ratio was different 10 billion years ago also remains highly controversial.
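To see where the quoted rate comes from, just divide the claimed shift by the lookback time, assuming a roughly linear drift:
$\frac{1}{\alpha}\frac{\Delta \alpha}{\Delta t} \sim \frac{10^{-5}}{10^{10}\ {\rm yr}} = 10^{-15}\ {\rm yr}^{-1}$.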
Yet another reason why the above claims are taken with a huge grain of salt is that the so-called Oklo bounds imply a slower variation of $\alpha$. Two billion years ago, when the Earth was young and beautiful, the uranium-235 isotope was five times more abundant than today. Thanks to that fact and some other lucky coincidences, near the river Oklo in today's Gabon nature could create a fully organic nuclear reactor, which operated, on and off, for a few hundred thousand years. The uranium fission produced many rare isotopes, and the particular ratio of Samarium-149 to Samarium-147 can be used to constrain the variation of the fundamental constants. The point is that the cross section for neutron capture on Samarium-149 is accidentally enhanced by the presence of a resonance just 0.1 eV above the threshold. From the fact that the position of this resonance could not have migrated by more than 0.1 eV one can set the bound $\Delta \alpha/\alpha \lesssim 10^{-7}$ (assuming that only the electromagnetic constant varies), corresponding to a time variation of order $10^{-16}$ per year. If $\alpha$ was changing faster than that (as suggested by some astrophysical results), it must have stabilized at least two billion years ago.
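The same linear-drift arithmetic applied to the Oklo bound gives
$\frac{1}{\alpha}\frac{\Delta \alpha}{\Delta t} \lesssim \frac{10^{-7}}{2\times 10^{9}\ {\rm yr}} \simeq 5\times 10^{-17}\ {\rm yr}^{-1} \sim 10^{-16}\ {\rm yr}^{-1}$,
roughly an order of magnitude below the rate suggested by the quasar claim.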
In the near future there is hope for more progress from precision measurements in a controlled laboratory environment. Experiments in quantum optics have recently reached a sensitivity to varying constants similar to that of the astrophysical observations. In particular, Theodor Haensch's group in Munich is running an experiment that studies the time variation of the frequency of the 1s-2s transition in atomic hydrogen (review here). The measurements from different years are referenced to the hyperfine transition of Cesium-133 and to another precision measurement of quadrupole transitions in Mercury, which allows them to constrain the variation of both the electromagnetic constant and the QCD scale. The results published several years ago constrain the variation of both at the level of a few times $10^{-15}$ per year.
Actually, Harald Fritzsch is spreading wild rumors that the most recent results from Munich imply a time variation of the QCD scale at the level of $3 \cdot 10^{-15}$ per year. Well, I'd rather bet that at the end of the day the constants will once more turn out to be constants. But who knows... after all, the Hubble constant has changed by some 17 orders of magnitude since nucleosynthesis.
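For the record, that last quip is about right. During radiation domination $H \sim T^2/M_{\rm Pl}$, so at nucleosynthesis (taking $T \sim 1$ MeV and $M_{\rm Pl} \sim 10^{19}$ GeV)
$H_{\rm BBN} \sim \frac{(10^{-3}\ {\rm GeV})^2}{10^{19}\ {\rm GeV}} \sim 10^{-25}\ {\rm GeV},$
while today $H_0 \approx 70$ km/s/Mpc $\approx 1.5\times 10^{-42}$ GeV, a ratio of roughly $10^{17}$.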
Thursday, 2 April 2009
Dark Matter more like Baryons
April Fools is over; I'm staying dead serious for the rest of the year. The most serious business in the months before the first LHC results is dark matter searches, high and low. Here is another idea of what they might find.
The most popular scenario for dark matter assumes that it consists of weakly interacting massive particles (WIMPs) that were once in thermal equilibrium. In the early hot and dense universe such a particle can efficiently annihilate into familiar particles like photons or electrons, and in this way dark matter is kept in equilibrium with the rest of the cosmic plasma. The equilibrium ceases to hold when the temperature T of the universe falls below the dark matter particle mass M. In that regime the number of dark matter particles decreases very quickly - as the exponential $e^{-M/T}$ - and at some point dark matter freezes out: there aren't enough dark matter particles around for them to find each other and annihilate. The surviving particles float around the universe playing hide and seek with astronomers and physicists alike.
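A quick sketch of how fast that Boltzmann suppression kicks in (illustrative only; up to numerical factors the equilibrium number of particles per photon scales as $(M/T)^{3/2} e^{-M/T}$ once $T < M$):

```python
# Illustrative sketch of the Boltzmann suppression of the equilibrium
# abundance once the temperature T drops below the particle mass M.
# Up to numerical factors, n_eq / n_photon ~ (M/T)^(3/2) * exp(-M/T).
import math

def eq_abundance(x):
    """Relative equilibrium abundance at x = M/T (non-relativistic limit)."""
    return x**1.5 * math.exp(-x)

for x in [1, 5, 10, 20, 25]:
    print(f"M/T = {x:2d}: n_eq/n_photon ~ {eq_abundance(x):.1e}")
```

By $M/T \sim 20$-$25$, a typical WIMP freeze-out temperature, the would-be equilibrium abundance has fallen to the $10^{-7}$-$10^{-9}$ level; annihilations can no longer keep up, and the actual abundance freezes out.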
The WIMP scenario is nice and robust, but it sheds little light on the surprising fact that the present abundance of dark matter $\Omega_{DM}$ is very close to that of ordinary matter, which today is dominated by the baryon (protons and neutrons) abundance $\Omega_{B}$. After the WMAP data we are confident that the ratio $\Omega_{DM}/\Omega_B$ is roughly five. Of course, one can always cook up the parameters of the WIMP model such that this constraint is satisfied, but the proximity of $\Omega_B$ and $\Omega_{DM}$ is nevertheless intriguing. It may suggest that baryons and dark matter have a common origin. But baryons are definitely NOT a cold relic!
In fact, we don't know for sure what the origin of baryons in our universe is, but we have a bunch of ideas that go under the name of baryogenesis. The general idea is that the very early universe contained equal numbers of baryons and antibaryons, but at some point in its evolution the fundamental interactions in the plasma produced a tiny, $10^{-10}$, asymmetry between matter and antimatter. Once the temperature fell below the baryon mass, most of the baryons and antibaryons annihilated with each other and turned into the sea of photons, leaving only the small unpaired $10^{-10}$ fraction of baryons. These are the protons and neutrons that make up galaxies and stars today.
Is it conceivable that dark matter originated in a similar fashion? That is to say, the early universe contained dark matter and anti-dark-matter particles which almost completely annihilated away, leaving only the small asymmetric fraction? Can the dark asymmetry and the baryon asymmetry have a common origin? The answer to these questions is yes, and the first practical realization I'm aware of is due to David B. Kaplan in the early nineties. In that model, the dark matter particle carries a charge under an additional global U(1) symmetry which, much like the U(1) baryon symmetry, has a mixed anomaly with the electroweak SU(2) gauge symmetry. Because of the anomalies, non-perturbative electroweak interactions that are active in the early universe violate both the baryon number and the dark matter number. Then, if certain conditions are satisfied, the electroweak phase transition generates baryon and dark asymmetries of roughly the same order. At the end of the day one obtains the relation $\Omega_{DM}/\Omega_B \sim m_{DM}/m_{proton}$, and the experimentally measured ratio is recovered if the dark matter particle's mass is around 5 GeV.
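The arithmetic behind that last number is simply
$\frac{\Omega_{DM}}{\Omega_B} \simeq \frac{m_{DM}\, n_{DM}}{m_{proton}\, n_{B}} \simeq \frac{m_{DM}}{m_{proton}}$ for comparable asymmetries $n_{DM} \simeq n_B$,
so $\Omega_{DM}/\Omega_B \approx 5$ points to $m_{DM} \approx 5\, m_{proton} \approx 4.7$ GeV (with the exact ratio of the two asymmetries being model dependent).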
Kaplan's original model is long gone for several reasons, but the idea is still floating around in the backchannels of model building. The most recent attempt, in the context of supersymmetry, was made by David E. Kaplan et al. (David E. Kaplan is a more recent version of David B. Kaplan with more features). In that model there's no new quantum number invented especially for the dark matter particle; instead, it carries the lepton (or, in another version, the baryon) quantum number. Furthermore, the model does not rely on electroweak baryogenesis but rather assumes that the B-L asymmetry is generated at high energies (for example by leptogenesis). That asymmetry is later redistributed between baryons and dark matter by higher-dimensional interactions. When these interactions fall out of equilibrium, the dark matter asymmetry is frozen in, and one again ends up with $\Omega_{DM}/\Omega_B \sim m_{DM}/ m_{proton}$. All that remains is to find 5-15 GeV dark matter in the sky, at colliders, or by direct detection...
Wednesday, 1 April 2009
April Fools'09: No Higgs particle after all?
While the LHC, after initial difficulties, is on a straight path to first collisions, the particle physics community has been given yet another bitter pill to swallow. Yesterday at CERN, Peter Higgs gave a seminar whose message was truly shocking. No more, no less: Higgs demonstrated that the famous particle that carries his name cannot exist! He pointed out a mistake in his original '64 paper, which did not take into account the topological anomalies of the symmetry group of the Standard Model. He also presented a concise and elegant proof that the existence of the Higgs particle would lead to an instability of the vacuum.
In reply to angry voices from the audience, Higgs said:
Yes, I got it wrong back then. I mean, I got it right at first, but then the referee confused me and I added the particle to get the paper published quickly.

Why did Higgs wait more than 40 years to correct his mistake? He explained:
You know, I was flattered - not everybody has his own particle. I thought that sooner or later someone else would point out the mistake anyway. But now there's so much talk of that particle at the LHC that I had to come out to prevent greater disappointment.

CERN theorist John Ellis commented:
This seems unbelievable, but the math is there on the blackboard... I'm afraid Higgs is right this time. The whole story demonstrates how important it is to independently verify scientific results rather than follow fashion. We have learned our lesson.

But is it not too late? If the Higgs particle is not out there, does this make the LHC, an accelerator worth 5 billion, a useless toy? CERN Director General Rolf Heuer carefully chooses his words:
One can never predict the course of scientific developments. Important discoveries may arise as side effects of the LHC program, as happened before with the World Wide Web.

But even in these grim circumstances there are some who see the glass as half full. As someone in the audience pointed out:
At least the Tevatron won't have it either...

Happy April Fools, everyone :-)