These days, cosmologists, astrophysicists and all that lot fill every nook and cranny of CERN TH. They also fill the seminar schedule with their deep dark matter talks. I have no choice but to make another dark entry in this blog. Out of the 10^6 seminars I've heard this week, I pick the one by Marco Cirelli about Minimal Dark Matter.
The common approach to dark matter is to obtain a candidate particle in a framework designed to solve some other problem of the standard model. The most studied example is the lightest neutralino in the MSSM. In this case, the dark matter particle is a by-product of a theory whose main motivation is to solve the hierarchy problem. This kind of attitude is perfectly understandable from a psychological point of view. By the same mechanism, a mobile phone sells better if it also plays mp3s, takes photographs and sings lullabies.
But after all, the only solid evidence for the existence of physics beyond the standard model is the observation of dark matter itself. Therefore it seems perfectly justified to construct extensions of the standard model with the sole objective of accommodating dark matter. Such an extension explains all current observations while avoiding the excess baggage of full-fledged theoretical frameworks like supersymmetry. This is the logic behind the model presented by Marco.
The model is not really minimal (adding just a scalar singlet would be more minimal), but it is simple enough and cute. Marco adds one scalar or one Dirac fermion to the standard model and assigns it a charge under SU(2)_L x U(1)_Y. The only new continuous parameter is the mass M of the new particle. In addition, there is a discrete choice of the representation. The obvious requirement is that the representation should contain an electrically neutral particle, which could play the role of the dark matter particle. According to the formula Q = T3 + Y, the options are an SU(2) doublet with hypercharge Y = 1/2, a triplet with Y = 0 or Y = 1, or larger multiplets.
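Just to make the counting of allowed representations concrete, here is a little enumeration of my own (not code from the paper) of the SU(2)_L multiplets that contain a neutral component, using nothing but Q = T3 + Y:

    # Enumerate hypercharges for which an SU(2)_L n-plet has an electrically
    # neutral component (illustrative sketch, not from the Minimal Dark Matter paper).
    from fractions import Fraction

    def neutral_hypercharges(n):
        """Hypercharges Y >= 0 for which an SU(2) n-plet has a Q = 0 component."""
        t3_values = [Fraction(n - 1, 2) - k for k in range(n)]
        # Q = T3 + Y = 0 for some weight T3  <=>  Y = -T3 for one of the weights
        return sorted({-t3 for t3 in t3_values if -t3 >= 0})

    for n in range(2, 6):
        print(n, [str(y) for y in neutral_hypercharges(n)])
    # doublet: Y = 1/2; triplet: Y = 0 or 1; quadruplet: Y = 1/2 or 3/2; quintuplet: Y = 0, 1 or 2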
Having chosen the representation, one can proceed to calculate the dark matter abundance. In the early universe, the dark matter particles thermalize due to their gauge interactions with the W and Z bosons. The final abundance depends on the annihilation cross section, which in turn depends on the unknown mass M and the well-known standard model gauge couplings. Thus, by comparing the calculated abundance with the observed one, we can fix the mass of the dark matter particle. Each representation requires a different mass to match the observations. For example, a fermion doublet requires M = 1 TeV, while a fermion quintuplet with Y = 0 requires M = 10 TeV.
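To see where the TeV scale comes from, here is a back-of-the-envelope sketch of the logic. This is my own crude estimate with a generic electroweak annihilation cross section; the real computation includes coannihilations and the corrections mentioned below, so only the order of magnitude should be trusted:

    # Crude sketch: the thermal relic abundance scales as
    # Omega h^2 ~ 3e-27 cm^3/s / <sigma v>, and for electroweak annihilation
    # <sigma v> ~ g^4 / (16 pi M^2), so matching the observed Omega h^2 ~ 0.1
    # points to M around a TeV.
    import math

    g2 = 0.65                      # SU(2)_L gauge coupling
    omega_h2_obs = 0.11            # observed dark matter abundance

    def omega_h2(M_GeV):
        sigma_v = g2**4 / (16 * math.pi * M_GeV**2)   # annihilation cross section in GeV^-2
        sigma_v_cm3s = sigma_v * 1.17e-17             # convert GeV^-2 * c to cm^3/s
        return 3e-27 / sigma_v_cm3s

    # scan for the mass that reproduces the observed abundance
    M = 100.0
    while omega_h2(M) < omega_h2_obs:
        M += 10.0
    print(f"M ~ {M/1000:.1f} TeV")   # lands in the TeV ballpark, as quoted above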
After matching to observations, the model has no free parameters and yields quite definite predictions. For example, the predicted direct detection cross sections are within reach of future experiments. The dark matter particle, together with its charged partners in the SU(2) multiplet, could also be discovered at colliders (if M is not heavier than a few TeV) or in cosmic rays. There are the usual indirect detection signals as well.
The model was originally introduced in a 2005 paper. The recent paper corrects the previous computation of dark matter abundance by including the Sommerfeld corrections.
Thursday, 12 July 2007
Thursday, 5 July 2007
Countdown to Planck
These days the CERN Theory Institute program is focused on the interplay between cosmology and LHC phenomenology. Throughout July you should expect an overrepresentation of cosmology on this blog. Last Wednesday, Julien Lesgourgues talked about the Planck satellite. Julien is worth listening to. First of all, because of his cute French accent. Also, because his talks are always clear and often damn interesting. Here is what he said this time.
Planck is a satellite experiment to measure the Cosmic Microwave Background. It is the next step after the successful COBE and WMAP missions. Although it looks like any modern vacuum cleaner, the instruments offer 2*10^(-6) resolution of temperature fluctuations (a factor of 10 better than WMAP) and 5' angular resolution (a factor of 3 better than WMAP). Thanks to that, Planck will be able to measure the angular correlations of the CMB temperature fluctuations more precisely, especially at higher multipoles (smaller angular scales). This is illustrated by this propaganda picture:
Even more dramatic is the improvement in measuring the CMB polarization. In this context, one splits the polarization into the E-mode and the B-mode (the divergence and the curl). The E-mode can be seeded by the scalar density perturbations that are responsible for at least half of the already observed amplitude of temperature fluctuations. At large angular scales, the E-mode has already been observed by WMAP. The B-mode, on the other hand, must originate from tensor perturbations, that is from gravitational waves in the early universe. These gravitational waves can be produced by inflation. Planck will measure the E-mode very precisely, while the B-mode is a challenge. Observing the latter requires quite some luck, since many models of inflation predict a B-mode well below the Planck sensitivity.
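In case the divergence/curl language sounds abstract, here is a loose vector-field analogy (schematic only, since the real decomposition acts on the spin-2 polarization tensor rather than on a vector field):

    % schematic analogy: E-mode = gradient-like (curl-free) part,
    % B-mode = curl-like (divergence-free) part of the polarization pattern
    \vec{P}(\hat{n}) \;\simeq\;
      \underbrace{\vec{\nabla}\,\psi_E}_{\text{E-mode}}
      \;+\;
      \underbrace{\vec{\nabla}\times\vec{\psi}_B}_{\text{B-mode}}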
Planck is often described as the ultimate CMB temperature measurement. That is because its angular resolution corresponds to the smallest scale at which temperature fluctuations of cosmological origin can exist at all. At scales smaller than 5' the cosmological imprint in the CMB is suppressed by the so-called Silk damping: 5' corresponds roughly to the photon mean free path in the early universe, so fluctuations at smaller scales get washed out. However, there is still room for future missions to improve the polarization measurements.
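To put the 5' in context, a quick back-of-the-envelope conversion between angular scale and multipole (my own rough numbers, assuming WMAP's resolution is about 15'):

    # A feature of angular size theta roughly corresponds to multipole l ~ pi / theta.
    import math

    def max_multipole(theta_arcmin):
        theta_rad = math.radians(theta_arcmin / 60.0)
        return math.pi / theta_rad

    print(f"WMAP  (~15'): l ~ {max_multipole(15):.0f}")
    print(f"Planck (~5'): l ~ {max_multipole(5):.0f}")
    # Planck reaches l ~ 2000, roughly three times further out than WMAP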
All these precision measurements will serve the noble cause of precision cosmology, that is, a precise determination of the cosmological parameters. Currently, the CMB and other data are well described by the Lambda-CDM model, which has become the standard model of cosmology. Lambda-CDM has 6 adjustable parameters. One is the Hubble constant. Two others are the cold (non-relativistic) dark matter and baryonic matter densities. In this model, matter is supplemented by a cosmological constant, so as to end up with a spatially flat universe. Another two parameters describe the spectrum of primordial perturbations (the scalar amplitude and the spectral index). The last one is the optical depth to reionization. Currently, we know these parameters with a remarkable 10% accuracy. Planck will further improve the accuracy, in most cases by a factor of 2-3.
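For concreteness, here are the six parameters with roughly the values favoured by current fits (illustrative numbers quoted from memory, not an official table), together with the flatness condition that fixes the cosmological constant:

    params = {
        "H0":            73.0,     # Hubble constant [km/s/Mpc]
        "omega_b h^2":   0.022,    # baryonic matter density
        "omega_cdm h^2": 0.105,    # cold dark matter density
        "A_s":           2.4e-9,   # scalar amplitude
        "n_s":           0.95,     # spectral index
        "tau":           0.09,     # optical depth to reionization
    }

    # Flatness is assumed rather than fitted: the cosmological constant takes up
    # whatever is left once matter is accounted for.
    h = params["H0"] / 100.0
    omega_m = (params["omega_b h^2"] + params["omega_cdm h^2"]) / h**2
    omega_lambda = 1.0 - omega_m
    print(f"Omega_m ~ {omega_m:.2f}, Omega_Lambda ~ {omega_lambda:.2f}")
    # roughly 1/4 matter and 3/4 cosmological constant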
Of course, Planck may find some deviations from the Lambda-CDM model. There exist, in fact, many reasonable extensions that do not require any exotic physics. For example, there may be the already mentioned tensor perturbations, non-Gaussianities, or a running of the spectral index, which are predictions of certain models of inflation. Planck could also find traces of a hot (relativistic) component of the dark matter. Such a contribution might come from neutrinos, if the sum of their masses is at least 0.2 eV. Furthermore, Planck will accurately test the spatial flatness assumption. The most exciting discovery would be to find that the equation of state of dark energy differs from w = -1 (the cosmological constant). This would point to some dynamical field as the agent responsible for the vacuum energy.
Finally, Planck will test models of inflation. Although it is unlikely that the measurement will favour one particular model, it may exclude large classes of models. Two parameters appear most interesting in this context. One is the spectral index nS. Inflation predicts small departures from the scale-invariant Harrison-Zeldovich spectrum corresponding to nS = 1. It would be nice to see this departure beyond all doubt, as it would further strengthen the inflation paradigm. The currently favoured value is nS = 0.95, three sigma away from 1. The other interesting parameter is the ratio r of tensor to scalar perturbations. The current limit is r < 0.5, while Planck is sensitive down to r = 0.1. If inflation takes place at energies close to the GUT scale, tensor perturbations might be produced at an observable rate. If nothing is observed, large-field inflation models will be disfavoured.
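The link between r and the inflation scale comes from the standard slow-roll relation V^(1/4) ~ (3 pi^2 A_s r / 2)^(1/4) M_Pl. A quick numerical check of my own, with only order-of-magnitude accuracy intended:

    import math

    A_s  = 2.4e-9        # scalar amplitude (approximate current value)
    M_Pl = 2.4e18        # reduced Planck mass in GeV

    def inflation_scale(r):
        # energy scale of inflation implied by a tensor-to-scalar ratio r
        return (1.5 * math.pi**2 * A_s * r) ** 0.25 * M_Pl

    for r in (0.5, 0.1, 0.01):
        print(f"r = {r}: V^(1/4) ~ {inflation_scale(r):.1e} GeV")
    # r ~ 0.1 corresponds to V^(1/4) ~ 2e16 GeV, i.e. the GUT scale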
Planck is going to launch in July 2008. This coincides with the first scheduled collisions at the LHC. Let's hope at least one of us will see something beyond the standard model.
No slides, as usual.
Sunday, 1 July 2007
Nima's Marmoset
Here is one more splinter of Nima Arkani-Hamed's CERN visit. Apart from a disappointing seminar for theorists, Nima gave another talk advertising his MARMOSET to a mostly experimental audience. OK, I know it was more than two weeks ago, but firstly it's summertime, and secondly, I'm still doing better with the schedule than the LHC.
MARMOSET is a new tool for reconstructing the fundamental theory from the LHC data. When you ask phenomenologists their opinion about MARMOSET, officially they just burst out laughing. Off the record, you can hear something like "...little smartass trying to teach us how to analyze data..." often followed by *!%&?#/ ^@+`@¦$. I cannot judge to what extent this attitude is justified. I guess it is partly a reaction to overselling of the product. To my hopelessly theoretical mind, the talk and the whole idea appeared quite interesting.
In the standard approach, the starting point for interpreting the data is a lagrangian describing the particles and their interactions. From the lagrangian, all the necessary parton-level amplitudes can be calculated. The result is fed into Monte Carlo simulations that convolute the amplitudes with the parton distribution functions, calculate the phase-space distributions and so on. At the end of this chain you get the signal plus the SM background, which you can compare with observations.
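To make the length of that chain explicit, here is a toy rendering in a few lines of Python; every function is an invented stub, no real generator is being called:

    # Toy sketch of the standard chain: lagrangian -> amplitudes -> PDFs -> signal + background.
    def parton_amplitudes(lagrangian):   return {"pp -> X X": lagrangian["g"] ** 4}
    def convolve_with_pdfs(amps):        return {proc: 0.1 * a for proc, a in amps.items()}   # toy "cross sections"
    def add_sm_background(signal):       return {proc: s + 1.0 for proc, s in signal.items()}

    lagrangian = {"g": 0.65, "M": 1000.0}      # plus, in practice, many parameters that never matter
    prediction = add_sm_background(convolve_with_pdfs(parton_amplitudes(lagrangian)))
    print(prediction)                          # only at the end of the chain can this be compared with data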
Nima pointed out several drawbacks of this approach. The connection between the lagrangian and the predicted signal is very obscure. Lagrangians typically have a large number of free parameters, of which only a few combinations affect the physical observables. Typically, the signal, e.g. a pT distribution, depends only weakly on the precise form of the amplitude. Moreover, at the dawn of the LHC era we have little idea which underlying theory and which lagrangian will turn out to be relevant. This is in strong contrast to the situation that has reigned for the last 30 years, when the discovered particles (the W and Z bosons, the top quark) were expected and the underlying lagrangian was known. Nima says this new situation requires new strategies.
Motivated by this, Nima & co came up earlier this year with a paper proposing an intermediate step between the lagrangian and the data. The new framework is called an On-Shell Effective Theory (OSET). The idea is to study physical processes using only the kinematic properties of the particles involved. Instead of a lagrangian, one specifies the masses, production cross sections and decay modes of the new particles. The amplitudes are parameterized by one or two shape variables. This simple parameterization is claimed to reproduce the essential phenomenology that could equally well be obtained from more complicated and more time-consuming simulations in the standard approach.
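Concretely, an OSET specification amounts to something like the following (a hypothetical example with made-up particles, numbers and field names, not MARMOSET's actual input format):

    # An OSET, as I understand it: just masses, production modes and decay chains,
    # with rates left as free parameters and no lagrangian anywhere.
    oset = {
        "particles": {
            "Q":   {"mass_GeV": 600.0},     # a new coloured state, pair-produced
            "chi": {"mass_GeV": 100.0},     # a neutral state escaping as missing energy
        },
        "production": ["p p -> Q Q"],       # cross section: free parameter, fitted to data
        "decays":     {"Q": ["jet chi"]},   # branching ratios: free parameters, fitted to data
        # amplitudes are parameterized by one or two shape variables
        # instead of being computed from Feynman rules
        "shape_variables": ["flat", "linear in m^2(jet, chi)"],
    }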
MARMOSET is a package that allows OSET-based Monte Carlo simulation of physical processes. As input it requires just the new particles plus their production and decay modes. Based on this, it generates all possible event topologies and scans the OSET parameters, like production and decay rates, to fit the data. Failure implies the need to add new particles or new decay channels. In this recursive fashion one can extract the essential features of the underlying fundamental theory, as sketched below.
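The recursive workflow, as I understand it, boils down to a loop like this (all helpers are trivial invented stubs; this is my paraphrase, not the package's real interface):

    # Toy version of the recursive fit: enlarge the particle content until the OSET fits.
    def enumerate_topologies(particles):        return [f"p p -> {p} {p} -> ..." for p in particles]
    def fit_oset_parameters(topologies, data):  return {"chi2_per_dof": 3.0 / len(topologies)}   # toy "fit quality"
    def next_hypothesis(particles):             return particles + [f"X{len(particles)}"]

    data, particles = "observed excess", ["X0"]
    while fit_oset_parameters(enumerate_topologies(particles), data)["chi2_per_dof"] > 1.0:
        particles = next_hypothesis(particles)  # failure => add a particle or a decay channel
    print(particles)                            # minimal content reproducing the broad features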
This sounds very simple. So far, the method has been applied under greenhouse conditions to analyze the "black boxes" prepared for the LHC Olympics. Can it be useful when it comes to real data? Professionals say that MARMOSET does not offer anything they could not, if necessary, implement within half an hour. On the other hand, it looks like a useful tool for laymen. If a clear signal is discovered at the LHC, the package could provide a quick check of whether your favourite theory is able to reproduce the broad features of the signal. Convince me if I'm wrong... Anyway, we'll see in two years.
The video recording is available here.