Wednesday, 19 December 2007
Winter Sleep
CERN will soon fall into a winter sleep officially called the annual closure. In this period the body's activity is reduced to a minimum. The personnel are sent on a forced leave of absence; only a few scattered die-hards remain on site. There is no life as we know it: the cafeteria is closed, heating is switched off and coffee machines are removed for fear of looters. Sadly, this implies there is nothing to blog about. CERN will come back to life on January 7; I may reopen before that.
TH goes to Broadway
Since time immemorial, the TH Christmas play has marked the end of the year at CERN. Typically, the script comments on the important events of the past year. The audience loves the show for its Pythonesque humor, ruthless satire and explicit content. Previous plays created undeniable stars: Georg Weiglein has been known as Shrek ever since, while Ben Allanach's Borat is legendary, especially among the female part of the CERN staff. The late DG, after seeing the play staged in his honour the other day, ended up convinced that the Theory Group is expendable and cut down on our budget.
This year, the play showed a very inside story of the recent DG election. The story begins with Snow Pauss and Eight Dwarves competing in a beauty contest. Three dwarves get most applause:
- "Rolfy" Heuer (played by John Ellis) who promises to give all the CERN geld to the ILC
- "Hairy" Ellis (played by no-matter-who, since the beard covers him all) who promises to cut his hair
- "Roby" Petronzio, who at this stage appears irrelevant, but who will play the key role in the following
Update: Photos now available. Here is the scene with the candidates comparing the sizes of their accelerators:
More photos here and here. The uncensored video recording can be downloaded from here.
Friday, 7 December 2007
Bringing Flavour to Life
This week, CERN TH hosted a workshop entitled Interplay of Collider and Flavour Physics. Flavour physics is concerned with processes involving transitions between different generations of the Standard Model fermions. In recent years, flavour physics has seen considerable experimental progress, prompted mainly by the results from the B-factories. However, in spite of great expectations, these experiments brought nothing but disappointment, despair and loss. In short, all the observed phenomena can be well described by the vulgar flavour structure encoded in the CKM matrix of the Standard Model. This means that new physics, if it exists at the TeV scale, must be highly non-generic; to a good approximation it must be either flavour blind or its flavour structure should be correlated with that of the Standard Model.
90% of the theory talks in the workshop dealt with the MSSM, which is kinda funny given that the MSSM has nothing to say about the origin of flavour. Luckily enough, Christophe Grojean (on behalf of the local tank crew) reviewed another approach to TeV scale physics in which flavour is built in. This approach is commonly referred to as the RS scenario (where RS stands for Randall-Sundrum) and its modern incarnation is pictured above. The fermion mass hierarchies in the Standard Model follow from different localizations of the fermions in the fifth dimension. Light fermions are localized close to the UV brane and have little overlap with the Higgs boson, which lives close to the IR brane. The third generation (or at least the top quark) lives close to the IR brane, thus interacting more strongly with the Higgs. In this way, the fermion mass hierarchies of the Standard Model can be reproduced with completely anarchic Yukawa couplings between the Higgs and the fermions.
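For the record, the anarchic relation behind this statement reads, schematically (in the notation usually found in the literature rather than anything specific to the talk),
$m_{ij} \sim \frac{v}{\sqrt{2}}\, \lambda_{ij}\, f_{Q_i} f_{u_j},$
where the $\lambda_{ij}$ are anarchic order-one 5D Yukawa couplings and the $f$'s measure the overlap of the fermion zero modes with the IR brane. Order-one differences in the 5D mass parameters translate into exponentially different $f$'s, hence the large hierarchies.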
The RS scenario predicts new physics states, for example the Kaluza-Klein modes of the gluons, with masses not much larger than a TeV, as required by naturalness. If the couplings of these KK gluons to the three Standard Model generations were completely random, the KK gluons would mediate flavour violating processes at a rate incompatible with experiment. The situation, however, looks much better. It turns out that the same mechanism that produces the fermion mass hierarchies also aligns the couplings of the KK gluons in such a way that flavour violating processes are suppressed. Most of the flavour violating processes predicted by the RS scenario are within experimental bounds if the mass of the lightest KK gluon is larger than 4 TeV (the same bound follows from electroweak precision tests). The exception is the parameter $\epsilon_K$ describing CP violation in the kaon system, which comes out a factor of a few too large for a 4 TeV KK gluon. This means that the Yukawa couplings should not be too anarchic in the end and we need additional flavour symmetries... or maybe this is just an accident and the problem will go away in a different incarnation of the RS scenario. Future experiments (not only the LHC, but also super-B-factories, or MEG and PRIME) should give us more hints.
Here you can find the slides from Christophe's talk, and here those from the other talks in the workshop. In 2008 CERN will organize the TH institute Flavour as a Window to New Physics at the LHC to bring even more flavour to life here at CERN.
Saturday, 1 December 2007
Auger, Centaurus and Virgo
I wrote that there was no interesting seminar last week, but I should mention the interesting pre-seminar discussion before the Wednesday Cosmo Coffee. The authors of this article were yelling at the speaker of that seminar. The latter is a member of the Auger collaboration, while the former have just submitted a comment to arXiv in which they put in doubt some of the conclusions obtained by Auger.
The new results from the Pierre Auger Observatory were announced a month ago (see also the post on Backreaction). Auger looked at cosmic rays with ultra-high energies, above $6\cdot 10^{19}$ eV. By theoretical arguments, 90% of such high-energy particles should originate from sources less than 200 Mpc away. This is because of the GZK cut-off: ultra-high energy particles scatter on the CMB photons, thus losing their energy. Auger claimed to observe a correlation between the arrival directions of the ultra-high energy cosmic rays and the positions of nearby Active Galactic Nuclei (AGN). This would mean that we have finally pinpointed who shoots these tennis-ball-energy particles at us.
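As a sanity check (my own back-of-the-envelope estimate, not part of the Auger analysis), the GZK threshold follows from requiring that a proton hitting a typical CMB photon head-on has enough centre-of-mass energy to produce a pion:

# Back-of-the-envelope GZK threshold: p + gamma_CMB -> Delta -> N + pi.
# Head-on collision with a photon of the mean CMB energy; everything in eV.
m_p   = 938.3e6     # proton mass
m_pi  = 135.0e6     # neutral pion mass
E_cmb = 6.3e-4      # mean CMB photon energy at T = 2.7 K

# Threshold condition: s = (m_p + m_pi)^2, with s ~ m_p^2 + 4*E_p*E_cmb head-on
E_th = m_pi * (m_p + m_pi / 2) / (2 * E_cmb)
print("GZK threshold ~ %.1e eV" % E_th)   # ~1e20 eV

The result lands around $10^{20}$ eV, the same ballpark as the $6\cdot 10^{19}$ eV quoted above once the full photon spectrum and scattering angles are taken into account.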
In the aforementioned comment, Igor Tkachev and his team point out that Auger did not take into account in their statistical analysis the $1/r^2$ decrease of the flux with the distance to the observer. The claim is that, once this decrease is taken into account, the AGN hypothesis is disfavored at the 99% confidence level. The problem is illustrated in the figure to the right. The crosses mark the positions of the nearby AGN, the color shades indicate the expected flux of the ultra-high energy cosmic rays (if the AGN hypothesis is true) and the circles denote the hits registered by Auger. One can see that most of the ultra-high energy particles arrive from the direction of the Centaurus supercluster, while none arrive from the Virgo supercluster. The latter is in fact closer to us, and we would expect at least as many hits from that direction. According to the authors of the comment, the more likely hypothesis is that there is a bright source of cosmic rays lying in the direction of the Centaurus supercluster.
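To see why the $1/r^2$ factor matters, here is a toy comparison of two equally bright sources at distances roughly corresponding to Virgo and Centaurus (the distances are my own rough numbers, purely for illustration):

# Toy illustration of the 1/r^2 weighting: two sources of equal intrinsic
# luminosity at different distances. Distances are rough, illustrative values.
r_virgo     = 20.0   # Mpc, roughly the Virgo supercluster
r_centaurus = 60.0   # Mpc, roughly the Centaurus supercluster

flux_ratio = (r_centaurus / r_virgo) ** 2
print("Expected hits from Virgo vs Centaurus ~ %.0f : 1" % flux_ratio)

With equal intrinsic luminosities the closer structure should win by an order of magnitude, which is roughly the mismatch the comment is pointing at.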
It seems that we have to wait for more data to finally resolve the cosmic ray puzzle. In the meantime, here is a local map I found while preparing this post. Just in case you need to find your way home...
Some news from string inflation
There were no exciting seminars last week here at CERN. But, since I promised to resume blogging, here I am with this brief report from the Cosmo Coffee last Wednesday. Cliff Burgess was talking about a new string inflation model that is not around yet and that is not of his making - his friends Conlon-Kallosh-Linde-Quevedo are going to fire it off soon. The inflationary model itself does not violate causality, however. It is realized in this string theory model, which is a variant of KKLT, but it assumes different values of the parameters and has a different low-energy phenomenology. The model has a metastable vacuum at large volume of the internal Calabi-Yau space, which arises when stringy loop corrections are taken into account.
The new inflationary model is christened volume inflation, because the volume of the Calabi-Yau is the modulus that assumes the role of the inflaton. Inflation happens when the volume is relatively small. This means that the string scale and the Planck scale are close to each other, which is favorable for obtaining large enough density fluctuations. After inflation ends, the volume rolls down to our metastable vacuum where the cosmological constant is small and where the string and Planck scales are separated by several orders of magnitude. The model incorporates a mechanism for converting the energy of the inflaton into radiation, which helps to avoid the overshooting problem and delivers us from the evil of a ten-dimensional vacuum.
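The connection between the volume and the scales is the usual dimensional reduction relation (schematically, ignoring factors of the string coupling):
$M_p^2 \sim M_s^2\, \mathcal{V},$
where $\mathcal{V}$ is the Calabi-Yau volume in string units. A small $\mathcal{V}$ during inflation pushes the string scale up towards the Planck scale, while the exponentially large volume of the final vacuum separates the two by many orders of magnitude.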
My problem with this string inflation model is that, as Cliff himself admitted, there is hardly any string inflation here. Inflation operates below the string and compactification scales, so that the 4D field theory approximation applies. At the end of the day, it is just standard inflation in field theory, with some building blocks motivated by currently popular string models. The string origin does not produce any smoking-gun imprint. Nor does it give any insight into the conceptual problems of inflation - the vacuum energy problem and the transplanckian problem.
Well, I know that inflation in field theory works fine, if we turn a blind eye to some conceptual problems and allow for some fine-tuning. So, if you want to impress me, try working on string inflation instead. The only attempt I know of in which the ideas of string theory are crucial is the string gas cosmology approach. But there could be more...
Saturday, 24 November 2007
Weeds
Weeds are growing on my blog. I haven't even looked at it for the last three weeks. Sorry, I was on tour, performing gigs in some murky East Coast clubs. I'm not an extreme blogger and I need peace and quiet to write. Anyway, interesting things happen only here at CERN, at the centre of the world. I'll resume next week. Till then.
Tuesday, 30 October 2007
Holographic Baryons
Last week Deog Ki Hong was explaining how baryons can be realized in holographic QCD. Holographic QCD is a new sport discipline that consists in modelling the symmetries and dynamics of strongly coupled QCD using weakly coupled theories in more than four dimensions. This approach is inspired by the AdS/CFT conjecture that links N=4 superconformal gauge theories with a large number of colours and large 't Hooft coupling to higher dimensional supergravity. QCD, however, is neither supersymmetric nor conformal and it is unclear whether a holographic dual exists. In fact, one can argue that it does not. Nevertheless, some bottom-up, phenomenological constructions have turned out to be quite successful, against all odds.
There are two roads that lead to holographic QCD. That of Sakai and Sugimoto, rooted in string theory, uses the language of D8-branes embedded in a D4-brane background. A more pedestrian approach takes its origin from the paper of Erlich et al., who skip the stringy preamble and exploit 5D gauge theories in curved backgrounds. The global chiral symmetry of QCD - $U(2)_L \times U(2)_R$ (or $U(3) \times U(3)$ if we wish to accommodate strangeness) - is promoted to a local symmetry group in 5D. In addition, the 5D set-up includes a bifundamental scalar field with a vacuum expectation value. The Higgs mechanism breaks the local symmetry group down to the diagonal $U(2)_V$, which mimics chiral symmetry breaking by quark condensates in QCD.
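For reference, the action of this bottom-up model (as I remember it from the Erlich et al. paper, so take the factors with a grain of salt) is roughly
$S = \int d^5x \sqrt{g}\, {\rm Tr}\left[ |D X|^2 + 3 |X|^2 - \frac{1}{4 g_5^2}\left(F_L^2 + F_R^2\right)\right],$
defined on a slice of AdS5 cut off in the IR, with the bifundamental $X$ acquiring a background value $X_0 \sim m_q z + \sigma z^3$ that encodes the quark masses and the condensate.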
So far, most studies have focused on the meson sector. Spin-1 mesons (like the rho meson) are identified with the Kaluza-Klein modes of the 5D gauge fields. The spin-0 pions are provided by the fifth components of the 5D gauge fields (mixed with pseudoscalars from the Higgs field). Employing the usual methods of higher dimensional theories, one can integrate out all the heavy Kaluza-Klein modes to obtain a low-energy effective theory for the pions. The result can be compared with the so-called chiral lagrangian - the effective theory of low-energy QCD that is used to describe pions and their interactions. Coefficients of the lowest-order operators in the chiral lagrangian have been measured in experiment. Holographic QCD predicts the values of (some combinations of) these coefficients, and the results agree with observations. Furthermore, holographic QCD predicts various form factors of the vector mesons that have also been measured in experiment. Again, there is reasonable agreement with observations. The accuracy is comparable to that achieved in certain 4D approximate models based on large N QCD. All in all, a rather simplistic model provides quite an accurate description of low-lying mesons in low energy QCD.
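For orientation, the lowest-order chiral lagrangian that the 5D model is matched onto is the standard one,
$\mathcal{L}_2 = \frac{f_\pi^2}{4} {\rm Tr}\left[\partial_\mu U \partial^\mu U^\dagger\right], \qquad U = e^{2 i \pi^a T^a/f_\pi},$
and the measured coefficients mentioned above are those of the next-order, $O(p^4)$ operators (the Gasser-Leutwyler $L_i$'s).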
Baryons are more tricky. In the string picture, they are represented by D5-branes wrapping S5, which sounds scary. In the 5D field theory picture, they are identified with instanton solitons - still somewhat frightening. But it turns out that these instantons can be effectively described by a pair of 5D spinor fields. Now, the study of fermions in a 5D curved background is a piece of cake and has been done many times in different contexts. The original instanton picture together with the AdS/CFT dictionary puts some constraints on the fermionic lagrangian (the 5D spinor mass and the Pauli term).
With this simple model at hand, one can repeat the same game that was played with mesons: look at the low-energy effective theory, compare it with the chiral lagrangian predictions and cry with joy. There are two points from Deog Ki's talk that seem particularly interesting. One is the anomalous magnetic moment of baryons. Holographic QCD predicts that those of the proton and the neutron should sum up to zero. In reality, $\mu_p = 1.79 \mu_N$ and $\mu_n = -1.91 \mu_N$, where $\mu_N = e/2 m_N$ is the nuclear magneton. The other interesting point concerns electric dipole moments. In the holographic model the electric dipole moment of the neutron can be simply connected to the CP-violating theta angle in QCD (something that seems messy and unintuitive in other approaches). There is another sum rule stating that the electric dipole moments of the proton and the neutron should sum up to zero.
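Numerically, the magnetic moment sum rule does not do badly:
$\mu_p + \mu_n = (1.79 - 1.91)\,\mu_N = -0.12\,\mu_N,$
which is small compared to the individual moments, so the holographic prediction of zero is off by less than ten per cent of either of them.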
In summary, simple 5D models yield surprisingly realistic results. Of course, more conventional approaches to QCD may achieve a similar or better level of precision. The drawback is that the holographic approach has no rigorous connection to QCD, so that it is not clear what the applicability range is and when we should expect the model to fail. Nevertheless, the 5D approach provides a simple and intuitive picture of low-energy QCD phenomena. The experience that is gained could also prove useful in case we stumble upon some new strong interactions at the LHC.
Although technological consciousness at CERN TH is clearly improving, some convenors have not yet discovered the blessings of modern means of communication. Translated into English: slides from this talk are not available. Here you can find partly overlapping slides from a conference talk. If you long for more details, check out these papers.
Tuesday, 23 October 2007
Higgs and Beyond
Every year in autumn, physicists follow atavistic instincts and spam the arXiv with conference proceedings. While book proceedings may be of some use as a paperweight, the ones posted on the arXiv are typically cut&paste from existing papers, awkwardly clipped to fit the page limit. From time to time, however, one may stumble upon a nice, concise review. I recommend the recent short article by Gian Giudice about theories of electroweak symmetry breaking. The article gives a pretty accurate picture of the current state of the art in the field; it covers everything that deserves being mentioned plus a few things that do not.
As an appetizer, I review here one interesting and not so well known point raised by Gian. Everybody knows that the LEP and SLD precision measurements hint at a light higgs boson and constrain the standard model higgs boson mass to a quite narrow range. The best fit value, $76_{-24}^{+33}$ GeV, is a little disturbing, given the direct search limit of 115 GeV, but it still lies comfortably within the 2-sigma limits. The situation is, however, more involved, as explained below.
The plot shows the higgs boson mass as inferred from individual measurements. The two most sensitive probes, the leptonic left-right asymmetry and the bottom forward-backward asymmetry, do not really agree. The former points to a very light higgs boson, $31_{-19}^{+33}$ GeV, already excluded by LEP, while the latter suggests a heavy higgs, $420_{-190}^{+420}$ GeV. Only when we combine these two partially incompatible measurements in an overall fit of the standard model parameters do we get an estimate that is roughly compatible with the direct search limit. The bottom asymmetry is often considered a mote in the eye, as this is the only LEP measurement that is more than 3 sigma away from the standard model prediction. However, if we decided that this measurement suffers from some systematic error and removed it from the fit, we would conclude that the standard model is almost excluded by the direct higgs searches. That's ironic.
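To get a feeling for how two incompatible numbers get averaged, here is a crude toy combination of just these two measurements, done with symmetrized errors in $\log m_h$. The real fit includes many more observables ($m_W$, $m_t$ and friends), which is why it lands lower than this toy does.

# Crude inverse-variance combination of the two higgs mass estimates quoted
# above, treated as Gaussian in log10(m_h) with symmetrized errors.
# Illustration only: the official fit includes many other observables
# and quotes 76 GeV.
from math import log10, sqrt

def to_log(central, err_plus, err_minus):
    c = log10(central)
    sigma = 0.5 * (log10(central + err_plus) - log10(central - err_minus))
    return c, sigma

measurements = [to_log(31.0, 33.0, 19.0),     # leptonic left-right asymmetry
                to_log(420.0, 420.0, 190.0)]  # bottom forward-backward asymmetry

weights = [1.0 / s**2 for _, s in measurements]
mean = sum(w * c for w, (c, _) in zip(weights, measurements)) / sum(weights)
err = 1.0 / sqrt(sum(weights))
print("toy combination: m_h ~ %.0f GeV (times/divide %.1f)" % (10**mean, 10**err))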
This tension raises some hopes that the standard model higgs boson is not the whole story and that some new physics will emerge at the LHC. For a critical review of the alternatives, read the article.
Monday, 22 October 2007
Habemus DG
Here at CERN, the Director General is as important a figure as the Pope for Catholics or Papa Smurf for small forest creatures. He rules the greatest laboratory on Earth with an iron hand, while diplomatic immunity allows him to avoid parking tickets in Geneva. The current DG, Robert Aymar, is supposed to step down at the end of 2008. Last week, the CERN conclave rolled dice to decide who will take over the post for the following 5 years. The lot fell on Rolf-Dieter Heuer, currently one of the DESY directors. By the end of this year, he should officially become DG elect.
The good news is that the nominee is a physicist, which might prove useful in the LHC days... It's also good news for ILC supporters, probably less good for those who bet on CLIC. It's not good news for me, as my April Fools prank will remain an April Fools prank :-( . By 7:10.
Thursday, 18 October 2007
CLICK
These days CERN hosts the CLIC workshop. CLIC is the famous Italian porno comic book, and also the name of a future linear collider being developed here at CERN. Somewhat disappointingly, the workshop is more focused on the latter. Most of the talks report on very hard-core R&D, but there is something for a wider audience too. A nice wrap-up of the physics prospects was delivered yesterday by John Ellis.
If the technology turns out feasible (which should be concluded by the end of the decade), the machine is planned for the year two thousand twenty something. It will collide electrons with positrons at 3-5 TeV center-of-mass energies. This is not a big energy gain as compared to the LHC, but the much cleaner environment of a lepton collider will open up many new opportunities.
A light Standard Model-like higgs boson will be pinned down at the LHC, but a precise study of its properties must wait for a new linear collider. CLIC seems perfectly suited for this. The dominant production mode at a lepton collider is W fusion, whose cross section strongly increases with energy (see the plot). Various rare higgs decays may be observed and the higgs couplings can be determined quite accurately. For example, the coupling to muons will be determined at the 2% level, while that to bottom quarks at the 4% level. This will be a good test of the Standard Model predictions. Also, the higgs self-couplings can be measured; for example, the triple coupling can be determined with 10% precision.
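The reason W fusion wins at CLIC energies is the familiar logarithmic growth of its cross section, to be contrasted with Higgsstrahlung (schematically, with all constants dropped):
$\sigma(e^+e^- \to \nu\bar\nu h) \propto \frac{1}{m_W^2}\,\ln\frac{s}{m_h^2}, \qquad \sigma(e^+e^- \to Z h) \propto \frac{1}{s}.$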
If the higgs is not found at the LHC, CLIC remains useful. It will be able to measure WW scattering precisely (something that is very tough at the LHC) and determine once and for all whether electroweak symmetry breaking is weakly or strongly coupled. Unfortunately, John did not talk about this and only a slight mention is made in the yellow report.
Obviously, if there is some new physics at the TeV scale, CLIC will be able to explore it. Whether we encounter extra dimensions, the little higgs or John's favourite supersymmetry, CLIC will measure the masses and couplings of the new particles. Have a look at the slides for a comparison of the CLIC and ILC performances in several supersymmetric models. CLIC is indispensable if the new particles have TeV or higher masses.
Just like LEP, CLIC will be able to indirectly probe physics up to scales much higher than its center of mass energy. This can be done by searching for the effects of four-fermion interactions in the process of e+e- annihilating into muons. Such four-fermion interactions would appear as an effect of heavy virtual particles and they are suppressed by the mass scale of these heavy particles. CLIC will be able to probe these operators up to scales of a few hundred TeV. In the nightmare scenario - only the Standard Model + the higgs boson found at the LHC - CLIC may tell us if there is some new physics within the reach of the next, more powerful machine. In that case, however, the CLIC performance is not terribly better than that of the ILC, as shown in the plot. CLIC people had better pray for new physics at the LHC.
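For concreteness, the kind of operator being probed is, in the conventional normalization used in such analyses,
$\mathcal{L}_{eff} \supset \frac{4\pi}{\Lambda^2}\,(\bar e \gamma^\mu e)(\bar \mu \gamma_\mu \mu),$
whose interference with the Standard Model amplitude shifts the cross section by a relative amount growing like $s/\Lambda^2$. That is how a per-mille measurement at multi-TeV energies can be sensitive to scales $\Lambda$ far above the collision energy itself.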
Slides are available via the workshop page. There you also find video recordings of several other talks at the workshop. For those well-motivated, here is the yellow report.
Saturday, 13 October 2007
Football @ CERN
In autumn, otherwise important issues like PiMs or inner threesome magnets cease to attract any attention here at CERN. The focus is all on football. More precisely, the CERN indoor football tournament. This year is very special because one of the participating teams consists almost entirely of CERN theorists (since technical problems may sometimes arise, e.g. tying shoelaces, the team includes one experimentalist). The team plays under the name ThC, which apparently stands for Theory Club. You probably imagine particle theorists as an awkward lot of short-sighted geeks who trip over their own legs. While this naive picture is correct in 95.4% of cases, the CERN theory group is large enough to have some reasonable players in the Gaussian tail. ThC played their first match last week and did not lose, which is already a better result than any theory team has achieved in CERN's history. The rumour is that if ThC continues to impress, they might choose a theorist as the new DG.
Monday, 8 October 2007
Buffalo Conspiracy
By pure chance I have made an amazing discovery. During my Sunday walk near LHC point 7 I spotted THIS
Yes, these are buffalos quietly grazing on a pasture above the LHC ring. You would think that buffalos in the Geneva area should be a rare sight. I thought so too at first, but then I realized that they can also be found at Fermilab. And in an instant flash it all became clear.
The only logical explanation is that both the LHC and the Tevatron have been designed by buffalos, who are in reality the second most intelligent species on Earth (after dolphins). The hadron collider installations must serve some important purpose that is evident only to higher beings. To keep us humans motivated, the buffalos made up the hierarchy problem, supersymmetry, extra dimensions and string theory. The mystery to solve is what is THE question the LHC is supposed to answer. Once I find out, I'll let you know.
Friday, 5 October 2007
Degravitation
Last time I scolded the speaker for giving an utterly unattractive seminar title. Degravitation - which is the title of Gia Dvali's talk two weeks ago - is on the other hand very catchy and will certainly attract many Roswell aficionados to my blog. But this post, I'm afraid, is not about classified experiments with gravity performed here at CERN but about a new interesting approach to solving the cosmological constant problem. Gia is going to be around for some time, so you may expect more posts with weird titles in future.
The cosmological constant problem is usually phrased as the question why the vacuum energy is so small. Formulated that way, it is very hard to solve, given large existing contributions (zero-point oscillations, vacuum condensates) and vicious no-go theorems set up by Weinberg. The problem has ruined many lives and transformed some weaker spirits into anthropic believers. Gia does not give up and attempts to tackle the problem from a different angle. He tries to construct a theory where the vacuum energy may be large but it does not induce large effects on the gravitational field. This is of course impossible in Einstein gravity where all forms of energy gravitate. The idea can be realized, however, in certain modified gravity theories.
Gia pursues theories where gravity is strongly modified at large distances, above some distance scale L usually assumed to be of similar size as the observable universe. The idea is to modify the equations of gravity so as to filter out sources whose characteristic length is larger than L. The gravitational field would then ignore the existence of a cosmological constant, which uniformly fills the entire universe.
On a slightly more formal level, Gia advocates a quite general approach where the equations for the gravitational fields can be written as
$\left(1 - \frac{m^2(p^2)}{p^2}\right) G_{\mu\nu} = \frac{1}{2} T_{\mu\nu}$
where, as usual, $G$ is the Einstein tensor and $T$ is the energy-momentum tensor. Deviations from the Einstein theory are parameterized by $m^2(p^2)$, which is a function of momentum (or a function of derivatives in the position-space picture). For $m^2=0$, the familiar Einstein equations are recovered. The effects of $m^2$ set in at large distance scales.
At low momenta/large distances one assumes $m^2 \sim L^{-2} (p^2 L^2)^\alpha$ with $0 \leq \alpha < 1$. The case $\alpha = 0$ corresponds to adding a graviton mass, while the case $\alpha = 1/2$ corresponds to a certain 5D framework called the DGP model (where Gia is the D). In fact, the latter case is the only one for which the full, non-linear, generally covariant completion is known. Other values of $\alpha$ may or may not correspond to a sensible non-linear theory.
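A quick way to see the filtering at work is to plug this form of $m^2$ into the equation above and look at the effective strength of the response to a source. The toy numbers below are mine, and signs and tensor structure are deliberately ignored; only the momentum dependence is tracked.

# Toy look at the degravitation filter F(p) = 1 - m^2(p^2)/p^2,
# with m^2(p^2) = L^-2 (p^2 L^2)^alpha, i.e. m^2/p^2 = (p*L)^(2*alpha - 2).
# The effective response to a source scales like 1/F; signs and tensor
# structure are ignored here.
def response(pL, alpha):
    return 1.0 / abs(1.0 - pL ** (2 * alpha - 2))

for alpha in (0.0, 0.5):
    for pL in (1e-3, 1e-1, 1e1, 1e3):
        print("alpha=%.1f  pL=%7.0e  G_eff/G_N ~ %.3g" % (alpha, pL, response(pL, alpha)))

# Long-wavelength sources (pL << 1), like a cosmological constant, are
# strongly suppressed; for pL >> 1 the factor approaches 1 and Einstein
# gravity is recovered.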
Gia argues that any consistent theory effectively described by this kind of filter equations has to be a theory of a massive or resonance graviton. This means that the graviton propagates 5 degrees of freedom and not 2 as in the Einstein theory. In addition to 2 tensor polarizations, there are 2 vector and 1 scalar polarization. The additional polarizations also couple to massive sources and their exchange contributes to the gravitational potential.
Everybody who has ever played with modified gravity knows well that Einstein gravity reacts hysterically to all manipulations and often breaks down. In the present case what happens is that, once the theory is extended beyond the linear approximation, the scalar polarization gets strongly coupled far below the Planck scale. But Gia argues that one can live with it and that, in fact, the strong coupling saves the theory. It has been known for ages that massive gravity suffers from the so-called van Dam-Veltman discontinuity: the potential between two sources is different than in Einstein gravity, even in the zero-mass limit. The culprit is precisely the scalar polarization. The predictions of massive gravity are at odds with precise tests of gravity, for example with observations of the light-bending by the Sun. These predictions, however, are derived using the linear approximation, which breaks down near massive sources. Gia argues that the effect of the strong coupling is to suppress the scalar polarization exchange near massive sources, so that there is no contradiction with experiment.
So the picture of the gravitational field around a massive source in massive or resonance gravity is more complex, as shown to the right. Apart from the Schwarzschild radius, there are two other scales. One is the scale L above which gravity shuts off. The other is the r* scale where the scalar polarization gets strongly coupled. At scales larger than r* we have a sort of scalar-tensor gravity that differs from Einstein gravity. At scales shorter than r* Einstein gravity is approximately recovered, up to small corrections. Gia estimates that these latter corrections could be measured in the future by the lunar laser ranging experiment if $\alpha$ is of order 1/2.
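For the Sun, assuming the DGP-like scaling $r_* \sim (r_g L^2)^{1/3}$ (my recollection for the $\alpha = 1/2$ case, not a number from the talk), the strong-coupling radius comes out vastly larger than the solar system:

# Rough estimate of the r* scale for the Sun in the alpha = 1/2 (DGP-like)
# case, assuming r* ~ (r_g * L^2)^(1/3); all numbers are rough.
r_g = 3.0e3        # Schwarzschild radius of the Sun, metres
L   = 4.0e26       # crossover scale, taken of order the Hubble radius, metres
pc  = 3.086e16     # one parsec in metres

r_star = (r_g * L ** 2) ** (1.0 / 3.0)
print("r* ~ %.1e m ~ %.0f pc" % (r_star, r_star / pc))   # a few hundred parsecs

so within the solar system we sit deep in the regime where Einstein gravity is approximately recovered, with only the tiny corrections that lunar laser ranging could hope to detect.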
Coming back to the cosmological constant problem, the analysis is complicated and depends on the non-linear completion of the theory. Gia's analysis shows that this class of theories can indeed degravitate the cosmological constant when $\alpha < 1/2$. I'm not sure if this conclusion is bulletproof since it is derived in a special limit where the equations for the tensor and scalar polarizations decouple. What is certain is that the complete non-linear DGP model (corresponding to $\alpha = 1/2$) does not enjoy the mechanism of degravitation. The hope is that theories with $\alpha < 1/2$ do exist and that a full non-linear analysis will demonstrate one day that the cosmological constant problem is solved.
Slides available here. The paper has been out for 6 months now. It is also worth looking at the previous paper of Gia, where the strong coupling phenomenon is discussed at more length. Try also to google degravitation and see what amazing paths the human mind may wander down.
Monday, 24 September 2007
About LHC Progress
There are rumours appearing here and there about further imminent delays of the LHC start-up. As for the facts, two weeks ago Lyn Evans, the LHC project leader, gave a colloquium about the current status of the LHC. He did not mention any delays, but he described in great detail the efforts currently being undertaken and the problems that have emerged. The video recording of this talk is available here. In particular, at 52:35 Lyn devotes some time to the plug-in modules between the magnet interconnects, whose faults spawned the recent rumours. While I personally don't understand why they can't tie up the magnets with strings (or superstrings in the case of superconducting magnets), I guess those more experimentally oriented may get some insight. Anyway, experts advise not to get excited about this particular problem; there will be many others. As Lyn himself put it, at this point we are far less deterministic.
Saturday, 22 September 2007
Deep Throats and Phase Transitions
Both of my readers have expressed their concern about my lack of activity in the last weeks. OK, let's say I wasn't in the mood and go back to business.
Last week John March-Russell gave a seminar entitled Throats with Faster Holographic Phase Transitions. A title like that sounds like an encouragement to stay for another coffee in the cafeteria. This time, however, the first intuition would be wrong, as behind this awkward title hides an interesting and less studied piece of physics.
The story is about the Randall-Sundrum model (RS1, to be precise): a five-dimensional theory in an approximately AdS5 space cut off by the Planck and TeV branes. The question is what happens to this set-up at high temperatures. There is a point of view from which the high-temperature phase can be simply understood. Here at CERN the local folks believe that the Randall-Sundrum set-up is a dual description of a normal (though strongly coupled) gauge theory in four dimensions. Therefore at high temperatures such phenomena as deconfinement and the emergence of a gluon plasma should be expected. How does this phase transition manifest itself in the 5D description?
This last question was studied several years ago in a paper by Creminelli et al., based on earlier results by Witten. It turns out that one can write down another solution of the Einstein equations that describes a black hole in the AdS5 space. The black hole solution is a dual description of the high-temperature deconfined phase: the TeV brane (whose presence implies the existence of a mass gap in the low-temperature phase) is hidden behind the black hole horizon.
Which of the two solutions dominates, that is to say, which one gives the dominant contribution to the path integral, depends on the free energy F = E - TS. One can calculate that at zero temperature the RS1 solution has the lower free energy. But the black hole solution has an entropy associated with the black hole horizon, and its free energy ends up being lower at high enough temperature. This black hole solution effectively describes a high-temperature expanding universe filled with a hot gluon plasma. As the temperature goes down to the critical value, the RS1 solution with a TeV brane becomes energetically more favorable and a first order phase transition occurs.
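Very schematically, dropping all order-one numbers, the competition looks like
$F_{BH}(T) \sim -\,(M_5 L)^3\, T^4, \qquad F_{RS} \simeq V_{min}, \qquad T_c \sim \left(\frac{|V_{min}|}{(M_5 L)^3}\right)^{1/4},$
with $(M_5 L)^3$ playing the role of the number of degrees of freedom ($\sim N^2$ in the dual gauge theory) and $V_{min}$ the small vacuum energy of the stabilized RS1 solution; the crossing point $T_c$ comes out around the TeV scale.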
Creminelli et al. computed the critical temperature at which the free energies of the two phases are equal. They also estimated the rate of the phase transition between the black hole and RS1 phases. It turns out that, with the assumption they made about the mechanism stabilizing the fifth dimension, the rate is too low for the phase transition to ever complete. The universe expands too fast and, although bubbles of the RS1 phase do form, they do not collide. One ends up with an empty, ever-inflating universe. From this analysis it seems that, if RS1 is to describe the real world, the temperature of the universe should never exceed the critical one. Although this assumption does not contradict any observations, it makes life more problematic (how to incorporate inflation, baryogenesis...).
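The criterion for the transition to complete is the standard one: the bubble nucleation rate per unit volume must catch up with the expansion, which for a TeV-scale transition translates into an upper bound on the bounce action,
$\Gamma \sim T^4 e^{-S_b} \gtrsim H^4 \quad \Rightarrow \quad S_b \lesssim 4 \ln\frac{T}{H} \sim O(100),$
and with the stabilization mechanism assumed by Creminelli et al. the bounce action comes out too large to satisfy it.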
According to John, the problem with too slow phase transitions is not general but specific to the stabilization mechanism assumed by Creminelli et al. In his recent paper, John studied a modified version of RS1 - a string-inspired set-up called the Klebanov-Tseytlin throat. From the picture it is obvious that the Klebanov-Tseytlin throat is dual to a punctured condom. John found that in this modified set-up the phase transition is fast enough to complete. The key to the success seems to be the fact that the different stabilization mechanism results in a strong breaking of conformal symmetry in the IR.
So much for now; more details in the paper. I think this subject is worth knowing about. It connects various areas of physics and cosmology and does not seem to be fully explored yet. First order phase transitions, like the one in RS1, may also leave observable imprints in the gravitational wave spectrum, as discussed here.
Slides available.
Monday, 3 September 2007
Drowning the Hierarchy Problem
For a change, the third week of the New Physics workshop turned out to be very interesting. In this post I tell you about Gia Dvali and his brand new idea for solving the hierarchy problem. Several other talks last week deserve attention, and I hope to find more time to write about them.
Gia first argued for the following result. Suppose there exist N particle species whose masses are of order M. Further suppose that these species transform under exact gauged discrete symmetries. Then there is a lower bound on the Planck scale:
$M_p > N^{1/2} M$
The proof goes via black holes. As argued in the old paper by Krauss and Wilczek, gauged discrete symmetries should be respected by quantum gravity. Therefore, if we make a black hole out of particles charged under a gauged discrete symmetry, the total charge will be conserved. For example, take a very large number N of particle species, each carrying a separate Z2 charge. Form a black hole using an odd number of particles from each species, so that the black hole carries a Z2^N charge. Then wait and see what happens. According to Hawking, the black hole should evaporate. But it cannot emit the charged particles and reduce its charge before its temperature becomes of order M. The relation between the black hole temperature and mass goes like $T \sim M_p^2/M_{BH}$. Thus, by the time the charge starts to be emitted, the black hole mass has been reduced to $M_{BH} \sim M_p^2/M$. To get rid of all its charge the black hole must emit at least N particles of mass M, so its mass at this point must satisfy $M_{BH} > N M$. From this you easily obtain Gia's bound.

The bound has several interesting consequences. One is that it can be used to drown the hierarchy problem in a multitude of new particles. Just assume there exist something like 10^32 new charged particle species at the TeV scale. If that is the case, the Planck scale cannot help being 16 orders of magnitude higher than the TeV scale. For consistency, gravity must somehow become strongly interacting at the TeV scale, much as in the ADD or RS model, so that the perturbative contributions to the Higgs mass are cut off at the TeV scale. Thus, in Gia's scenario the LHC should also observe the signatures of strongly interacting gravity.
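Written out as a chain of parametric estimates (my own sketch of the argument above, with all numerical factors dropped):
$T_{BH} \sim M_p^2/M_{BH} \sim M \;\Rightarrow\; M_{BH} \sim M_p^2/M$
$M_{BH} \gtrsim N\,M \;\Rightarrow\; M_p^2/M \gtrsim N\,M \;\Rightarrow\; M_p \gtrsim N^{1/2} M$
Plugging in N ~ 10^32 species at M ~ 1 TeV gives $M_p \gtrsim 10^{16}$ TeV $\sim 10^{19}$ GeV, which is indeed where the observed Planck scale sits.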
You might say this sounds crazy... and certainly it does. But, in fact, the idea is no more crazy than the large extra dimensions of the ADD model. The latter is also an example of a many-species solution to the hierarchy problem. In that case there are also 10^32 new degrees of freedom - the Kaluza-Klein modes of the graviton - which make gravity strongly interacting at a TeV. The difference is that most of the new particles are much lighter than a TeV, which creates all sorts of cosmological and astrophysical problems. In the present case these problems can be more readily circumvented.
Transparencies available on the workshop page.
Saturday, 25 August 2007
LHC: The First Year
The New Physics workshop is at its peak. The TH seminar room is cracking at its seams and there are at least two talks every day. Unfortunately, the theory talks last week ranged from not-so-exciting to pathetic. Therefore I clench my teeth and report on a talk by an experimentalist. Fabiola Gianotti was talking about the ATLAS status and plans for physics with first data.
Experimentalists (much like happy families) are all alike. They can never resist showing us a hundred spectacular pictures of their cherished detectors and another hundred showing the tunnel at sunrise and sunset. They feel obliged to inform us how many kilometers of cable were wasted on each particular part of a detector. They stuff each cm^2 of the transparencies with equally indispensable pieces of information. Usually, this leaves little space for interesting physics. This time, however, was slightly different. Having finished with the pictures, Fabiola told us several interesting things about early physics with the ATLAS detector.
ATLAS is already alive, kicking and collecting data from cosmic-ray muons. The LHC will go online in Spring 2008. That is to say, if all goes smoothly, if nothing else explodes and if Holger Nielsen refrains from pulling the cards. In the unlikely case of everything going as planned, the first collisions at 14 TeV will take place in July. The plan for ATLAS is to collect a hundred inverse picobarns of data by the end of the year, and a few inverse femtobarns by the end of 2009.
The main task in 2008 will be to discover the Standard Model. 100/pb translates roughly into 10^6 W bosons and 10^5 Z bosons. The decays of the W and Z have been precisely measured before, so this sample can be used for calibrating the detectors. We will also see some 10^4 top-antitop pairs with fully- or semi-leptonic decays. Thus, the top will be detected on European soil and we will know for sure that the Tevatron didn't make it up. In the first year, however, the top samples will serve no purpose other than calibration. For example, the top mass resolution will be of order 10 GeV (the ultimate precision is 1 GeV, but that is still far in the future), far worse than the current Tevatron precision. Apart from that, the QCD jet background at high pT will be measured, something notoriously difficult to estimate theoretically. In short, the 2008 studies will be boring but necessary to prepare the stage for future discoveries.
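As a back-of-the-envelope check of those numbers, the expected counts are just the integrated luminosity times an effective cross section. The cross sections below are rough values I picked to reproduce the counts quoted in the talk (production times branching times a crude acceptance); they are assumptions for illustration, not official ATLAS numbers:

```python
# Rough event-count arithmetic: N = integrated luminosity x effective cross section.
# The effective cross sections are ballpark assumptions chosen to match the
# counts quoted in the talk, not official numbers.

LUMI_PB = 100.0  # integrated luminosity in inverse picobarns (the 2008 target)

effective_sigma_pb = {
    "W -> l nu (selected)": 1.0e4,   # ~10 nb effective
    "Z -> ll (selected)":   1.0e3,   # ~1 nb effective
    "ttbar (leptonic)":     1.0e2,   # ~100 pb effective
}

for process, sigma_pb in effective_sigma_pb.items():
    print(f"{process:25s}: ~{LUMI_PB * sigma_pb:.0e} events")
# -> roughly 1e6 W's, 1e5 Z's and 1e4 top pairs, as quoted above.
```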
Is there any hope for a discovery in 2008? Fabiola pointed out the case of a 1 TeV vector resonance decaying to an electron pair. This would stand out like a lamppost over the small and well-understood Drell-Yan background and could be discovered with as little as 70/pb of data. The problem is that indirect constraints from LEP and LEP2 require the mass of such a resonance to be larger than a few TeV. So not much hope there.
Another hope for an early discovery is supersymmetry, with its multitude of coloured particles. From the plot we can see that a gluino lighter than 1.5 TeV could be discovered with 100/pb of data. A quick discovery would be important, as it would give the green light to the ILC project. The problem in this case is that a discovery requires a good understanding of the missing energy spectra. Most likely, we will have to wait till 2009.
The higgs boson is a more difficult case. From the plot below you can see that there is no way to see anything at 100/pb. However, with a few inverse femtobarns the whole interesting range of higgs masses will be covered. Thus, according to Fabiola, the higgs puzzle should be resolved by the end of 2009.
The last thing worth mentioning is the ongoing effort to visualize the huge kinetic energy that will be stored in the LHC beam. This time the energy was compared to that of a British aircraft carrier running at 12 knots. The bottom line is the following: if you spot an aircraft carrier on Lac Leman, don't panic, it's just the LHC that lost its beam.
The slides are available via the workshop page here.
Friday, 17 August 2007
Restoration of the Fourth Generation
I'm not as successful as the others in spreading rumours, so I'm sadly returning to writing about things I've seen with my own eyes. Last week there were several talks that would deserve a post. I pick perhaps not the most interesting but the most comprehensible one: Graham Kribs talking about Four Generations and Higgs Physics.
There are three generations of quarks and leptons. Everyone knows that the fourth one does not exist. The Bible says it is excluded at 99.999% confidence level. Graham is spreading the heresy of claiming otherwise. He adds the 4th generation of chiral matter to the Standard Model and reanalyses the constraints on the parameters of such a model.
First of all, there are limits from direct searches. LEP II set a lower limit of roughly 100 GeV on the masses of the 4th-generation charged lepton and neutrino. The bounds on the 4th-generation quarks from the Tevatron are more stringent. CDF excludes a 4th up-type quark lighter than 256 GeV, and Graham argues that a similar bound should hold for the down-type quark.
Indirect limits from flavour physics can be taken care of by assuming that the mixing between the first three generations and the fourth is small enough. More troubling are the constraints from the electroweak precision tests. For example, the new quark doublet contributes to the S parameter:
$\Delta S = \frac{N_c}{6 \pi} \left ( 1 - 2 Y \log \frac{m_u^2}{m_d^2} \right )$
If the 4th-generation quarks are degenerate in mass, the S parameter comes out too large. Splitting the masses could help keep the S parameter under control, though it generates new contributions to the T parameter. Nevertheless, Graham argues that there is a large enough window, with the up-type quark some 50 GeV heavier than the down-type quark, where all the precision constraints are satisfied.
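To see the problem with the degenerate case concretely: for $m_u = m_d$ the logarithm above vanishes, so each degenerate chiral doublet contributes $1/(6\pi)$ per colour degree of freedom. A quick estimate of my own (not a number from the talk) then gives, for the quark plus lepton doublets,
$\Delta S \simeq \frac{N_c}{6\pi} + \frac{1}{6\pi} = \frac{4}{6\pi} \approx 0.2,$
which is indeed in tension with electroweak fits preferring S of order 0.1 or below; hence the need to split the masses.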
The fourth generation may be discovered in direct searches at the Tevatron or at the LHC. But its most dramatic effect would be a profound modification of higgs physics. Within the Standard Model, the higgs is produced at a hadron collider dominantly via gluon fusion:
The particle in the loop is the top quark - the only coloured particle in the Standard Model with a sizable coupling to the higgs boson. With the 4th generation at hand, we have two more quarks strongly coupled to the higgs boson. As a result, the higgs production cross section increases dramatically, roughly by a factor of 9. The first hint of the 4th generation could thus come from the Tevatron, which would see the higgs with an abnormally large cross section. In fact, the Tevatron has already excluded the 4th generation scenario for a range of higgs boson masses (see the latest higgs exclusion plot here).
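The factor of 9 quoted above follows from a simple estimate: in the limit where the quarks in the loop are much heavier than the higgs, each contributes the same asymptotic amplitude, so the amplitude triples and the rate grows by its square,
$\frac{\sigma(gg \to h)}{\sigma(gg \to h)_{SM}} \simeq \left| \frac{A_t + A_{u_4} + A_{d_4}}{A_t} \right|^2 \to \left| \frac{1+1+1}{1} \right|^2 = 9.$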
The slides from this and other talks can be found here. If you are interested in more details, have a look at Graham's recent paper. While reading, try not to forget that the 4th generation does not exist.
Sunday, 12 August 2007
Model Building Month
Today the TH Institute "New Physics and the LHC" kicks off here at CERN. As the organizers put it, it will be mostly targeted at model-builders, with the participation of string theorists and collider phenomenologists. Regardless of that looming participation, the program looks quite promising. After the dog days of summer you can count on more activity on this blog in the coming weeks.
Also today, on the other end of the world, Lepton-Photon'07 began. I wouldn't bother to mention it if not for certain rumours. Rumours of rumours, so to say. I mean the rumours of the alleged non-standard higgs signal at the Tevatron, whispered from blog to blog since May. The new rumour is that the relevant multi-b channel analysis will finally be presented at that conference. In spite of the fact that there are neither photons nor leptons in the signal :-)
Update: well... nothing like that seems to have happened. My deep throat turned out shallow. So we are left with rumours and gossip for some more time...
Update #2: In the end, only CDF presented a new analysis of that channel. Their excess is only 1.5 sigma. Details in this public note, a wrap-up by Tommaso here. Now we're waiting for D0. Come on guys, don't be ashamed...
Saturday, 11 August 2007
Entropic Principle
I'm back. Just in time to report on the curious TH seminar last Wednesday. Jim Cline was talking about The entropic approach to understanding the cosmological constant. I am lucky to live in a special moment of history and to observe fascinating sociological processes going on in particle physics. I'm dying to see where this will lead us...
Everyone knows about the anthropic principle. In short, certain seemingly fundamental parameters are assumed to be environmental variables that take random values in different parts of the universe. We can live only in those corners where the laws of physics support more or less intelligent life. This could explain why the values of certain physical parameters appear fine-tuned.
One undeniable success of this approach is Weinberg's prediction concerning the cosmological constant. Weinberg assumed that intelligent life needs galaxies. Keeping all other cosmological parameters fixed, galaxies would not form if the energy density in the cosmological constant exceeded that in matter today by roughly a factor of 5000. If all values of the cosmological constant are equally probable, we should observe a value close to the upper limit. Weinberg's argument thus points to a value some 1000 times larger than the observed one. Nevertheless, it remains attractive in the absence of any fundamental understanding of the smallness of the cosmological constant.
Weinberg's paper appeared in 1987, well before the first indications for accelerated expansion from supernovae started to show up. That was a prediction in the common-sense use of this word. Since then, the anthropic principle was mostly employed to predict things we already know. The successful applications include the electroweak breaking scale, the dark matter abundance and the CMB temperature. As for the things we don't know, predictions based on the anthropic principle turn out to be less sharp. For example, the supersymmetry breaking scale is predicted by anthropic reasoning to be at a TeV, or at the Planck scale, or somewhere in between.
The entropic principle, proposed earlier this year by Bousso et al, is a new twist in this merry game. The idea is to replace the selection criterion based on the existence of intelligent life with something more objective. Bousso et al argue that the right criterion is maximizing the entropy production in a causal diamond. The argument is that life of any sort cannot exist in thermal equilibrium but requires free energy. They argue that free energy divided by the temperature at which it is radiated (that is, the entropy increase) is a good measure of the complexity that may arise in a given spacetime region. They proceed to calculate the entropy production (dominated by infrared radiation of dust heated by starlight) for various values of the cosmological constant and find that it is peaked close to the observed value. This allows them to conclude that ...this result improves our confidence in the entropic principle and the underlying landscape... More details on this computation can be found in the paper or in this post on Reference Frame.
Jim in his talk reviewed all that, minus the sneer. He also played his own part in the game. He predicted the primordial density contrast. His conclusion is that the observed value 10^(-5) is not unlikely, which further improves our confidence in the entropic principle and the underlying landscape. Pushing the idea to the edge of absurdity, he also made an attempt to predict the dark matter abundance. He took an obscure model of gravitino dark matter with non-conserved R-parity. In this model, the dark matter particles decay, thus producing entropy. He argued that the entropy production is maximized close to the observed value of the dark matter abundance, which further improves our confidence in the entropic principle and the underlying landscape. I guess I missed something here. The observed universe is as we see it because the entropy production from the synchrotron radiation of the dark matter decay products may support some odd forms of life? At this point, pulling cards does not seem such a weird idea anymore...
Thursday, 2 August 2007
Closed for Summer
Yeah... it's been more than two weeks since my last post. I'm afraid you need even more patience. Summer is conference time and holiday time, with a hell of a lot of work to do in between. Clearly, I'm not one of them Bloggin' Titans like him or him or her who would keep blogging from a sinking ship if they had such an opportunity.
My schedule is hopelessly tight until mid-August. After that I should resume my average of 1.5 posts per week. Or maybe more, because much will be going on at CERN TH. I'll be back.
Thursday, 12 July 2007
Minimalistic Dark Matter
These days, cosmologists, astrophysicists and all that lot fill every nook and cranny of CERN TH. They also fill the seminar schedule with their deep dark matter talks. I have no choice but to make another dark entry in this blog. Out of the 10^6 seminars I've heard this week I pick the one by Marco Cirelli about Minimal Dark Matter.
The common approach to dark matter is to obtain a candidate particle in a framework designed to solve some other problem of the standard model. The most studied example is the lightest neutralino in the MSSM. In this case, the dark matter particle is a by-product of a theory whose main motivation is to solve the hierarchy problem. This kind of attitude is perfectly understandable from the psychological point of view. By the same mechanism, a mobile phone sells better if it also plays mp3s, takes photographs and sings lullabies.
But after all, the only solid evidence for the existence of physics beyond the standard model is the observation of dark matter itself. Therefore it seems perfectly justified to construct extensions of the standard model with the sole objective of accommodating dark matter. Such an extension explains all current observations while avoiding the excess baggage of full-fledged theoretical frameworks like supersymmetry. This is the logic behind the model presented by Marco.
The model is not really minimal (adding just a scalar singlet would be more minimal), but it is simple enough and cute. Marco adds one scalar or one Dirac fermion to the standard model, and assigns it a charge under SU(2)_L x U(1)_Y. The only new continuous parameter is the mass M of the new particle. In addition, there is a discrete set of choices of the representation. The obvious requirement is that the representation should contain an electrically neutral particle, which could play the role of the dark matter particle. According to the formula Q = T3 + Y, we can have an SU(2) doublet with the hypercharge Y= 1/2, or a triplet with Y = 0 or Y = 1, or larger multiplets.
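The representation-hunting step is simple enough to automate. Here is a small illustrative script (my own sketch, not anything from the paper) that lists the low-dimensional SU(2) multiplets containing an electrically neutral component, using Q = T3 + Y:

```python
from fractions import Fraction

# List SU(2)_L multiplets (dimension n = 2j+1) and hypercharges Y for which
# Q = T3 + Y vanishes for some component, i.e. the multiplet contains a
# neutral state that could play the role of dark matter.

def t3_values(n):
    """T3 eigenvalues of an n-dimensional SU(2) multiplet: -j, -j+1, ..., +j."""
    j = Fraction(n - 1, 2)
    return [-j + k for k in range(n)]

for n in range(2, 6):                          # doublet up to quintuplet
    for twice_y in range(-2 * n, 2 * n + 1):   # scan Y in steps of 1/2
        y = Fraction(twice_y, 2)
        if any(t3 + y == 0 for t3 in t3_values(n)):
            print(f"n = {n}, Y = {y}: contains a neutral component")
# e.g. the doublet with Y = 1/2 and the triplets with Y = 0 or 1 show up,
# as quoted in the text (together with their Y -> -Y mirrors).
```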
Having chosen the representation, one can proceed to calculating the dark matter abundance. In the early universe, the dark matter particles thermalize due to their gauge interactions with W and Z gauge bosons. The final abundance depends on the annihilation cross section, which in turn depends on the unknown mass M and the well known standard model gauge couplings. Thus, by comparing the calculated abundance with the observed one, we can fix the mass of the dark matter particle. Each representation requires a different mass to match the observations. For example, a fermion doublet requires M = 1 TeV, while for a fermion quintuplet with Y = 0 we need M = 10 TeV.
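For orientation, one can see where the TeV scale comes from with a crude estimate of my own (not the careful computation in the paper, which includes co-annihilations, group-theory factors and Sommerfeld corrections): take an annihilation cross section of order g^4/(16 pi M^2) and require it to match the canonical thermal value of about 3x10^-26 cm^3/s.

```python
import math

# Crude thermal-relic estimate: <sigma v> ~ g^4 / (16 pi M^2), matched to the
# canonical cross section that yields the observed dark matter abundance.
# The 1/(16 pi) and the neglect of multiplet-dependent factors are rough
# assumptions, good only for an order-of-magnitude estimate.

GEV_MINUS2_TO_CM3_PER_S = 3.894e-28 * 3.0e10   # 1 GeV^-2 in cm^2, times c in cm/s
SIGMAV_THERMAL = 3.0e-26                       # cm^3/s

def naive_dm_mass_gev(g=0.65):
    """Mass M (in GeV) for which g^4/(16 pi M^2) equals the thermal cross section."""
    m_squared = g**4 / (16.0 * math.pi) * GEV_MINUS2_TO_CM3_PER_S / SIGMAV_THERMAL
    return math.sqrt(m_squared)

print(f"M ~ {naive_dm_mass_gev() / 1e3:.1f} TeV")   # ~1 TeV for doublet-like couplings
```

Larger multiplets annihilate more efficiently (more charged partners and larger group-theory factors), so they need a heavier M to leave the right abundance, which is roughly why the quintuplet ends up around 10 TeV.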
After matching to observations, the model has no free parameters and yields quite definite predictions. For example, here is the prediction for the direct detection cross section:

We can see that the cross sections are within reach of future experiments. The dark matter particle, together with its charged partners in the SU(2) multiplet, could also be discovered at colliders (if M is not heavier than a few TeV) or in cosmic rays. There are the usual indirect detection signals as well.
The model was originally introduced in a 2005 paper. The recent paper corrects the previous computation of dark matter abundance by including the Sommerfeld corrections.
Thursday, 5 July 2007
Countdown to Planck
These days the CERN Theory Institute program is focused on the interplay between cosmology and LHC phenomenology. Throughout July you should expect overrepresentation of cosmology in this blog. Last Wednesday, Julien Lesgourgues talked about the Planck satellite. Julien is worth listening to. First of all, because of his cute French accent. Also, because his talks are always clear and often damn interesting. Here is what he said this time.
Planck is a satellite experiment to measure the Cosmic Microwave Background. It is the next step after the successful COBE and WMAP missions. Although it looks like any modern vacuum cleaner, its instruments offer a sensitivity to temperature fluctuations of 2*10^(-6) (a factor of 10 better than WMAP) and a 5' angular resolution (a factor of 3 better than WMAP). Thanks to that, Planck will be able to measure more precisely the angular correlations of the CMB temperature fluctuations, especially at higher multipoles (smaller angular scales). This is illustrated in this propaganda picture:
Even more dramatic is the improvement in measuring the CMB polarization. In this context, one splits the polarization into the E-mode and the B-mode (the divergence and the curl). The E-mode can be seeded by scalar gravitational density perturbations, which are responsible for at least half of the already observed amplitude of temperature fluctuations. At large angular scales, the E-mode has already been observed by WMAP. The B-mode, on the other hand, must originate from tensor perturbations, that is from gravity waves in the early universe. These gravity waves can be produced by inflation. Planck will measure the E-mode very precisely, while the B-mode is a challenge. Observing the latter requires quite some luck, since many models of inflation predict the B-mode well below the Planck sensitivity.
Planck is often described as the ultimate CMB temperature measurement. That is because its angular resolution corresponds to the minimal one at which temperature fluctuations of cosmological origin may exist at all. At scales smaller than 5' the cosmological imprint in the CMB is suppressed by the so-called Silk damping: 5' corresponds roughly to the photon mean free path in the early universe, so fluctuations at smaller scales get washed out. However, there is still room for future missions to improve the polarization measurements.
All these precision measurements will serve the noble cause of precision cosmology, that is, a precise determination of the cosmological parameters. Currently, the CMB and other data are well described by the Lambda-CDM model, which has become the standard model of cosmology. Lambda-CDM has 6 adjustable parameters. One is the Hubble constant. Two more are the cold (non-relativistic) dark matter and baryonic matter densities. In this model matter is supplemented by the cosmological constant, so as to end up with a spatially flat universe. Another two parameters describe the spectrum of primordial perturbations (the scalar amplitude and the spectral index). The last one is the optical depth to reionization. Currently, we know these parameters with a remarkable 10% accuracy. Planck will further improve the accuracy by a factor of 2-3 in most cases.
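For concreteness, here is the six-parameter set with rough 2007-era (WMAP3-like) central values. The numbers are approximate and from memory, not from the talk; they are only meant to make the parameter count tangible:

```python
# The six parameters of the vanilla Lambda-CDM fit, with rough 2007-era values.
# Values are approximate placeholders for illustration only.
lambda_cdm = {
    "H0":         73,      # Hubble constant [km/s/Mpc]
    "omega_b_h2": 0.022,   # baryon density
    "omega_c_h2": 0.105,   # cold dark matter density
    "A_s":        2e-9,    # amplitude of primordial scalar perturbations
    "n_s":        0.95,    # scalar spectral index
    "tau":        0.09,    # optical depth to reionization
}
# The cosmological constant is not an independent parameter here: it is fixed
# by the assumption of spatial flatness.
```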
Of course, Planck may find some deviations from the Lambda-CDM model. There exist, in fact, many reasonable extensions that do not require any exotic physics. For example, there may be the already mentioned tensor perturbations, non-gaussianities or a running of the spectral index, which are predictions of certain models of inflation. Planck could find a trace of a hot (relativistic) component of the dark matter. Such a contribution might come from neutrinos, if the sum of their masses is at least 0.2 eV. Furthermore, Planck will accurately test the spatial flatness assumption. The most exciting discovery would be to see that the equation of state of dark energy differs from w=-1 (the cosmological constant). This would point to some dynamical field as the agent responsible for the vacuum energy.
Finally, Planck will test models of inflation. Although it is unlikely that the measurement will favour one particular model, it may exclude large classes of models. Two parameters appear most interesting in this context. One is the spectral index nS. Inflation predicts small departures from the scale-invariant Harrison-Zeldovich spectrum corresponding to nS=1. It would be nice to see this departure beyond all doubt, as it would further strengthen the inflation paradigm. The currently favoured value is nS = 0.95, three sigma away from 1. The other interesting parameter is the ratio r of tensor to scalar perturbations. The current limit is r < 0.5, while Planck is sensitive down to r = 0.1. If inflation takes place at energies close to the GUT scale, tensor perturbations might be produced at an observable rate. If nothing is observed, large-field inflation models will be disfavoured.
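The link between r and the GUT scale comes from the standard slow-roll relation between the tensor amplitude and the energy density during inflation. This is a textbook estimate rather than a number from the talk: with the measured scalar amplitude $A_s \approx 2 \times 10^{-9}$ one has, roughly,
$V^{1/4} \simeq \left( \tfrac{3}{2}\pi^2 A_s\, r \right)^{1/4} M_{Pl} \approx 10^{16}\,\mathrm{GeV} \times \left( \frac{r}{0.01} \right)^{1/4},$
where $M_{Pl}$ is the reduced Planck mass, so the r = 0.1 reach quoted above corresponds to an inflation scale of about $2 \times 10^{16}$ GeV.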
Planck is going to launch in July 2008. This coincides with the first scheduled collisions at the LHC. Let's hope at least one of us will see something beyond the standard model.
No slides, as usual.
Sunday, 1 July 2007
Nima's Marmoset
Here is one more splinter of Nima Arkani-Hamed's CERN visit. Apart from a disappointing seminar for theorists, Nima gave another talk advertising his MARMOSET to a mostly experimental audience. OK, I know it was more than two weeks ago, but firstly it's summertime, and secondly, I'm still doing better with the schedule than the LHC.
MARMOSET is a new tool for reconstructing the fundamental theory from the LHC data. When you ask phenomenologists their opinion about MARMOSET, officially they just burst out laughing. Off the record, you could hear something like "...little smartass trying to teach us how to analyze data..." often followed by *!%&?#/ ^@+`@¦$. I cannot judge to what extent this kind of attitude is justified. I guess, it is partly a reaction to overselling the product. To my hopelessly theoretical mind, the talk and the whole idea appeared quite interesting.
In the standard approach, the starting point for interpreting the data is a lagrangian describing the particles and interactions. From the lagrangian, all the necessary parton-level amplitudes can be calculated. The result is fed into Monte Carlo simulations that convolve the amplitudes with the parton distribution functions, calculate the phase space distributions and so on. At the end of this chain you get the signal plus the SM background, which you can compare with the observations.
Nima pointed out several drawbacks of such an approach. The connection between the lagrangian and the predicted signal is very obscure. The lagrangians typically have a large number of free parameters, of which only a few combinations affect the physical observables. Typically, the signal, e.g. a pT distribution, depends only weakly on the precise form of the amplitude. Moreover, at the dawn of the LHC era we have little idea which underlying theory and which lagrangian will turn out to be relevant. This is in strong contrast with the situation that has reigned for the last 30 years, when the discovered particles (the W and Z bosons, the top quark) were expected and the underlying lagrangian was known. Nima says that this new situation requires new strategies.
Motivated by that, Nima&co came up earlier this year with a paper proposing an intermediate step between the lagrangian and the data. The new framework is called an On-Shell Effective Theory (OSET). The idea is to study physical processes using only kinematic properties of the particles involved. Instead of the lagrangian, one specifies the masses, production cross sections and decay modes of the new particles. The amplitudes are parameterized by one or two shape variables. This simple parameterization is claimed to reproduce the essential phenomenology that could equally well be obtained from more complicated and more time-consuming simulations in the standard approach.
MARMOSET is a package allowing OSET-based Monte Carlo simulations of physical processes. As input it requires just the new particles plus their production and decay modes. Based on this, it generates all possible event topologies and scans the OSET parameters, like production and decay rates, in order to fit the data. Failure to fit implies the need to add new particles or new decay channels. In this recursive fashion one can extract the essential features of the underlying fundamental theory.
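To make the idea concrete, an OSET specification boils down to a handful of masses, production modes and decay tables. The snippet below is purely illustrative pseudo-input written as a Python data structure; it is not MARMOSET's actual input syntax, and the particle content is a made-up example:

```python
# Schematic of the information an OSET needs: particles with masses, pair
# production with an unknown rate to be fitted, and decay tables.  This is an
# illustrative data structure only, not the real MARMOSET input format.
oset_example = {
    "particles": {
        "Q": {"mass_gev": 600},   # a hypothetical coloured parent particle
        "X": {"mass_gev": 150},   # a hypothetical invisible particle (missing energy)
    },
    "production": [
        {"process": "p p -> Q Q~", "sigma_pb": "free parameter"},
    ],
    "decays": {
        "Q": [
            {"mode": ["jet", "X"],        "branching": "free parameter"},
            {"mode": ["jet", "jet", "X"], "branching": "free parameter"},
        ],
    },
}
```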
This sounds very simple. So far, the method has been applied under greenhouse conditions to analyze the "black boxes" prepared for the LHC olympics. Can it be useful when it comes to real data? Professionals say that MARMOSET does not offer anything they could not, if necessary, implement within half an hour. On the other hand, it looks like a useful tool for laymen. If a clear signal is discovered at the LHC, the package can provide a quick check of whether your favourite theory is able to reproduce the broad features of the signal. Convince me if I'm wrong... Anyway, we'll see in two years.
The video recording is available here.
Sunday, 17 June 2007
Xenon10 taking the lead
This is a bit belated, but I still find it worth pointing out. Two weeks ago the XENON collaboration posted a paper with new limits on dark matter from direct detection. The paper follows earlier announcements and a note in Nature.
There must be a flux of dark matter passing constantly through the Earth. If the dark matter particle belongs to the WIMP category, it is supposed to have weak-strength interactions with Standard Model matter. In that case, once in a while it should collide with an atom and we may observe the recoil energy transferred to the nucleus. The XENON10 experiment uses xenon as a target. They apply some clever techniques to reduce the background from other particles (e.g. neutrons or photons) penetrating the tank. Interested engineers and hobbyists may find the details in the paper. For everyone else, the important stuff is summarized in this plot:
It shows the limit on the spin-independent scattering cross section of dark matter on nucleons as a function of the dark matter particle mass. The new limits are better by a factor of six than the previous ones from the CDMS-II experiment. I guess that at this point the controversial DAMA detection signal can be forever buried in oblivion. For reference, the plot shows expected cross sections in the constrained MSSM scenario.
Of course, the importance of the XENON10 results goes far beyond constraining the parameter space of some obsolete models. The exciting point is that we are really probing the cross sections expected for WIMP particles. In the coming years we will either confirm the WIMP hypothesis or make it implausible. Needless to say, an eventual positive signal would have a huge impact on the LHC program and on science fiction literature.
Thursday, 14 June 2007
Nima's Horizons
Nima Arkani-Hamed is yet another soul that has fallen under the spell of the landscape. The landscape is a perfect framework for predicting things we already know. Nima is a bit more clever than that. He is concerned with a bigger picture, I mean with the conceptual questions associated with the existence of multiple vacua in a theory of gravity.
Nima shared some remarks on the subject yesterday in a theory seminar. He drew an analogy between the present situation in particle physics and the early days of quantum mechanics. In the latter case, quantum effects turned out to imply a loss of predictivity concerning the results of individual measurements. Now, he believes, we should again accept a certain loss of predictivity due to the landscape and anthropic selection.
That was an introduction. The bulk of the talk was about the Standard Model landscape. It turns out that the landscape pops up in the minimal Standard Model coupled to Einstein gravity with a small cosmological constant. Such a system has, of course, a unique 4D de Sitter vacuum, but there exist many more vacua with compactified spatial dimensions. One example is the class of vacua with the AdS3xS1 geometry. The radius of the circle is stabilized by the interplay of the small cosmological constant and the Casimir energy induced by the photon, the graviton and the light neutrinos. The funny accident is that these vacua would not exist if the cosmological constant were a factor of 10 larger or if the solar neutrino mass squared difference were a factor of 10 smaller.
Nima went on discussing some more technical details of this setup:
- A near-moduli space of the photon Wilson line wrapping the circle.
- Black string solutions interpolating between 4D and 3D vacua.
- The two-dimensional CFT dual to the Standard Model.
Up to this point, I've been almost fair. Now it's time for a few snide remarks. Listening to this talk was like visiting a flea market: it was colourful and entertaining, but most of the things on display seemed utterly useless. Nima forgot to say what the Standard Model landscape can teach us about the big questions he had addressed at the beginning. Certainly, the landscape may be present in far simpler setups than string theory. But in this very example it seems totally irrelevant, both from the theoretical and the experimental points of view. I had this guilty feeling that I wouldn't even bother to listen if the name of the speaker were different. Judging from the looks on the others' faces, I wasn't all alone...
Transparencies not available, as usual, though this time it isn't so much of a waste.
Thursday, 31 May 2007
Wilczek's Phantoms
Last Thursday we had a colloquium by Frank Wilczek here at CERN. Frank has made some impressive contributions to areas as different as astrophysics, particle physics and condensed matter physics. He has also provided a lot of beautiful insight into quantum field theory (asymptotic freedom, fractional statistics, color superconductivity). This was a good sign. On the other hand, Frank is also a Nobel-prize winner. This was a bad sign. Nobel-prize winners tend to fill their talks with banal statements written in large font to make a more profound impression. In the end, we observed a fight between good and bad. The latter being the winning side, I'm afraid.
Snide remarks aside, the colloquium had two separate parts. In the first one, Frank was advertising the possibility of phantoms appearing at the LHC. Phantoms refer to light scalar fields that are singlets under the Standard Model gauge group. It is impossible to write renormalizable interactions of such singlets with the Standard Model fermions (except for the right-handed neutrino), which might be a good reason why we haven't observed such things so far. We can write, however, renormalizable interactions with the Higgs. Therefore the phantom sector could show up once we gain access to the Higgs sector.
Various better or worse motivated theories predict the existence of phantoms. Probably the best motivated phantom is the one especially dear to the speaker: the axion. This was the bridge to the second part of the talk, based on his paper from 2005, where Frank discussed the connection between axions, cosmology and ...the anthropic principle. Yes, Frank is another stray soul that has fallen under the spell of the anthropic principle.
Axions have been proposed to solve the theta-problem in QCD. As a bonus, they proved to be a perfect dark matter candidate. Their present abundance depends on two parameters: the axion scale f where the Peccei-Quinn symmetry is broken and the initial value of the axion field theta_0. The latter is usually expected to be randomly distributed, because in the early hot universe no particular value is energetically favoured. With theta_0 taking random values within the observable universe, avoiding overproduction of dark matter implies the upper bound f < 10^12 GeV. The bound can be evaded if inflation occurs at a scale below f, so that theta_0 takes a single (possibly small) value throughout the observable universe.
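For reference, the standard misalignment estimate behind that bound reads, up to an O(1) coefficient that depends on QCD details (this is the textbook formula, not a number from the talk),
$\Omega_a h^2 \sim 0.1 \left( \frac{f}{10^{12}\,\mathrm{GeV}} \right)^{7/6} \theta_0^2,$
so theta_0 of order one forces f below roughly 10^12 GeV, while a small theta_0 allows much larger f. Note also that $\Omega_a \propto \theta_0^2$ together with a flat prior on theta_0 is what gives the square-root probability distribution quoted in the next paragraph.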
The scenario with low-scale inflation was the one discussed. In that case theta_0 is a parameter randomly chosen by some cosmic accident. One can argue that the resulting probability distribution of the dark matter abundance (per log interval) is proportional to the square root of this abundance, favouring large values. Enter the anthropic principle. The observation is that too much dark matter could be dangerous for life. Frank made more precise points about halo formation, black holes, too-close stellar encounters, matter cooling and so on. In short, using the anthropic principle one can cut off the large-abundance tail of the probability distribution. One ends up with this plot:
The dotted line is the observed dark matter abundance. The claim is that axions combined with anthropic reasoning perfectly explain dark matter in the universe.
My opinion is that postdictions based on the anthropic principle aren't worth a penny. This kind of result relies mostly on our prejudices concerning the conditions necessary for life to develop. If they prove anything, it is rather the limits of human imagination (by the way, I once read an SF story about intelligent life formed by fluctuations on a black hole horizon :-). Only impressive, striking and unexpected predictions count. That's what Weinberg did. That's why some exclaimed "Oh shit, Weinberg got it right". Nobody would ever use a swearword in reaction to the plot above...
For more details, consult the paper. If you are more tolerant to anthropic reasoning, here you can find the video recording.