A week has passed since the LHC jamboree, but the excitement about the 750 GeV diphoton excess has not abated. So far, the scenario from 2011 repeats itself. A significant but not definitive signal is spotted in the early data set by the ATLAS and CMS experiments. The announcement is wrapped in multiple layers of caution and skepticism by experimentalists, but is universally embraced by theorists. What is unprecedented is the scale of the theorists' response, which took the form of a hep-ph tsunami. I still need time to digest this feast and pick out the interesting bits from the general citation fishing. So today I won't write about the specific models in which the 750 GeV particle could fit: I promise a post on that after the New Year (anyway, the short story is that, oh my god, it could be just about anybody). Instead, I want to write about one point elucidated by the early papers, namely that the diphoton resonance signal is unlikely to be on its own, and that there should be accompanying signals in other channels. In the best-case scenario, confirmation of the diphoton signal may come from analyzing the existing data in other channels, collected this year or in run-1.
First of all, there should be a dijet signal. Since the new particle is almost certainly produced via gluon collisions, it must be able to decay to gluons as well, by time-reversing the production process. This would show up at the LHC as a pair of energetic jets with an invariant mass of 750 GeV. Moreover, in the simplest models the 750 GeV particle decays to gluons most of the time. The precise dijet rate is very model-dependent, and in some models it is too small to ever be observed, but typical scenarios predict dijet cross sections of order 1-10 picobarns. This means that thousands of such events have already been produced at the LHC, in run-1 and this year in run-2. The plot on the right shows one example of a parameter space (green) overlaid with contours of the dijet cross section (red lines) and the limits from dijet resonance searches in run-1 with 8 TeV proton collisions (red area). Dijet resonance searches are routine at the LHC, but experimenters usually focus on the high-energy end of the spectrum, far above 1 TeV invariant mass. In fact, the 750 GeV region is not covered at all by the recent LHC searches at 13 TeV proton collision energy.
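To get a sense of the numbers quoted above, here is a back-of-the-envelope count of produced dijet events. The 1-10 pb cross-section range is the one mentioned in the text; the factor ~5 cross-section reduction at 8 TeV and the exact luminosities are illustrative assumptions.

```python
# Back-of-the-envelope count of 750 GeV -> dijet events produced at the LHC.
# Cross sections and the 8->13 TeV gain factor are illustrative assumptions.
PB_TO_FB = 1000.0

lumi_run1 = 20.0   # fb^-1 collected at 8 TeV
lumi_2015 = 3.2    # fb^-1 collected at 13 TeV (ATLAS)
xsec_gain = 5.0    # assumed gluon-gluon cross-section gain from 8 to 13 TeV

for xsec_13_pb in (1.0, 10.0):
    n_run2 = xsec_13_pb * PB_TO_FB * lumi_2015
    n_run1 = xsec_13_pb / xsec_gain * PB_TO_FB * lumi_run1
    print(f"sigma(13 TeV) = {xsec_13_pb:4.1f} pb: "
          f"~{n_run1:.0f} dijet events in run-1, ~{n_run2:.0f} in 2015")
```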
The next important conclusion is that there should be matching signals in other diboson channels at 750 GeV invariant mass. For the 125 GeV Higgs boson, the signal was originally discovered in both the γγ and the ZZ final states, and by now the WW signal is similarly strong. If the 750 GeV particle were anything like the Higgs, the resonance should actually first show up in the ZZ and WW final states (due to the large coupling to the longitudinal polarizations of vector bosons, which is a characteristic feature of Higgs-like particles). From the non-observation of anything interesting in run-1 one can conclude that there must be little Higgsiness in the 750 GeV particle, less than 10%. Nevertheless, even if the particle has nothing to do with the Higgs (for example, if it's a pseudo-scalar), it should still decay to diboson final states once in a while. This is because a neutral scalar cannot couple directly to photons: the coupling has to arise at the quantum level through loops of some other new electrically charged particles, see the diagram above. The latter couple not only to photons but also to Z bosons, and sometimes to W bosons too. While the details of the branching fractions are highly model-dependent, diboson signals with rates comparable to the diphoton one are generically predicted. In this respect, the decays of the 750 GeV particle to one photon and one Z boson emerge as a new interesting battleground. For the 125 GeV Higgs boson, decays to Zγ have not been observed yet, but in the heavier mass range the sensitivity is apparently better. ATLAS made a search for high-mass Zγ resonances in the run-1 data, and their limits already put non-trivial constraints on some models explaining the 750 GeV excess. Amusingly, the ATLAS Zγ search has a 1 sigma excess at 730 GeV... CMS has no search in this mass range at all, and both experiments are yet to analyze the run-2 data in this channel. So, in principle, it is quite possible that we learn something interesting even before the new round of collisions starts at the LHC.
Another generic prediction is that there should be vector-like quarks or other new colored particles just around the corner. As mentioned above, such particles are necessary to generate the effective couplings of the 750 GeV particle to photons and gluons. In order for those couplings to be large enough to explain the observed signal, at least one of the new states should have a mass below ~1.5 TeV. Limits on vector-like quarks depend on what they decay to, but the typical run-1 sensitivity is around 800 GeV. In run-2, CMS already presented a search for a charge-5/3 quark decaying to a top quark and a W boson, and they were able to improve the run-1 limit on the new quark's mass from 800 GeV up to 950 GeV. Limits on other types of new quarks should follow shortly.
On a more speculative note, ATLAS claims that the best fit to the data is obtained if the 750 GeV resonance is wider than the experimental resolution. While the statistical significance of this statement is not very high, it would have profound consequences if confirmed. A large width is possible only if the 750 GeV particle decays to final states other than photons and gluons. An exciting possibility is that the large width is due to decays into a hidden sector with new light particles very weakly or not at all coupled to the Standard Model. If these particles leave no trace in the detector, the signal is the same monojet signature as that of dark matter: a single energetic jet from initial-state radiation, with no matching activity on the other side of the detector. In fact, dark matter searches in run-1 practically exclude the possibility that the large width is accounted for by invisible decays alone (see comments #2 and #13 below). However, if the new particles in the hidden sector couple weakly to the known particles, they can decay back to our sector, possibly after some delay, leading to complicated exotic signals in the detector. This is the so-called hidden valley scenario that my fellow blogger has been promoting for some time. If the 750 GeV particle is confirmed to have a large width, the motivation for this kind of new physics will become very strong. Many of the possible signals that one can imagine in this context are yet to be searched for.
Dijets, dibosons, monojets, vector-like quarks, hidden valley... experimentalists will have their hands full this winter. A negative result in any of these searches would not strongly disfavor the diphoton signal, but it would provide important clues for model building. A positive signal would let all hell break loose, assuming it hasn't already. So, we are waiting eagerly for further results from the LHC, which should show up around the time of the Moriond conference in March. Watch out for rumors on blogs and Twitter ;)
Thursday, 24 December 2015
Tuesday, 15 December 2015
A new boson at 750 GeV?
ATLAS and CMS presented today a summary of the first LHC results obtained from proton collisions with 13 TeV center-of-mass energy. The most exciting news was of course the 3.6 sigma bump at 750 GeV in the ATLAS diphoton spectrum, roughly coinciding with a 2.6 sigma excess in CMS. When there's an experimental hint of a new physics signal, there is always this set of questions we must ask:
0. WTF ?
0. Do we understand the background?
1. What is the statistical significance of the signal?
2. Is the signal consistent with other data sets?
3. Is there a theoretical framework to describe it?
4. Does it fit in a bigger scheme of new physics?
Let us go through these questions one by one.
The background. There are several boring ways to make photon pairs at the LHC, but they are all expected to produce a spectrum smoothly decreasing with the invariant mass of the pair. This expectation was borne out in run-1, where the 125 GeV Higgs resonance could be clearly seen on top of a nicely smooth background, with no breaks or big wiggles. So it is unlikely that some Standard Model process (rather than a statistical fluctuation) produces a bump such as the one seen by ATLAS.
The stats. The local significance is 3.6 sigma in ATLAS and 2.6 sigma in CMS. Naively combining the two, we get a more than 4 sigma excess. It is a very large effect, but we have already seen large fluctuations at the LHC that vanished into thin air (remember the 145 GeV Higgs?). Next year's LHC data will be crucial to confirm or exclude the signal. In the meantime, we have a perfect right to be excited.
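The naive combination here is just the two independent local significances added in quadrature (ignoring correlations and the look-elsewhere effect), something like:

```python
# Naive combination of independent local significances: add in quadrature.
# This ignores correlations and the look-elsewhere effect.
from math import sqrt

z_atlas, z_cms = 3.6, 2.6
print(f"combined: {sqrt(z_atlas**2 + z_cms**2):.1f} sigma")  # ~4.4 sigma
```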
The consistency. For this discussion, the most important piece of information is the diphoton data collected in run-1 at 8 TeV center-of-mass energy. Both ATLAS and CMS have a small 1 sigma excess around 750 GeV in the run-1 data, but there is no clear bump there. If a new 750 GeV particle is produced in gluon-gluon collisions, then the gain in the signal cross section at 13 TeV compared to 8 TeV is roughly a factor of 5. On the other hand, ATLAS collected about 6 times more data at 8 TeV (3.2 fb-1 at 13 TeV vs 20 fb-1 at 8 TeV). This means that the number of signal events produced in ATLAS at 13 TeV should be only about 75% of that at 8 TeV, and the ratio is even worse for CMS (who used only 2.6 fb-1). However, the background may grow less fast than the signal, so the statistical power of the 13 TeV and 8 TeV data sets is comparable. All in all, there is some tension between the run-1 and run-2 data sets, but a mild downward fluctuation of the signal at 8 TeV and/or a mild upward fluctuation at 13 TeV is enough to explain it. One can also explain the lack of a signal in run-1 by assuming that the 750 GeV particle is a decay product of a heavier resonance (in which case the cross-section gain can be much larger). More careful studies with next year's data will be needed to test this possibility.
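The arithmetic behind that statement, as a sketch (the cross-section gain of ~4.7 is an assumed, approximate gluon-gluon gain at 750 GeV, roughly the "factor of 5" quoted above):

```python
# Expected ratio of 750 GeV signal events in the 13 TeV vs the 8 TeV data,
# assuming gluon-gluon production. The gain factor is an approximate number.
xsec_gain = 4.7                      # assumed gg cross-section gain, 8 -> 13 TeV
lumi_run1 = 20.0                     # fb^-1 collected at 8 TeV

for experiment, lumi_run2 in (("ATLAS", 3.2), ("CMS", 2.6)):
    ratio = xsec_gain * lumi_run2 / lumi_run1
    print(f"{experiment}: N(13 TeV)/N(8 TeV) ~ {ratio:.2f}")
# ATLAS ~0.75, CMS ~0.6: run-1 should contain more signal events than 2015,
# hence the mild tension discussed above.
```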
The model. This is the easiest part :) A resonance produced in gluon-gluon collisions and decaying to 2 photons? We've seen that already... that's how the Higgs boson was first spotted. So all we need to do is borrow from the Standard Model. The simplest toy model for the resonance is a new singlet scalar with a mass of 750 GeV coupled to new heavy vector-like quarks that carry color and electric charges. Quantum effects then produce, in analogy to what happens for the Higgs boson, effective couplings of the new scalar to gluons and photons.
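Schematically (exact normalizations depend on conventions, so take this as a sketch), these loop-induced interactions take the form

L ⊃ (c_gg/Λ) S G_μν G^μν + (c_γγ/Λ) S F_μν F^μν,

where S denotes the 750 GeV scalar, G and F are the gluon and photon field strengths, and the coefficients c_gg and c_γγ are generated by loops of the vector-like quarks.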
By a judicious choice of the effective couplings (which depend on masses, charges, and couplings of the vector-like quarks) one can easily fit the diphoton excess observed by ATLAS and CMS. This is shown as the green region in the plot.
If the vector-like quark is a T', that is to say, if it has the same color and electric charge as the Standard Model top quark, then the effective couplings must lie along the blue line. The exclusion limits from the run-1 data (mesh) cut through the best-fit region, but do not disfavor the model completely. Variations of this minimal toy model will appear in a hundred papers this week.
The big picture. Here the sky is the limit. The situation is completely different from 3 years ago, when there was one strongly preferred (and ultimately true) interpretation of the 125 GeV diphoton and 4-lepton signals as the Higgs boson of the Standard Model. In contrast, scalars coupled to new quarks appear in countless models of new physics. We may be seeing the radial Higgs partner predicted by little Higgs or twin Higgs models, or the dilaton arising from spontaneous conformal symmetry breaking, or a composite state bound by new strong interactions. It could be part of an extended Higgs sector in many different contexts, e.g. the heavy scalar or pseudo-scalar in two-Higgs-doublet models. For more spaced-out possibilities, it could be the KK graviton of the Randall-Sundrum model, or it could fit some popular supersymmetric models such as the NMSSM. All these scenarios face some challenges. One is to explain why the branching ratio into two photons is large enough to be observed, and why the 750 GeV scalar is not seen in other decay channels, e.g. in decays to W boson pairs, which should be the dominant mode for a Higgs-like scalar. However, these challenges are nothing that an average theorist could not resolve by tomorrow morning. Most likely, this particle would be just a small part of a larger structure, possibly having something to do with electroweak symmetry breaking and the hierarchy problem of the Standard Model. If the signal is a real thing, then it may be the beginning of a new golden era in particle physics....
Thursday, 19 November 2015
Leptoquarks strike back
Leptoquarks are hypothetical scalar particles that carry both color and electroweak charges. Nothing like that exists in the Standard Model, where the only scalar is the Higgs, which is a color singlet. In the particle community, leptoquarks enjoy a similar status to Nickelback in music: everybody's heard of them, but no one likes them. It is not completely clear why... maybe they are confused with leprechauns, maybe because they sometimes lead to proton decay, or maybe because they rarely arise in cherished models of new physics. Recently, however, there has been some renewed interest in leptoquarks. The reason is that these particles seem well equipped to address the hottest topic of this year - the B-meson anomalies.
There are at least 3 distinct B-meson anomalies that are currently intriguing:
- A few sigma (2 to 4, depending on whom you ask) deviation in the differential distributions of B → K*μμ decays,
- a 2.6 sigma violation of lepton flavor universality in B → Kμμ vs B → Kee decays,
- a 3.5 sigma violation of lepton flavor universality, but this time in B → Dτν vs B → Dμν decays.
If both λb and λs (the leptoquark's Yukawa couplings to muons together with b and s quarks, respectively) are non-zero, then a tree-level leptoquark exchange can mediate the b-quark decay b → s μ μ. This contribution adds to the Standard Model amplitudes mediated by loops of W bosons, and thus affects the B-meson observables. It turns out that the first two anomalies listed above can be fit if the leptoquark mass is in the 1-50 TeV range, depending on the magnitude of λb and λs.
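In effective-operator language, integrating out the heavy leptoquark generates, schematically (the precise Dirac structure depends on the leptoquark's quantum numbers), something like

L_eff ~ (λb λs / M_LQ^2) (s̄ Γ b)(μ̄ Γ' μ) + h.c.,

so the B-meson observables only probe the combination λb λs / M_LQ^2, which is why the preferred mass can lie anywhere between 1 and 50 TeV depending on the size of the couplings.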
The third anomaly above can also be easily explained by leptoquarks. One example from this paper is a leptoquark transforming as (3,1,-1/3) under the Standard Model gauge group and coupling to quark-lepton pairs via Yukawa interactions.
This particle contributes to b → c τ ν, adding to the tree-level W boson contribution, and is capable of explaining the apparent excess of semi-leptonic B meson decays into D mesons and tau leptons observed by the BaBar, Belle, and LHCb experiments. The difference from the previous case is that this leptoquark has to be less massive, closer to the TeV scale, because it has to compete with a tree-level contribution in the Standard Model.
There are more kinds of leptoquarks with different charges that allow for Yukawa couplings to matter. Some of them could also explain the 3 sigma discrepancy between the measured muon anomalous magnetic moment and the Standard Model prediction. Actually, a recent paper argues that the (3,1,-1/3) leptoquark discussed above can explain all the B-meson and muon g-2 anomalies simultaneously, through a combination of tree-level and loop effects. In any case, this is something to look out for in this and next year's data. If a leptoquark is indeed the culprit behind the B → Dτν excess, it should be within reach of the 13 TeV run (for the first two anomalies it may well be too heavy to produce at the LHC). The current reach for leptoquarks is up to about 1 TeV in mass (depending strongly on model details), see e.g. the recent ATLAS and CMS analyses. So far these searches have attracted little public interest, but that may change soon...
Thursday, 12 November 2015
A year at 13 TeV
A week ago the LHC finished the 2015 run of 13 TeV proton collisions. The counter in ATLAS stopped exactly at 4 inverse femtobarns. CMS reports just 10% less, though it is not clear what fraction of these data was collected with their magnet on (probably about a half). Anyway, it should have been better, it could have been worse... 4 fb-1 is one fifth of what ATLAS and CMS collected in the glorious year 2012. On the other hand, the higher collision energy in 2015 translates into larger production cross sections, even for particles within the kinematic reach of the 8 TeV collisions. How this trade-off works in practice depends on the process studied. A few examples are shown in the plot below.
We see that, for processes initiated by collisions of a quark inside one proton with an antiquark inside the other proton, the cross section gain is the least favorable. Still, for hypothetical resonances heavier than ~1.7 TeV, more signal events were produced in the 2015 run than in the previous one. For example, for a 2 TeV W-prime resonance, possibly observed by ATLAS in the 8 TeV data, the net gain is 50%, corresponding to roughly 15 events predicted in the 13 TeV data. However, the plot does not tell the whole story, because the backgrounds have increased as well. Moreover, when the main background originates from gluon-gluon collisions (as is the case for the W-prime search in the hadronic channel), it grows faster than the signal. Thus, if the 2 TeV W' is really there, the significance of the signal in the 13 TeV data should be comparable to that in the 8 TeV data in spite of the larger event rate. That will not be enough to fully clarify the situation, but the new data may make the story much more exciting if the excess reappears; or much less exciting if it does not... When backgrounds are not an issue (for example, for high-mass dilepton resonances) the improvement in this year's data should be more spectacular.
We also see that, for new physics processes initiated by collisions of a gluon in one proton with a gluon in the other proton, the 13 TeV run is superior everywhere above the TeV scale, and the signal enhancement is more spectacular. For example, at 2 TeV one gains a factor of 3 in signal rate. Therefore, models where the ATLAS diboson excess is explained via a Higgs-like scalar resonance will be tested very soon. The reach will also be extended for other hypothetical particles pair-produced in gluon collisions, such as gluinos in the minimal supersymmetric model. The current lower limit on the gluino mass obtained from the 8 TeV run is m ≳ 1.4 TeV (for decoupled squarks and a massless neutralino). For this mass, the signal gain in the 2015 run is roughly a factor of 6. Hence we can expect the gluino mass limits to be pushed upwards soon, by about 200 GeV or so.
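As a sketch, the net gains quoted above simply follow from multiplying the cross-section gain by the luminosity ratio; the gain factors below (~7.5 for quark-antiquark and ~15 for gluon-gluon production at 2 TeV) are assumed, approximate numbers chosen to be consistent with the net gains in the text.

```python
# Net gain in produced signal events between the 8 TeV and 13 TeV runs:
# (cross-section gain) x (luminosity ratio). Gain factors are assumptions.
lumi_ratio = 4.0 / 20.0        # 2015 luminosity / run-1 luminosity

gains = {"qqbar resonance at 2 TeV": 7.5,   # assumed 8->13 TeV gain
         "gg production at 2 TeV": 15.0}    # assumed 8->13 TeV gain

for process, xsec_gain in gains.items():
    print(f"{process}: net gain ~ {xsec_gain * lumi_ratio:.1f}")
# qqbar: ~1.5 (the 50% gain for the 2 TeV W'), gg: ~3 (the factor of 3 above)
```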
Summarizing, we have a right to expect some interesting results during this winter break. The chances for a discovery in this year's data are non-zero, and the chances for tantalizing hints of new physics (whether a real thing or a background fluctuation) are considerable. Limits on certain imaginary particles will be somewhat improved. However, contrary to my hopes/fears, this year is not yet the decisive one for particle physics. The next one will be.
Saturday, 26 September 2015
Weekend Plot: celebration of a femtobarn
The LHC run-2 has reached the psychologically important point where the integrated luminosity exceeds one inverse femtobarn. To celebrate this event, here is a plot showing the ratio of the number of hypothetical resonances produced so far in run-2 and in run-1 collisions, as a function of the resonance mass:
In run-1 at 8 TeV, ATLAS and CMS collected around 20 fb-1. The amount of 13 TeV data is currently 1/20 of that, but the cross section for producing hypothetical TeV-scale particles is much larger at the higher energy. For heavy enough particles the gain in cross section exceeds a factor of 20, which means that run-2 now probes previously unexplored parameter space (this simplistic argument ignores the fact that backgrounds are also larger at 13 TeV, but it's approximately correct at very high masses where backgrounds are small). Currently, the turning point is about 2.7 TeV for resonances produced, at the fundamental level, in quark-antiquark collisions, and somewhat below that for those produced in gluon-gluon collisions. The current plan is to continue the physics run till early November which, at this pace, should give us around 3 fb-1 to brood upon during the winter break. This means that the 2015 run will stop short of sorting out the existence of the 2 TeV diboson resonance indicated by the run-1 data. Unless, of course, the physics run is extended at the expense of the heavy-ion collisions scheduled for November ;)
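The break-even condition is simply that the 13/8 TeV cross-section gain compensates the luminosity deficit; a minimal sketch:

```python
# Run-2 probes new territory once the 13/8 TeV cross-section gain exceeds
# the luminosity deficit. With ~1 fb^-1 at 13 TeV vs ~20 fb^-1 at 8 TeV:
lumi_run2, lumi_run1 = 1.0, 20.0          # fb^-1

required_gain = lumi_run1 / lumi_run2
print(f"required cross-section gain: {required_gain:.0f}x")   # 20x
# For quark-antiquark production this gain is reached around m ~ 2.7 TeV,
# the "turning point" quoted above; for gluon-gluon production it happens
# at somewhat lower masses.
```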
Saturday, 12 September 2015
What can we learn from LHC Higgs combination
Recently, ATLAS and CMS released the first combination of their Higgs results. Of course, one should not expect any big news here: a combination of two datasets that each agree very well with the Standard Model predictions has to agree very well with the Standard Model predictions... However, it is interesting to ask how the new results change, at the quantitative level, our constraints on the Higgs boson couplings to matter.
First, the experiments quote the overall signal strength μ, which measures how many Higgs events were detected at the LHC in all possible production and decay channels compared to the expectation in the Standard Model. The latter, by definition, is μ=1. Now, if you had been too impatient to wait for the official combination, you could have made a naive one using the previous ATLAS (μ=1.18±0.14) and CMS (μ=1±0.14) results. Assuming the errors are Gaussian and uncorrelated, one obtains this way the combined μ=1.09±0.10. Instead, the true number is (drum roll)
So, the official and naive numbers are practically the same. This result puts important constraints on certain models of new physics. One important corollary is that the Higgs boson branching fraction to invisible (or any undetected exotic) decays is limited as Br(h → invisible) ≤ 13% at 95% confidence level, assuming the Higgs production is not affected by new physics.
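Incidentally, the naive combination above is just an inverse-variance weighted average of the two quoted numbers; spelled out as a sketch:

```python
# Naive (inverse-variance weighted) combination of the ATLAS and CMS
# signal strengths, assuming Gaussian and uncorrelated errors.
measurements = [(1.18, 0.14), (1.00, 0.14)]   # (central value, error)

weights = [1.0 / err**2 for _, err in measurements]
mu = sum(w * m for (m, _), w in zip(measurements, weights)) / sum(weights)
err = (1.0 / sum(weights)) ** 0.5
print(f"naive combination: mu = {mu:.2f} +- {err:.2f}")   # 1.09 +- 0.10
```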
From the fact that, for the overall signal strength, the naive and official combinations coincide one should not conclude that the work ATLAS and CMS have done together is useless. As one can see above, the statistical and systematic errors are comparable for that measurement, so a naive combination is not guaranteed to work. It so happens that, in this particular case, the multiple nuisance parameters considered in the analysis pull essentially in random directions. But it could well have been different. Indeed, the more one goes into the details, the more relevant the impact of the official combination becomes. For the signal strengths measured in particular final states of the Higgs decay the differences are more pronounced:
One can see that the naive combination somewhat underestimates the errors. Moreover, for the WW final state the central value is shifted by half a sigma (this is mainly because, in this channel, the individual ATLAS and CMS measurements that go into the combination seem to differ from the previously published ones). The difference is even more clearly visible in 2-dimensional fits, where the Higgs production cross sections via gluon fusion (ggf) and vector boson fusion (vbf) are treated as free parameters. This plot compares the regions preferred at 68% confidence level by the official and naive combinations:
There is a significant shift of the WW and also of the ττ ellipse. All in all, the LHC Higgs combination brings no revolution, but it allows one to obtain more precise and more reliable constraints on some new physics models. The more detailed information is released, the more useful the combined results become.
Sunday, 30 August 2015
Weekend plot: SUSY limits rehashed
Lake Tahoe is famous for preserving dead bodies in good condition over many years, so it is a natural place to organize the SUSY conference. As a tribute to this event, here is a plot from a recent ATLAS meta-analysis:
It shows the constraints on the gluino and the lightest neutralino masses in the pMSSM. Usually, the most transparent way to present experimental limits on supersymmetry is via simplified models. This consists in picking two or more particles out of the MSSM zoo and assuming that they are the only ones playing a role in the analyzed process. For example, a popular simplified model has a gluino and a stable neutralino interacting via an effective quark-gluino-antiquark-neutralino coupling. In this model, gluino pairs are produced at the LHC through their couplings to ordinary gluons, and each gluino then promptly decays to 2 quarks and a neutralino via the effective coupling. This shows up in the detector as 4 or more jets and the missing energy carried off by the neutralinos. Within this simplified model, one can thus interpret the LHC multi-jet + missing energy data as constraints on 2 parameters: the gluino mass and the lightest neutralino mass. One result of this analysis is that, for a massless neutralino, the gluino mass is constrained to be larger than about 1.4 TeV, see the white line in the plot.
A non-trivial question is what happens to these limits if one starts to fiddle with the remaining hundred or so parameters of the MSSM. ATLAS tackles this question in the framework of the pMSSM, which is a version of the MSSM where all flavor- and CP-violating parameters are set to zero. In the resulting 19-dimensional parameter space, ATLAS picks a large number of points that reproduce the correct Higgs mass and are consistent with various precision measurements. They then check what fraction of the points with a given m_gluino and m_neutralino survives the constraints from all ATLAS supersymmetry searches so far. Of course, the results depend on how the parameter space is sampled, but we nevertheless get a feeling for how robust the limits obtained in simplified models are. It is interesting that the gluino mass limits turn out to be quite robust. From the plot one can see that, for a light neutralino, it is difficult to live with m_gluino < 1.4 TeV, and that there are no surviving points with m_gluino < 1.1 TeV. Similar conclusions do not hold for all simplified models; e.g., the limits on squark masses in simplified models can be relaxed considerably by going to the larger parameter space of the pMSSM. Another thing worth noticing is that the blind spot near the m_gluino=m_neutralino diagonal is not really there: it is covered by the ATLAS monojet searches.
The LHC run-2 is going slow, so we still have some time to play with the run-1 data. See the ATLAS paper for many more plots. New stronger limits on supersymmetry are not expected before next summer.
Saturday, 15 August 2015
Weekend plot: ATLAS weighs in on Higgs to Tau Mu
After a long summer hiatus, here is a simple warm-up plot:
It displays the results of the ATLAS and CMS searches for h→τμ decays, together with their naive combination. The LHC collaborations have already observed Higgs boson decays into two τ leptons, and should be able to pinpoint h→μμ in run-2. However, h→τμ decays (and lepton flavor violation in general) are forbidden in the Standard Model, so a detection would be evidence of exciting new physics around the corner. Last summer, CMS came up with their 8 TeV result showing a 2.4 sigma hint of a signal. Most likely, this is just another entry in the long list of statistical fluctuations in the LHC run-1 data. Nevertheless, the CMS result is quite intriguing, especially in connection with the LHCb hints of lepton flavor violation in B-meson decays. Therefore, we have been waiting impatiently for a word from ATLAS. ATLAS is taking its time, but they have finally published the first chunk of the result, based on hadronic tau decays. Unfortunately, it is very inconclusive. It shows a small 1 sigma upward fluctuation, hence it does not kill the CMS hint. At the same time, the combined significance of the h→τμ signal increases only marginally, up to 2.6 sigma.
So, we are still in limbo. In the near future, ATLAS should reveal the 8 TeV h→τμ measurement with leptonic tau decays. This may clarify the situation, as the fully leptonic channel is more sensitive (at least, this is the case in the CMS analysis). But it is possible that for the final clarification we'll have to wait 2 more years, until enough 13 TeV data is analyzed.
Monday, 29 June 2015
Sit down and relaxion
New ideas are rare in particle physics these days. Solutions to the naturalness problem of the Higgs mass are true collector's items. For these reasons, the new mechanism addressing the naturalness problem via cosmological relaxation has stirred a lot of interest in the community. There's already an article explaining the idea in popular terms. Below, I will give you a more technical introduction.
In the Standard Model, the W and Z bosons and the fermions get their masses via the Brout-Englert-Higgs mechanism. To this end, the Lagrangian contains a scalar field H with a negative mass squared, V = - m^2 |H|^2. We know that the value of the parameter m is around 90 GeV - the Higgs boson mass divided by the square root of 2. In quantum field theory, the mass of a scalar particle is expected to be near the cut-off scale M of the theory, unless there's a symmetry protecting it from quantum corrections. Thus, m much smaller than M, without any reason or symmetry principle, constitutes the naturalness problem. The dominant paradigm has therefore been that, around the energy scale of 100 GeV, the Standard Model must be replaced by a new theory in which the parameter m is protected from quantum corrections. We know several mechanisms that could potentially protect the Higgs mass: supersymmetry, Higgs compositeness, the Goldstone mechanism, extra-dimensional gauge symmetry, and conformal symmetry. However, according to experiment, none of them seems to be realized at the weak scale; therefore, we need to accept that nature is fine-tuned (e.g. susy is just around the corner), or to seek solace in religion (e.g. anthropics). Or to find a new solution to the naturalness problem: one that is not fine-tuned and is consistent with experimental data.
Relaxation is a genuinely new solution, even if a somewhat contrived one. It is based on the following ingredients:
- The Higgs mass term in the potential is V = M^2 |H|^2. That is to say, the magnitude of the mass term is close to the cut-off of the theory, as suggested by naturalness arguments.
- The Higgs field is coupled to a new scalar field - the relaxion - whose vacuum expectation value is time-dependent in the early universe, effectively changing the Higgs mass squared during its evolution.
- When the mass squared turns negative and electroweak symmetry is broken, a back-reaction mechanism should prevent further time evolution of the relaxion, so that the Higgs mass term is frozen at a seemingly unnatural value.
Then the story goes as follows. The axion Φ starts at a large value, such that the Higgs mass term is positive and there is no electroweak symmetry breaking. During inflation its value slowly decreases. Once gΦ < M^2, electroweak symmetry breaking is triggered and the Higgs field acquires a vacuum expectation value. The crucial point is that the height of the axion potential Λ depends on the light quark masses, which in turn depend on the Higgs expectation value v. As the relaxion evolves, v increases, and Λ increases proportionally, which provides the desired back-reaction. At some point, the slope of the axion potential is neutralized by the rising Λ, and the Higgs expectation value freezes in. The question is now quantitative: is it possible to arrange the freeze-in to happen at a value of v well below the cut-off scale M? It turns out the answer is yes, at the cost of choosing strange (though not technically unnatural) theory parameters. In particular, the dimensionful coupling g between the relaxion and the Higgs has to be less than 10^-20 GeV (for a cut-off scale larger than 10 TeV), inflation has to last for at least 10^40 e-folds, and the Hubble scale during inflation has to be smaller than the QCD scale.
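For orientation, the schematic scalar potential in this QCD-axion toy model (with signs and order-one factors glossed over, so take it as a sketch rather than the precise expression of the original paper) is

V(H, Φ) ≈ (gΦ - M^2)|H|^2 + g M^2 Φ + Λ^4(v) cos(Φ/f),

where the first term is the scanned Higgs mass term, the second is the slope that drives Φ to decrease during inflation, and the last is the axion barrier whose height Λ^4(v) grows with the Higgs expectation value v (through the light quark masses), eventually stopping the evolution.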
The toy model above ultimately fails. Normally, the QCD axion is introduced so that its expectation value cancels the CP-violating θ-term in the Standard Model Lagrangian. Here, however, it is stabilized at a value determined by its coupling to the Higgs field. Therefore, in the toy model, the axion effectively generates an order-one θ-term, in conflict with the experimental bound θ < 10^-10. Nevertheless, the same mechanism can be implemented in a realistic model. One possibility is to add new QCD-like interactions with their own axion playing the relaxion role. In addition, one needs new "quarks" charged under the new strong interactions. Their masses have to be sensitive to the electroweak scale v, thus providing the back-reaction on the axion potential that terminates its evolution. In such a model the quantitative details are a bit different than in the QCD axion toy model, but the "strangeness" of the parameters persists in every model constructed so far. In particular, the very low scale of inflation required by the relaxation mechanism is worrisome. Could it be that the naturalness problem is just swept into the realm of the poorly understood physics of inflation? The ultimate verdict thus depends on whether a complete and healthy model incorporating both relaxation and inflation can be constructed.
Certainly TBC.
Thanks to Brian for a great tutorial.
Saturday, 13 June 2015
On the LHC diboson excess
The ATLAS diboson resonance search showing a 3.4 sigma excess near 2 TeV has stirred some interest. This is understandable: 3 sigma does not grow on trees, and moreover CMS also reported anomalies in related analyses. Therefore it is worth looking at these searches in a bit more detail in order to gauge how excited we should be.
The ATLAS search is actually a dijet search: it focuses on events with two very energetic jets of hadrons. More often than not, W and Z bosons decay to quarks. When a TeV-scale resonance decays to electroweak bosons, the latter, by energy conservation, have to move with large velocities. As a consequence, the 2 quarks from each W or Z boson decay are very collimated and are seen as a single jet in the detector. Therefore, ATLAS looks for dijet events where 1) the mass of each jet is close to that of the W (80±13 GeV) or the Z (91±13 GeV), and 2) the invariant mass of the dijet pair is above 1 TeV. Furthermore, they look into the substructure of the jets, so as to identify the ones that look consistent with W or Z decays. After all this work, most of the events still originate from ordinary QCD production of quarks and gluons, which gives a smooth background falling with the dijet invariant mass. If LHC collisions lead to the production of a new particle that decays to WW, WZ, or ZZ final states, it should show up as a bump on top of the QCD background. What ATLAS observes is this:
There is a bump near 2 TeV, which could indicate the existence of a particle decaying to WW and/or WZ and/or ZZ. One important thing to be aware of is that this search cannot distinguish well between the above 3 diboson states. The difference between W and Z masses is only 10 GeV, and the jet mass windows used in the search for W and Z partly overlap. In fact, 20% of the events fall into all 3 diboson categories. For all we know, the excess could be in just one final state, say WZ, and simply feed into the other two due to the overlapping selection criteria.
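To illustrate the overlap, here is a toy version of the jet-mass window logic (the ±13 GeV windows are the ones quoted above; the real tagging also uses jet substructure and is far more involved):

```python
# Toy illustration of why the W and Z jet-mass windows overlap.
# Windows are the +-13 GeV ranges quoted in the text.
W_WINDOW = (80.0 - 13.0, 80.0 + 13.0)   # 67-93 GeV
Z_WINDOW = (91.0 - 13.0, 91.0 + 13.0)   # 78-104 GeV

def boson_tags(jet_mass):
    """Return the boson hypotheses compatible with a given jet mass."""
    tags = []
    if W_WINDOW[0] <= jet_mass <= W_WINDOW[1]:
        tags.append("W")
    if Z_WINDOW[0] <= jet_mass <= Z_WINDOW[1]:
        tags.append("Z")
    return tags

for m in (72.0, 85.0, 100.0):
    print(f"jet mass {m:5.1f} GeV -> {boson_tags(m)}")
# A jet with mass between 78 and 93 GeV is tagged as both W and Z, which is
# why the same event can end up in the WW, WZ, and ZZ categories at once.
```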
Given the number of searches that ATLAS and CMS have made, 3 sigma fluctuations of the background should happen a few times in the LHC run-1 just by sheer chance. The interest in the ATLAS excess is, however, amplified by the fact that diboson searches in CMS also show anomalies (albeit smaller) just below 2 TeV. This can be clearly seen in this plot with limits on the Randall-Sundrum graviton excitation, which is one particular model leading to diboson resonances. As W and Z bosons sometimes decay to, respectively, one and two charged leptons, diboson resonances can be searched for not only via dijets but also in final states with one or two leptons. One can see that, in CMS, the ZZ dilepton search (blue line), the WW/ZZ dijet search (green line), and the WW/WZ one-lepton search (red line) all report a small (between 1 and 2 sigma) excess around 1.8 TeV. To make things even more interesting, the CMS search for WH resonances returns 3 events clustering at 1.8 TeV, where the Standard Model background is very small (see Tommaso's post). Could the ATLAS and CMS events be due to the same exotic physics?
Unfortunately, building a model explaining all the diboson data is not easy. Suffice to say that the ATLAS excess has been out for a week and there isn't yet any serious ambulance-chasing paper on arXiv. One challenge is the event rate. To fit the excess, the resonance should be produced with a cross section of order 10 femtobarns. This requires the new particle to couple quite strongly to light quarks (or gluons), at least as strongly as the W and Z bosons. At the same time, it should remain a narrow resonance decaying dominantly to dibosons. Furthermore, in concrete models, a sizable coupling to electroweak gauge bosons will get you in trouble with electroweak precision tests.
However, there is an even bigger problem, which can also be seen in the plot above. Although the excesses in CMS occur at roughly the same mass, they are not compatible when it comes to the cross section. In particular, the limits from the single-lepton search are not consistent with the new-particle interpretation of the excesses in the dijet and dilepton searches, at least in the context of the Randall-Sundrum graviton model. Moreover, the limits from the CMS one-lepton search are grossly inconsistent with the diboson interpretation of the ATLAS excess! In order to believe that the ATLAS 3 sigma excess is real, one has to move to much more baroque models. One possibility is that the dijets observed by ATLAS do not originate from electroweak bosons, but rather from an exotic particle with a similar mass. Another possibility is that the resonance decays only to a pair of Z bosons and not to W bosons, in which case the CMS limits are weaker; but I'm not sure if there exist consistent models with this property.
My conclusion... For sure this is something to watch in the early run-2. If this is real, it should show up clearly in both experiments already this year. However, due to the inconsistencies between the different search channels and the theoretical challenges, there's little reason to get excited yet.
Thanks to Chris for digging out the CMS plot.
The ATLAS one is actually a dijet search: it focuses on events with two very energetic jets of hadrons. More often than not, W and Z boson decay to quarks. When a TeV-scale resonance decays to electroweak bosons, the latter, by energy conservation, have to move with large velocities. As a consequence, the 2 quarks from W or Z boson decays will be very collimated and will be seen as a single jet in the detector. Therefore, ATLAS looks for dijet events where 1) the mass of each jet is close to that of W (80±13 GeV) or Z (91±13 GeV), and 2) the invariant mass of the dijet pair is above 1 TeV. Furthermore, they look into the substructure of the jets, so as to identify the ones that look consistent with W or Z decays. After all this work, most of the events still originate from ordinary QCD production of quarks and gluons, which gives a smooth background falling with the dijet invariant mass. If LHC collisions lead to a production of a new particle that decays to WW, WZ, or ZZ final states, it should show as a bump on top of the QCD background. ATLAS observes is this:
There is a bump near 2 TeV, which could indicate the existence of a particle decaying to WW and/or WZ and/or ZZ. One important thing to be aware of is that this search cannot distinguish well between the above 3 diboson states. The difference between W and Z masses is only 10 GeV, and the jet mass windows used in the search for W and Z partly overlap. In fact, 20% of the events fall into all 3 diboson categories. For all we know, the excess could be in just one final state, say WZ, and simply feed into the other two due to the overlapping selection criteria.
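As a quick cross-check of the collimation argument above, one can use the standard rule of thumb for the opening angle in a two-body decay of a boosted particle (a generic estimate, not anything specific to the ATLAS analysis):

$$ \Delta R_{q\bar q} \;\approx\; \frac{2\, m_{W/Z}}{p_T}, \qquad p_T \approx \frac{M_{\rm res}}{2} \approx 1~{\rm TeV} \;\;\Rightarrow\;\; \Delta R_{q\bar q} \approx 0.16\text{--}0.18, $$

which is well below the size of a typical fat jet, so each boson is indeed reconstructed as a single jet.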
Given the number of searches that ATLAS and CMS have made, 3 sigma fluctuations of the background should happen a few times in the LHC run-1 just by sheer chance. The interest in the ATLAS excess is however amplified by the fact that diboson searches in CMS also show anomalies (albeit smaller) just below 2 TeV. This can be clearly seen in this plot with limits on the Randall-Sundrum graviton excitation, which is one particular model leading to diboson resonances. As W and Z bosons sometimes decay to, respectively, one and two charged leptons, diboson resonances can be searched for not only via dijets but also in final states with one or two leptons. One can see that, in CMS, the ZZ dilepton search (blue line), the WW/ZZ dijet search (green line), and the WW/WZ one-lepton search (red line) all report a small (between 1 and 2 sigma) excess around 1.8 TeV. To make things even more interesting, the CMS search for WH resonances returns 3 events clustering at 1.8 TeV where the standard model background is very small (see Tommaso's post). Could the ATLAS and CMS events be due to the same exotic physics?
Unfortunately, building a model explaining all the diboson data is not easy. Suffice it to say that the ATLAS excess has been out for a week and there isn't yet any serious ambulance-chasing paper on arXiv. One challenge is the event rate. To fit the excess, the resonance should be produced with a cross section of order 10 femtobarns. This requires the new particle to couple quite strongly to light quarks (or gluons), at least as strongly as the W and Z bosons do. At the same time, it should remain a narrow resonance decaying dominantly to dibosons. Furthermore, in concrete models, a sizable coupling to electroweak gauge bosons will get you in trouble with electroweak precision tests.
However, there is a yet bigger problem, which can also be seen in the plot above. Although the excesses in CMS occur roughly at the same mass, they are not compatible when it comes to the cross section. The limits from the single-lepton search are not consistent with the new-particle interpretation of the excess in the dijet and dilepton searches, at least in the context of the Randall-Sundrum graviton model. Moreover, the limits from the CMS one-lepton search are grossly inconsistent with the diboson interpretation of the ATLAS excess! In order to believe that the ATLAS 3 sigma excess is real one has to move to much more baroque models. One possibility is that the dijets observed by ATLAS do not originate from electroweak bosons, but rather from an exotic particle with a similar mass. Another possibility is that the resonance decays only to a pair of Z bosons and not to W bosons, in which case the CMS limits are weaker; but I'm not sure if there exist consistent models with this property.
My conclusion... This is certainly something to watch in the early run-2. If the signal is real, it should clearly show up in both experiments already this year. However, due to the inconsistencies between different search channels and the theoretical challenges, there's little reason to get excited yet.
Thanks to Chris for digging out the CMS plot.
Saturday, 30 May 2015
Weekend Plot: Higgs mass and SUSY
This weekend's plot shows the region in the stop mass and mixing space of the MSSM that reproduces the measured Higgs boson mass of 125 GeV:
Unlike in the Standard Model, in the minimal supersymmetric extension of the Standard Model (MSSM) the Higgs boson mass is not a free parameter; it can be calculated given all masses and couplings of the supersymmetric particles. At the lowest order, it is equal to the Z boson mass, 91 GeV (for large enough tanβ). To reconcile the predicted and the observed Higgs mass, one needs to invoke large loop corrections due to supersymmetry breaking. These are dominated by the contribution of the top quark and its 2 scalar partners (stops), which couple most strongly of all particles to the Higgs. As can be seen in the plot above, the stop mass preferred by the Higgs mass measurement is around 10 TeV. With a little bit of conspiracy, if the mixing between the two stops is just right, this can be lowered to about 2 TeV. In any case, this means that, as long as the MSSM is the correct theory, there is little chance to discover the stops at the LHC.
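To see where these numbers come from, it is enough to look at the commonly quoted one-loop approximation for the MSSM Higgs mass, dominated by the top/stop sector (a leading-log sketch only; the codes discussed below of course include much more than this):

$$ m_h^2 \;\simeq\; m_Z^2 \cos^2 2\beta \;+\; \frac{3\, m_t^4}{2\pi^2 v^2}\left[\,\ln\frac{M_S^2}{m_t^2} \;+\; \frac{X_t^2}{M_S^2}\left(1-\frac{X_t^2}{12\, M_S^2}\right)\right], $$

with v ≈ 246 GeV, M_S the geometric mean of the two stop masses, and X_t the stop mixing parameter. Pushing m_h up to 125 GeV requires the logarithm to be large (heavy stops), unless the mixing sits near the maximal point X_t ≈ √6 M_S.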
This conclusion may be surprising because previous calculations were painting a more optimistic picture. The results above are derived with the new SUSYHD code, which utilizes effective field theory techniques to compute the Higgs mass in the presence of heavy supersymmetric particles. Other frequently used codes, such as FeynHiggs or Suspect, obtain a significantly larger Higgs mass for the same supersymmetric spectrum, especially near the maximal mixing point. The difference can be clearly seen in the plot to the right (called the boobs plot by some experts). Although there is a debate about the size of the error as estimated by SUSYHD, other effective theory calculations report the same central values.
Thursday, 21 May 2015
How long until it's interesting?
Last night, for the first time, the LHC collided particles at the center-of-mass energy of 13 TeV. Routine collisions should follow early in June. The plan is to collect 5-10 inverse femtobarn (fb-1) of data before winter comes, adding to the 25 fb-1 from Run-1. It's high time to dust off your Madgraph and tool up for what may be the most exciting time in particle physics in this century. But when exactly should we start getting excited? When should we start friending LHC experimentalists on facebook? When is the time to look over their shoulders for a glimpse of gluinos popping out of the detectors? One simple way to estimate the answer is to calculate the luminosity at which the number of particles produced at 13 TeV exceeds the number produced during the whole Run-1. This depends on the ratio of the production cross sections at 13 and 8 TeV, which is of course strongly dependent on the particle's mass and production mechanism. Moreover, the LHC discovery potential will also depend on how the background processes change, and on a host of other experimental issues. Nevertheless, let us forget for a moment about the fine print and calculate the ratio of 13 and 8 TeV cross sections for a few particles popular among the general public. This will give us a rough estimate of the threshold luminosity when things should get interesting.
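The arithmetic behind the numbers below is trivial: the 13 TeV dataset overtakes Run-1 once σ13×L13 > σ8×L8, that is at L13 ≈ L8/(σ13/σ8). Here is a minimal sketch of it (using the 25 fb-1 Run-1 figure quoted above; the output lands close to, though not exactly on, the rounded thresholds listed below):

```python
# Threshold 13 TeV luminosity at which the number of produced particles
# exceeds that of run-1: L_13 = L_run1 / (sigma_13 / sigma_8).

L_RUN1 = 25.0  # fb^-1 collected at 8 TeV in run-1 (figure quoted in the text)

# 13/8 TeV production cross-section ratios for the processes discussed below
ratios = {
    "Higgs boson": 2.3,
    "ttH production": 4.0,
    "300 GeV Higgs partner": 2.7,
    "800 GeV stops": 10.0,
    "3 TeV Z' boson": 18.0,
    "1.4 TeV gluino": 30.0,
}

for process, ratio in ratios.items():
    threshold = L_RUN1 / ratio  # fb^-1 at 13 TeV
    print(f"{process:>22}: ratio ~ {ratio:4.1f}  ->  threshold ~ {threshold:.1f} fb^-1")
```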
- Higgs boson: Ratio≈2.3; Luminosity≈10 fb-1.
Higgs physics will not be terribly exciting this year, with only a modest improvement of the couplings measurements expected.
- tth: Ratio≈4; Luminosity≈6 fb-1.
Nevertheless, for certain processes involving the Higgs boson the improvement may be a bit faster. In particular, the theoretically very important process of Higgs production in association with top quarks (tth) was on the verge of being detected in Run-1. If we're lucky, this year's data may tip the scale and provide evidence for a non-zero top Yukawa coupling.
- 300 GeV Higgs partner: Ratio≈2.7; Luminosity≈9 fb-1.
Not much hope for new scalars in the Higgs family this year.
- 800 GeV stops: Ratio≈10; Luminosity≈2 fb-1.
800 GeV is close to the current lower limit on the mass of a scalar top partner decaying to a top quark and a massless neutralino. In this case, one should remember that backgrounds also increase at 13 TeV, so the progress will be a bit slower than the above number suggests. Nevertheless, this year we will certainly explore new parameter space and make the naturalness problem even more severe. Similar conclusions hold for a fermionic top partner.
- 3 TeV Z' boson: Ratio≈18; Luminosity≈1.2 fb-1.
Getting interesting! Limits on Z' bosons decaying to leptons will be improved very soon; moreover, in this case background is not an issue.
- 1.4 TeV gluino: Ratio≈30; Luminosity≈0.7 fb-1.
If all goes well, better limits on gluinos can be delivered by the end of the summer!
In summary, the progress will be very fast for new heavy particles. In particular, for gluon-initiated production of TeV-scale particles, the first inverse femtobarn may already bring us into new territory. For lighter particles the progress will be slower, especially when backgrounds are difficult. On the other hand, precision physics, such as the Higgs couplings measurements, is unlikely to be in the spotlight this year.
Friday, 8 May 2015
Weekend plot: minimum BS conjecture
This weekend's plot completes last week's post:
It shows the phase diagram for models of natural electroweak symmetry breaking. These models can be characterized by 2 quantum numbers:
- B [Baroqueness], describing how complicated the model is relative to the standard model;
- S [Specialness], describing the fine-tuning needed to achieve electroweak symmetry breaking with the observed Higgs boson mass.
To allow for a fair comparison, in all models the cut-off scale is fixed to Λ=10 TeV. The standard model (SM) has, by definition, B=1, while S≈(Λ/mZ)^2≈10^4. The principle of naturalness postulates that S should be much smaller, S ≲ 10. This requires introducing new hypothetical particles and interactions, therefore inevitably increasing B.
The most popular approach to reducing S is by introducing supersymmetry. The minimal supersymmetric standard model (MSSM) does not make fine-tuning better than 10^3 in the bulk of its parameter space. To improve on that, one needs to introduce large A-terms (aMSSM), or R-parity breaking interactions (RPV), or an additional scalar (NMSSM). Another way to decrease S is achieved in models where the Higgs arises as a composite Goldstone boson of new strong interactions. Unfortunately, in all of those models, S cannot be smaller than 10^2 due to phenomenological constraints from colliders. To suppress S even further, one has to resort to so-called neutral naturalness, where new particles beyond the standard model are not charged under the SU(3) color group. The twin Higgs - the simplest model of neutral naturalness - can achieve S≈10 at the cost of introducing a whole parallel mirror world.
The parametrization proposed here leads to a striking observation. While one can increase B indefinitely (many examples have been proposed in the literature), for a given S there seems to be a minimum value of B below which no models exist. In fact, the conjecture is that the product B*S is bounded from below:
BS ≳ 10^4.
One robust prediction of the minimum BS conjecture is the existence of a very complicated (B=10^4) yet to be discovered model with no fine-tuning at all. The take-home message is that one should always try to minimize BS, even if for fundamental reasons it cannot be avoided completely ;)
Wednesday, 6 May 2015
Naturalness' last bunker
Last week Symmetry Breaking ran an article entitled "Natural SUSY's last stand". That title is a bit misleading, as it makes you think of General Custer on the eve of the Battle of the Little Bighorn, whereas natural supersymmetry has long been dead, its body torn apart by vultures. Nevertheless, it is interesting to ask a more general question: are there any natural theories that survived? And if yes, what can we learn about them from the LHC run-2?
For over 30 years naturalness has been the guiding principle in theoretical particle physics. The standard model by itself has no naturalness problem: it contains 19 free parameters that are simply not calculable and have to be taken from experiment. The problem arises because we believe the standard model is eventually embedded in a more fundamental theory where all these parameters, including the Higgs boson mass, are calculable. Once that is done, the calculated Higgs mass will typically be proportional to the mass of the heaviest state in that theory, as a result of quantum corrections. The exception to this rule is when the fundamental theory possesses a symmetry forbidding the Higgs mass, in which case the mass will be proportional to the scale where the symmetry becomes manifest. Given the Higgs mass is 125 GeV, the concept of naturalness leads to the following prediction: 1) new particles beyond the standard model should appear around the mass scale of 100-300 GeV, and 2) the new theory with the new particles should have a protection mechanism for the Higgs mass built in.
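For orientation, the textbook estimate behind point 1): in a theory with a cutoff (or heavy new states) at the scale Λ, the top quark loop alone shifts the Higgs mass parameter by roughly

$$ \delta m_h^2 \;\approx\; -\frac{3\, y_t^2}{8\pi^2}\,\Lambda^2 , $$

which already exceeds the observed (125 GeV)^2 for Λ around 700 GeV. Keeping the required cancellation mild therefore calls for new states not far above the weak scale; the precise number depends on how much tuning one is willing to tolerate.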
There are two main realizations of this idea. In supersymmetry, the protection is provided by opposite-spin partners of the known particles. In particular, the top quark is accompanied by stop quarks, which are spin-0 scalars but otherwise have the same color and electric charge as the top quark. Another protection mechanism can be provided by a spontaneously broken global symmetry, usually realized in the context of new strong interactions from which the Higgs arises as a composite particle. In that case, the protection is provided by same-spin partners; for example, the top quark has a fermionic partner with the same quantum numbers but a different mass.
Both of these ideas are theoretically very attractive but are difficult to realize in practice. First of all, it is hard to understand how these 100 new partner particles could be hiding around the corner without leaving any trace in numerous precision experiments. But even if we were willing to believe in the universal conspiracy, the LHC run-1 was the final nail in the coffin. The point is that both of these scenarios make a very specific prediction: the existence of new particles with color charges around the weak scale. As the LHC is basically a quark and gluon collider, it can produce colored particles in large quantities. For example, for a 1 TeV gluino (the supersymmetric partner of the gluon) some 1000 pairs would already have been produced at the LHC. Thanks to the large production rate, the limits on colored partners are already quite stringent. For example, the LHC limits on the masses of gluinos and massive spin-1 gluon resonances extend well above 1 TeV, while for scalar and fermionic top partners the limits are not far below 1 TeV. This means that a conspiracy theory is not enough: in supersymmetry and composite Higgs one also has to accept a certain degree of fine-tuning, which means we don't even solve the problem that is the very motivation for these theories.
The reasoning above suggests a possible way out. What if naturalness could be realized without colored partners: without gluinos, stops, or heavy tops? The conspiracy problem would not go away, but at least we could avoid the stringent limits from the LHC. It turns out that theories with such a property do exist. They linger away from the mainstream, but recently they have been gaining popularity under the name of neutral naturalness. The reason for that is obvious: such theories may offer a nuclear bunker that will allow naturalness to survive beyond the LHC run-2.
The best known realization of neutral naturalness is the twin Higgs model. It assumes the existence of a mirror world, with mirror gluons, mirror top quarks, a mirror Higgs boson, etc., which is related to the standard model by an approximate parity symmetry. The parity gives rise to an accidental global symmetry that could protect the Higgs boson mass. At the technical level, the protection mechanism is similar to that in composite Higgs models, where standard model particles have partners with the same spins. The crucial difference, however, is that the mirror top quarks and mirror gluons are charged under the mirror color group, not the standard model color. As we don't have a mirror proton collider yet, the mirror partners are not produced in large quantities at the LHC. Therefore, they could well be as light as our top quark without violating any experimental bounds, and in agreement with the requirements of naturalness.
A robust prediction of twin-Higgs-like models is that the Higgs boson couplings to matter deviate from the standard model predictions, as a consequence of mixing with the mirror Higgs. The size of this deviation is of the same order as the fine-tuning in the theory; for example, order 10% deviations are expected when the fine-tuning is 1 in 10. This is perhaps the best motivation for precision Higgs studies: measuring the Higgs couplings with an accuracy better than 10% may invalidate or boost the idea. However, neutral naturalness also points us to experimental signals that are often very different from those in the popular models. For example, the mirror color interactions are expected to behave at low energies similarly to our QCD: there should be mirror mesons, baryons, glueballs. By construction, the Higgs boson must couple to the mirror world, and therefore it offers a portal via which the mirror hadronic junk can be produced and decay, which may lead to truly exotic signatures such as displaced jets. This underlines the importance of searching for exotic Higgs boson decays - very few such studies have been carried out by the LHC experiments so far. Finally, as has been speculated for a long time, dark matter may have something to do with the mirror world. Neutral naturalness provides a reason for the existence of the mirror world and an approximate parity symmetry relating it to the real world. It may be our best shot at understanding why the amounts of ordinary and dark matter in the Universe are equal up to a factor of 5 - something that arises as a complete accident in the usual WIMP dark matter scenario.
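Coming back to the coupling deviations mentioned at the beginning of the previous paragraph (parametrics only; the exact order-one factors depend on the particular construction): if f denotes the symmetry breaking scale of the twin sector, the couplings of the SM-like Higgs are rescaled roughly as

$$ \kappa \;\simeq\; \sqrt{1-\frac{v^2}{f^2}} \;\simeq\; 1-\frac{v^2}{2 f^2}, $$

while the residual fine-tuning scales like v^2/f^2 up to order-one factors. For f around 3 times the electroweak scale v, this translates into few-to-ten-percent coupling deviations, which is the target quoted above.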
There's no doubt that neutral naturalness is a desperate attempt to save natural electroweak symmetry breaking from the reality check, or at least to postpone the inevitable. Nevertheless, the existence of a mirror world is certainly a logical possibility. The recent resurgence of this scenario has led to identifying new interesting models, and new ways to search for them in experiment. The persistence of the naturalness principle may thus be turned into a positive force, as it may motivate better searches for hidden particles. It is possible that the LHC data hold the answer to the naturalness puzzle, but we will have to look deeper to extract it.
Sunday, 26 April 2015
Weekend plot: dark photon update
Here is a late weekend plot with new limits on the dark photon parameter space:
The dark photon is a hypothetical massive spin-1 boson mixing with the ordinary photon. The minimal model is fully characterized by just 2 parameters: the mass mA' and the mixing angle ε. This scenario is probed by several different experiments using completely different techniques. It is interesting to observe how quickly the experimental constraints have been improving in the recent years. The latest update appeared a month ago thanks to the NA48 collaboration. NA48/2 was an experiment a decade ago at CERN devoted to studying CP violation in kaons. Kaons can decay to neutral pions, and the latter can be recycled into a nice probe of dark photons. Most often, π0 decays to two photons. If the dark photon is lighter than 135 MeV, one of the photons can mix into an on-shell dark photon, which in turn can decay into an electron and a positron. Therefore, NA48 analyzed the π0 → γ e+ e- decays in their dataset. Such pion decays occur also in the Standard Model, with an off-shell photon instead of a dark photon in the intermediate state. However, the presence of the dark photon would produce a peak in the invariant mass spectrum of the e+ e- pair on top of the smooth Standard Model background. Failure to see a significant peak allows one to set limits on the dark photon parameter space, see the dripping blood region in the plot.
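For reference, the rate usually quoted for this process (a standard leading-order estimate; conventions vary slightly between papers) is

$$ \mathrm{BR}(\pi^0 \to \gamma A') \;\simeq\; 2\,\epsilon^2 \left(1-\frac{m_{A'}^2}{m_{\pi^0}^2}\right)^{3} \mathrm{BR}(\pi^0\to\gamma\gamma), $$

with the dark photon then decaying to e+e- essentially 100% of the time if it is lighter than two muons and has no invisible decay channels. The ε^2 scaling is also why improving the limit on the branching fraction by two orders of magnitude gains only one order of magnitude in ε.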
So, another cute experiment bites into the dark photon parameter space. After this update, one can robustly conclude that the mixing angle in the minimal model has to be less than 0.001 as long as the dark photon is lighter than 10 GeV. This is by itself not very revealing, because there is no theoretically preferred value of ε or mA'. However, one interesting consequence of the NA48 result is that it closes the window where the minimal model can explain the 3σ excess in the muon anomalous magnetic moment.
Friday, 17 April 2015
Antiprotons from AMS
This week the AMS collaboration released the long expected measurement of the cosmic ray antiproton spectrum. Antiprotons are produced in our galaxy in collisions of high-energy cosmic rays with interstellar matter, the so-called secondary production. Annihilation of dark matter could add more antiprotons on top of that background, which would modify the shape of the spectrum with respect to the prediction from the secondary production. Unlike for cosmic ray positrons, in this case there should be no significant primary production in astrophysical sources such as pulsars or supernovae. Thanks to this, antiprotons could in principle be a smoking gun of dark matter annihilation, or at least a powerful tool to constrain models of WIMP dark matter.
The new data from the AMS-02 detector extend the previous measurements from PAMELA up to 450 GeV and significantly reduce the experimental errors at high energies. Now, if you look at the promotional material, you may get the impression that a clear signal of dark matter has been observed. However, experts unanimously agree that the brown smudge in the plot above is just shit, rather than a range of predictions from the secondary production. At this point, there are certainly no serious hints for a dark matter contribution to the antiproton flux. A quantitative analysis of this issue appeared in a paper today. Predicting the antiproton spectrum is subject to large experimental uncertainties about the flux of cosmic ray protons and about the nuclear cross sections, as well as theoretical uncertainties inherent in models of cosmic ray propagation. The data and the predictions are compared in this Jamaican band plot. Apparently, the new AMS-02 data are situated near the upper end of the predicted range.
Thus, there is currently no hint of dark matter detection. However, the new data are extremely useful to constrain models of dark matter. New constraints on the annihilation cross section of dark matter are shown in the plot to the right. The most stringent limits apply to annihilation into b-quarks or into W bosons, which yield many antiprotons after decay and hadronization. The thermal production cross section - theoretically preferred in a large class of WIMP dark matter models - is, in the case of b-quarks, excluded for dark matter particle masses below 150 GeV. These results provide further constraints on models addressing the hooperon excess in the gamma ray emission from the galactic center.
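The "thermal" benchmark here is the usual relic-abundance estimate: for a standard WIMP freeze-out, the observed dark matter density corresponds to

$$ \Omega_\chi h^2 \;\approx\; 0.12 \times \frac{(2\text{-}3)\times 10^{-26}\ {\rm cm^3\,s^{-1}}}{\langle\sigma v\rangle}, $$

so the reference line drawn in such plots sits near ⟨σv⟩ ≈ 3×10^-26 cm^3/s (a back-of-the-envelope value; the precise number has a mild dependence on the dark matter mass).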
More experimental input will allow us to tune the models of cosmic ray propagation to better predict the background. That, in turn, should lead to more stringent limits on dark matter. Who knows... maybe a hint for dark matter annihilation will emerge one day from this data; although, given the uncertainties, it's unlikely to ever be a smoking gun.
Thanks to Marco for comments and plots.
Wednesday, 1 April 2015
What If, Part 1
This is the do-or-die year, so Résonaances will be dead serious. This year, no stupid jokes on April Fools' day: no Higgs in jail, no loose cables, no discovery of supersymmetry, or such. Instead, I'm starting with a new series "What If" inspired by XKCD. In this series I will answer questions that everyone is dying to know the answer to. The first of these questions is
If HEP bloggers were Muppets,
which Muppet would they be?
Here is the answer.
- Gonzo the Great: Lubos@Reference Frame (on odd-numbered days)
The one true uncompromising artist. Not treated seriously by other Muppets, but adored by chickens.
- Animal: Lubos@Reference Frame (on even-numbered days)
My favorite Muppet. Pure mayhem and destruction. Has only two modes: beat it, or eat it.
- Swedish Chef: Tommaso@Quantum Diaries Survivor
The Muppet with a penchant for experiment. No one understands what he says, but it's always amusing nonetheless.
- Kermit the Frog: Matt@Of Particular Significance
Born Muppet leader, though not clear if he really wants the job.
- Miss Piggy: Sabine@Backreaction
Not the only female Muppet, but certainly the best known. Admired for her stage talents, but most of all for her punch.
- Rowlf: Sean@Preposterous Universe
The real star and a one-Muppet orchestra. Impressive as an artist and as a comedian, though some complain he's gone to the dogs.
- Statler and Waldorf: Peter@Not Even Wrong
Constantly heckling other Muppets from the balcony, yet back every week for more.
- Fozzie Bear: Jester@Résonaances
Failed stand-up comedian. Always stressed that he may not be funny after all.
In preparation:
-If theoretical physicists were smurfs...
-If LHC experimentalists were Game of Thrones characters...
-If particle physicists lived in Middle-earth...
-If physicists were cast for Hobbit's dwarves...
and more.
Friday, 20 March 2015
LHCb: B-meson anomaly persists
Today LHCb released a new analysis of the angular distribution in the B0 → K*0(892) (→K+π-) μ+ μ- decays. In this 4-body decay process, the angles between the directions of flight of all the different particles can be measured as a function of the invariant mass q^2 of the di-muon pair. The results are summarized in terms of several form factors with imaginative names like P5', FL, etc. The interest in this particular decay comes from the fact that 2 years ago LHCb reported a large deviation from the standard model prediction in one q^2 region of one form factor called P5'. That measurement was based on 1 inverse femtobarn of data; today it was updated to the full 3 fb-1 of run-1 data. The news is that the anomaly persists in the q^2 region 4-8 GeV^2, see the plot. The central value moved a bit toward the standard model, but the statistical errors have shrunk as well. All in all, the significance of the anomaly is quoted as 3.7 sigma, the same as in the previous LHCb analysis. New physics that effectively induces new contributions to the 4-fermion operator (\bar b_L \gamma_\rho s_L)(\bar \mu \gamma^\rho \mu) can significantly improve the agreement with the data, see the blue line in the plot. The preference for new physics remains high, at the 4 sigma level, when this measurement is combined with other B-meson observables.
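For orientation, in the standard effective-Hamiltonian language this operator is the one multiplying the Wilson coefficient C9 (sketching the usual conventions here, up to normalization details):

$$ \mathcal{H}_{\rm eff} \;\supset\; -\frac{4 G_F}{\sqrt{2}}\, V_{tb} V_{ts}^{*}\, \frac{e^2}{16\pi^2}\, C_9\, (\bar s \gamma^\mu P_L b)(\bar\mu \gamma_\mu \mu) \;+\; {\rm h.c.}, $$

and the fits referred to above prefer a negative new-physics shift in C9 of order one relative to its standard model value.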
So how excited should we be? One thing we learned today is that the anomaly is unlikely to be a statistical fluctuation. However, the observable is not of the clean kind, as the measured angular distributions are susceptible to poorly known QCD effects. The significance depends a lot on what is assumed about these uncertainties, and experts wage ferocious battles about the numbers. See for example this paper where larger uncertainties are advocated, in which case the significance becomes negligible. Therefore, the deviation from the standard model is not yet convincing at this point. Other observables may tip the scale. Only if a consistent pattern of deviations emerges in several B-physics observables can we trumpet victory.
Plots borrowed from David Straub's talk in Moriond; see also the talk of Joaquim Matias with similar conclusions. David has a post with more details about the process and uncertainties. For a more popular write-up, see this article on Quanta Magazine.
Saturday, 14 March 2015
Weekend Plot: Fermi and more dwarfs
This weekend's plot comes from the recent paper of the Fermi collaboration:
It shows the limits on the cross section of dark matter annihilation into tau lepton pairs. The limits are obtained from gamma-ray observations of 15 dwarf galaxies over 6 years. Dwarf galaxies are satellites of the Milky Way made mostly of dark matter, with few stars in them, which makes them a clean environment to search for dark matter signals. This study is particularly interesting because it is sensitive to dark matter models that could explain the gamma-ray excess detected from the center of the Milky Way. Similar limits for the annihilation into b-quarks have already been shown before at conferences; in that case, the region favored by the Galactic center excess seems entirely excluded. Annihilation of 10 GeV dark matter into tau leptons could also explain the excess. As can be seen in the plot, in this case there is also large tension with the dwarf limits, although astrophysical uncertainties help to keep hopes alive.
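As a reminder of what is actually being constrained: for self-conjugate dark matter, the predicted gamma-ray flux from a dwarf is usually written as

$$ \frac{d\Phi_\gamma}{dE} \;=\; \frac{\langle\sigma v\rangle}{8\pi\, m_\chi^{2}}\,\frac{dN_\gamma}{dE}\; \times\; J, \qquad J \;=\; \int d\Omega \int_{\rm l.o.s.} \rho_\chi^{2}\, dl , $$

so the limit on ⟨σv⟩ from a given dwarf scales inversely with its J-factor, and stacking many dwarfs (or finding new ones with large J-factors) tightens the limits accordingly.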
Gamma-ray observations by Fermi will continue for another few years, and the limits will get stronger. But a faster way to increase the statistics may be to find more observation targets. Numerical simulations with vanilla WIMP dark matter predict a few hundred dwarfs around the Milky Way. Interestingly, a discovery of several new dwarf candidates was reported last week. This is an important development, as the total number of known dwarf galaxies now exceeds the number of dwarf characters in Peter Jackson movies. One of the candidates, known provisionally as DES J0335.6-5403 or Reticulum-2, has a large J-factor (the larger the better, much like the h-index). In fact, some gamma-ray excess around 1-10 GeV is observed from this source, and one paper last week even quantified its significance as ~4 astrosigma (or ~3 astrosigma in an alternative, more conservative analysis). However, in the Fermi analysis using the more recent Pass-8 photon reconstruction, the quoted significance is only 1.5 sigma. Moreover, the dark matter annihilation cross section required to fit the excess is excluded by an order of magnitude by the combined dwarf limits. Therefore, for the moment, the excess should not be taken seriously.