Sunday, 11 September 2016
Weekend Plot: update on WIMPs
There's been a lot of discussion on this blog about the LHC not finding new physics. I should, however, do justice to other experiments that also don't find new physics, often in a spectacular way. One area where this is happening is direct detection of WIMP dark matter. This weekend's plot summarizes the current limits on the spin-independent scattering cross section of dark matter particles on nucleons:
For large WIMP masses, currently the most successful detection technology is to fill up a tank with a ton of liquid xenon and wait for a passing dark matter particle to knock into one of the nuclei. Recently, we have had updates from two such experiments: LUX in the US, and PandaX in China, whose limits now cut below zeptobarn cross sections (1 zb = 10^-9 pb = 10^-45 cm^2). These two experiments are currently going head-to-head, but PandaX, being larger, will ultimately overtake LUX. Soon, however, it'll have to face a new fierce competitor: the XENON1T experiment, and the plot will have to be updated next year. Fortunately, we won't need to keep learning new prefixes for long. Once yoctobarn sensitivity is achieved by the experiments, we will hit the neutrino floor: the irreducible background from solar and atmospheric neutrinos (gray area at the bottom of the plot). This will make detecting a dark matter signal much more challenging, and will certainly slow down the progress for WIMP masses larger than ~5 GeV. For lower masses, the distance to the floor remains large. Xenon detectors run out of steam there, and another technology is needed, like the germanium detectors of CDMS and CDEX, or the CaWO4 crystals of CRESST. On this front, too, important progress is expected soon.
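If you want to keep the prefixes straight, the conversions above are easy to check; here's a minimal sketch in Python, using only the definition 1 barn = 10^-24 cm^2:

```python
# Cross-section unit bookkeeping: 1 barn = 1e-24 cm^2 by definition.
BARN_CM2 = 1e-24
prefixes = {"pico": 1e-12, "femto": 1e-15, "atto": 1e-18, "zepto": 1e-21, "yocto": 1e-24}

def to_cm2(value, prefix):
    """Convert a cross section given in (prefix)barn to cm^2."""
    return value * prefixes[prefix] * BARN_CM2

print(to_cm2(1, "zepto"))   # 1 zb = 1e-45 cm^2, i.e. 1e-9 pb
print(to_cm2(1, "yocto"))   # 1 yb = 1e-48 cm^2, roughly where the neutrino floor sits at high WIMP mass
```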
What does the theory say about when we will find dark matter? It is perfectly viable that the discovery is waiting for us just around the corner, in the remaining space above the neutrino floor, but currently there are no strong theoretical hints in favor of that possibility. Usually, dark matter experiments advertise that they're just beginning to explore the interesting parameter space predicted by theory models. This is not quite correct. If the WIMP were true to its name, that is to say if it were interacting via the weak force (meaning, coupled to the Z boson with order-1 strength), it would have an order 10 fb scattering cross section on neutrons. Unfortunately, that natural possibility was excluded back in the previous century. Years of experimental progress have shown that WIMPs, if they exist, must interact super-weakly with matter. For example, for a 100 GeV fermionic dark matter particle with a vector coupling g to the Z boson, the current limits imply g ≲ 10^-4. The coupling can be larger if the Higgs boson is the mediator of interactions between the dark and visible worlds, as the Higgs already couples very weakly to nucleons. This construction is, arguably, the most plausible one currently probed by direct detection experiments. For a scalar dark matter particle X with mass 0.1-1 TeV coupled to the Higgs via the interaction λ v h |X|^2, the experiments are currently probing the coupling λ in the 0.01-1 ballpark. In general, there's no theoretical lower limit on the dark matter coupling to nucleons. Nevertheless, the weak coupling implied by direct detection limits creates some tension with the thermal production paradigm, which requires a weak-scale (that is, order picobarn) annihilation cross section for dark matter particles. This tension needs to be resolved by more complicated model building, e.g. by arranging for resonant annihilation or for co-annihilation.
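Where does the "order picobarn" benchmark come from? A back-of-the-envelope sketch, assuming the standard textbook relic-abundance relation Ωh² ≈ 3×10^-27 cm³ s^-1 / ⟨σv⟩:

```python
# Back-of-the-envelope thermal relic check (textbook normalization assumed here):
# Omega_DM h^2 ~ 3e-27 cm^3/s / <sigma v>, with the observed value ~0.12.
OMEGA_H2_OBS = 0.12
sigma_v = 3e-27 / OMEGA_H2_OBS          # required <sigma v> in cm^3/s, ~2.5e-26
c = 3e10                                 # speed of light in cm/s
sigma_cm2 = sigma_v / c                  # cross section for relativistic normalization, in cm^2
sigma_pb = sigma_cm2 / 1e-36             # 1 pb = 1e-36 cm^2
print(f"<sigma v> ~ {sigma_v:.1e} cm^3/s  ->  sigma ~ {sigma_pb:.2f} pb")
# ~1 pb, i.e. a weak-scale annihilation cross section, as quoted above.
```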
Thursday, 1 September 2016
Next stop: tth
This was a summer of brutally dashed hopes for a quick discovery of many fundamental particles that we were imagining. For the time being we need to focus on the ones that actually exist, such as the Higgs boson. In the Run-1 of the LHC, the Higgs boson's existence and identity were firmly established, while its mass and basic properties were measured. The signal was observed with large significance in 4 different decay channels (γγ, ZZ*, WW*, ττ), and two different production modes (gluon fusion, vector-boson fusion) were isolated. Still, there remain many fine details to sort out. The realistic goal for the Run-2 is to pinpoint the following Higgs processes:
- (h→bb): Decays to b-quarks.
- (Vh): Associated production with W or Z boson.
- (tth): Associated production with top quarks.
It seems that the last objective may be achieved sooner than expected. The tth production process is very interesting theoretically, because its rate is proportional to the (square of the) Yukawa coupling between the Higgs boson and the top quark. Within the Standard Model, the value of this parameter is known to good accuracy, as it is related to the mass of the top quark. But that relation can be disrupted in models beyond the Standard Model, with the two-Higgs-doublet model and composite/little Higgs models serving as prominent examples. Thus, measurements of the top Yukawa coupling will provide a crucial piece of information about new physics.
In the Run-1, a not-so-small signal of tth production was observed by the ATLAS and CMS collaborations in several channels. Assuming that the Higgs decays have the same branching fractions as in the Standard Model, the tth signal strength normalized to the Standard Model prediction was estimated as
At face value, strong evidence for tth production was obtained already in the Run-1! This fact was not advertised by the collaborations because the measurement is not clean, due to the large number of top quarks produced by other processes at the LHC. The tth signal is thus a small blip on top of a huge background, and it's not excluded that some unaccounted-for systematic errors are skewing the measurements. The collaborations thus preferred to play it safe and wait for more data to be collected.
In the Run-2, with 13 TeV collisions, the tth production cross section is 4 times larger than in the Run-1, therefore the new data are coming at a fast pace. Both ATLAS and CMS presented their first Higgs results in early August, and the tth signal is only getting stronger. ATLAS showed their measurements in the γγ, WW/ττ, and bb final states of Higgs decay, as well as their combination:
Most channels display a signal-like excess, which is reflected by the Run-2 combination being 2.5 sigma away from zero. A similar picture is emerging in CMS, with 2-sigma signals in the γγ and WW/ττ channels. Naively combining all Run-1 and Run-2 results, one then finds
At face value, this is a discovery! Of course, this number should be treated with some caution because, due to large systematic errors, a naive Gaussian combination may not represent the true likelihood very well. Nevertheless, it indicates that, if all goes well, the discovery of the tth production mode should be officially announced in the near future, maybe even this year.
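For the curious, a "naive Gaussian combination" is just an inverse-variance weighted average; here is a sketch of the procedure, with placeholder numbers rather than the actual ATLAS/CMS values (those live in the plots above):

```python
import math

def combine(measurements):
    """Inverse-variance (naive Gaussian) combination of (mu, sigma) pairs."""
    weights = [1.0 / s**2 for _, s in measurements]
    mu = sum(w * m for (m, _), w in zip(measurements, weights)) / sum(weights)
    sigma = 1.0 / math.sqrt(sum(weights))
    return mu, sigma

# Placeholder tth signal strengths mu +- sigma (NOT the real ATLAS/CMS numbers):
example = [(2.0, 0.8), (2.5, 1.0), (1.5, 0.9)]
mu, sigma = combine(example)
print(f"combined mu = {mu:.2f} +- {sigma:.2f}, i.e. {mu/sigma:.1f} sigma away from zero")
```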
Should we get excited that the measured tth rate is significantly larger than the Standard Model one? Assuming that the current central value remains, it would mean that the top Yukawa coupling is 40% larger than that predicted by the Standard Model. This is not impossible, but very unlikely in practice. The reason is that the top Yukawa coupling also controls gluon fusion - the main Higgs production channel at the LHC - whose rate is measured to be in perfect agreement with the Standard Model. Therefore, a realistic model that explains the large tth rate would also have to provide negative contributions to the gluon fusion amplitude, so as to cancel the effect of the large top Yukawa coupling. It is possible to engineer such a cancellation in concrete models, but I'm not aware of any construction where this conspiracy arises in a natural way. Most likely, the currently observed excess is a statistical fluctuation (possibly in combination with underestimated theoretical and/or experimental errors), and the central value will drift toward μ=1 as more data is collected.
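The 40% figure is just the square root at work: the tth rate scales as (yt/yt^SM)², and at leading order the top-loop gluon-fusion rate scales the same way, which is where the tension comes from. A one-line check (the μ value below is illustrative):

```python
import math

mu_tth = 2.0                          # illustrative enhancement of the tth rate over the SM
yt_ratio = math.sqrt(mu_tth)          # tth rate scales as (y_t / y_t^SM)^2
print(f"y_t / y_t^SM ~ {yt_ratio:.2f}")                   # ~1.41, i.e. ~40% larger
# If nothing else changes, the top-loop gluon-fusion rate would scale the same way:
print(f"naive gluon-fusion enhancement ~ {mu_tth:.1f}")    # in tension with measurements near 1
```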
Friday, 29 July 2016
After the hangover
The loss of the 750 GeV diphoton resonance is a big blow to the particle physics community. We are currently going through the 5 stages of grief, everyone at their own pace, as can be seen e.g. in this comments section. Nevertheless, it may already be a good moment to revisit the story one last time, so as to understand what went wrong.
In recent years, physics beyond the Standard Model has seen 2 other flops of comparable impact: the faster-than-light neutrinos in OPERA, and the CMB tensor fluctuations in BICEP. Much like the diphoton signal, both of the above triggered a binge of theoretical explanations, followed by a massive hangover. There was one big difference, however: the OPERA and BICEP signals were due to embarrassing errors on the experiments' side. This doesn't seem to be the case for the diphoton bump at the LHC. Some may wonder whether the Standard Model background may have been slightly underestimated, or whether one experiment may have been biased by the result of the other... But, most likely, the 750 GeV bump was just a random fluctuation of the background at this particular energy. Regrettably, the resulting mess cannot be blamed on the experimentalists, who were in fact downplaying the anomaly in their official communications. This time it's the theorists who have some explaining to do.
Why did theorists write 500 papers about a statistical fluctuation? One reason is that it didn't look like one at first sight. Back in December 2015, the local significance of the diphoton bump in the ATLAS run-2 data was 3.9 sigma, which means the probability of such a fluctuation was roughly 1 in 10000. Combining the available run-1 and run-2 diphoton data in ATLAS and CMS, the local significance increased to 4.4 sigma. All in all, it was a very unusual excess, a 1-in-100000 occurrence! Of course, this number should be interpreted with care. The point is that the LHC experiments perform a gazillion different measurements, thus they are bound to observe seemingly unlikely outcomes in a small fraction of them. This can be partly taken into account by calculating the global significance, which is the probability of finding a background fluctuation of the observed size anywhere in the diphoton spectrum. The global significance of the 750 GeV bump quoted by ATLAS was only about two sigma, a fact strongly emphasized by the collaboration. However, that number can be misleading too. One problem with the global significance is that, unlike the local one, it cannot be easily combined in the presence of separate measurements of the same observable. For the diphoton final state we have ATLAS and CMS measurements in run-1 and run-2, thus 4 independent datasets, and their robust concordance was crucial in creating the excitement. Note also that what is really relevant here is the probability of a fluctuation of a given size in any of the LHC measurements, and that is not captured by the global significance. For these reasons, I find it more transparent to work with the local significance, remembering that it should not be interpreted as the probability that the Standard Model is incorrect. By these standards, a 4.4 sigma fluctuation in the combined ATLAS and CMS dataset is still a very significant effect which deserves special attention. What we learned the hard way is that such large fluctuations do happen at the LHC... This lesson will certainly be taken into account next time we encounter a significant anomaly.
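For concreteness, converting a local significance into the probability of a background fluctuation is a one-liner (one-sided convention, using scipy):

```python
from scipy.stats import norm

for z in (3.9, 4.4):
    p = norm.sf(z)  # one-sided p-value for a z-sigma local excess
    print(f"{z} sigma  ->  p ~ {p:.1e}  ~  1 in {1/p:,.0f}")
# 3.9 sigma corresponds to roughly a 1-in-20000 fluctuation and 4.4 sigma to ~1 in 200000,
# in the same ballpark as the rounded numbers quoted in the text.
```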
Another reason why the 750 GeV bump was exciting is that the measurement is rather straightforward. Indeed, at the LHC we often see anomalies in complicated final states or poorly controlled differential distributions, and we treat those with much skepticism. But a resonance in the diphoton spectrum is almost the simplest and cleanest observable that one can imagine (only a dilepton or 4-lepton resonance would be cleaner). We already successfully discovered one particle this way - that's how the Higgs boson first showed up in 2011. Thus, we have good reasons to believe that the collaborations control this measurement very well.
Finally, the diphoton bump was so attractive because theoretical explanations were plausible. It was trivial to write down a model fitting the data, there was no need to stretch or fine-tune the parameters, and it was quite natural that the particle first showed up as a diphoton resonance and not in other final states. This is in stark contrast to other recent anomalies, which typically require a great deal of gymnastics to fit into a consistent picture. The only thing to give you pause was the tension with the LHC run-1 diphoton data, but even that became mild after the Moriond update this year.
So we got a huge signal of a new particle in a clean channel, with plausible theoretical models to explain it... that was really bad luck. My conclusion may not be shared by everyone, but I don't think that the theory community committed major missteps in this case. Given that for 30 years we have been looking for a clue about the fundamental theory beyond the Standard Model, our reaction was not disproportionate once a seemingly reliable clue had arrived. Excitement is an inherent part of physics research. And so is disappointment, apparently.
There remains a question whether we really needed 500 papers... Well, of course not. Still, many of them fill an important gap or make for an interesting read, and I personally learned a lot of exciting physics from them. Actually, I suspect that the fraction of useless papers among the 500 is lower than for regular daily topics. On the more sociological side, these papers exacerbate the problem with our citation culture (mass-grave references), which undermines the citation count as a means to evaluate research impact. But that is a wider issue which I don't know how to address at the moment.
Time to move on. The ICHEP conference is coming next week, with loads of brand new results based on up to 16 inverse femtobarns of 13 TeV LHC data. Although the rumor is that there is no new exciting anomaly at this point, it will be interesting to see how much room is left for new physics. The hope lingers on, at least until the end of this year.
In the comments section you're welcome to lash out at the entire BSM community - we made a wrong call so we deserve it. Please, however, avoid personal attacks (unless on me). Alternatively, you can also give us a hug :)
Saturday, 18 June 2016
Game of Thrones: 750 GeV edition
The 750 GeV diphoton resonance has made a big impact on theoretical particle physics. The number of papers on the topic is already legendary, and they keep coming at a rate of order 10 per week. Given that the Backović model is falsified, there's no longer a theoretical upper limit. Does this mean we are not dealing with the classical ambulance-chasing scenario? The answer may be known in the coming days.
So who's leading this race? What kind of question is that, you may shout, of course it's Strumia! And you would be wrong, independently of the metric. For this contest, I will consider two different metrics: the King Beyond the Wall that counts the number of papers on the topic, and the Iron Throne that counts how many times these papers have been cited.
In the first category, the contest is much more fierce than one might expect: it takes 8 papers to be the leader, and 7 papers may not be enough to even get on the podium! Among the 3 authors with 7 papers, the final classification is decided by trial by combat, a.k.a. the citation count. The result is (drums):
Citations, tja... The social dynamics of our community encourage referencing all previous work on the topic, rather than just the relevant papers, which in this particular case triggered a period of inflation. One day soon citation numbers will mean as much as authorship in experimental particle physics. But for now the size of the h-factor is still an important measure of virility for theorists. If the citation count rather than the number of papers is the main criterion, the iron throne is taken by a Targaryen contender (trumpets):
This explains why the resonance is usually denoted by the letter S.
Update 09.08.2016. Now that the 750 GeV excess is officially dead, one can give the final classification. The race for the iron throne was tight till the end, but there could only be one winner:
As you can see, in this race long-term strategy and persistence proved to be more important than pulling off a few early victories. In the other category there have also been changes in the final stretch: the winner added 3 papers in the period between the unofficial and official announcements of the demise of the 750 GeV resonance. The final standings are:
Congratulations to all the winners. To all the rest, I wish you more luck and persistence in the next edition, provided it takes place.
Friday, 10 June 2016
Black hole dark matter
The idea that dark matter is made of primordial black holes is very old but has always been in the backwater of particle physics. The WIMP or asymmetric dark matter paradigms are preferred for several reasons such as calculability, observational opportunities, and a more direct connection to cherished theories beyond the Standard Model. But in the recent months there has been more interest, triggered in part by the LIGO observations of black hole binary mergers. In the first observed event, the mass of each of the black holes was estimated at around 30 solar masses. While such a system may well be of boring astrophysical origin, it is somewhat unexpected because typical black holes we come across in everyday life are either a bit smaller (around one solar mass) or much larger (supermassive black hole in the galactic center). On the other hand, if the dark matter halo were made of black holes, scattering processes would sometimes create short-lived binary systems. Assuming a significant fraction of dark matter in the universe is made of primordial black holes, this paper estimated that the rate of merger processes is in the right ballpark to explain the LIGO events.
Primordial black holes can form from large density fluctuations in the early universe. On the largest observable scales the universe is incredibly homogeneous, as witnessed by the uniform temperature of the Cosmic Microwave Background over the entire sky. However, on smaller scales the primordial inhomogeneities could be much larger without contradicting observations. From the fundamental point of view, large density fluctuations may be generated by several distinct mechanisms, for example during the final stages of inflation, in the waterfall phase of the hybrid inflation scenario. While it is rather generic that this or a similar process may seed black hole formation in the radiation-dominated era, severe fine-tuning is required to produce the right amount of black holes and to ensure that the resulting universe resembles the one we know.
All in all, it's fair to say that the scenario where all or a significant fraction of dark matter is made of primordial black holes is not completely absurd. Moreover, one typically expects the masses to span a fairly narrow range. Could it be that the LIGO events are the first indirect detection of dark matter made of O(10)-solar-mass black holes? One problem with this scenario is that it is excluded, as can be seen in the plot. Black holes sloshing through the early dense universe accrete the surrounding matter and produce X-rays which could ionize atoms and disrupt the Cosmic Microwave Background. In the 10-100 solar mass range relevant for LIGO, this effect currently gives the strongest constraint on primordial black holes: according to this paper they are allowed to constitute no more than 0.01% of the total dark matter abundance. In astrophysics, however, not only signals but also constraints should be taken with a grain of salt. In this particular case, the word in town is that the derivation contains a numerical error and that the corrected limit is 2 orders of magnitude less severe than what's shown in the plot. Moreover, this limit strongly depends on the model of accretion, and more favorable assumptions may buy another order of magnitude or two. All in all, the possibility of dark matter made of primordial black holes in the 10-100 solar mass range should not be completely discarded yet. Another possibility is that black holes make up only a small fraction of dark matter, but the merger rate is faster, closer to the estimate of this paper.
Assuming this is the true scenario, how will we know? Direct detection of black holes is discouraged, while the usual cosmic ray signals are absent. Instead, in most of the mass range, the best probes of primordial black holes are various lensing observations. For LIGO black holes, progress may be made via observations of fast radio bursts. These are strong radio signals of (probably) extragalactic origin and millisecond duration. The radio signal passing near a O(10)-solar-mass black hole could be strongly lensed, leading to repeated signals detected on Earth with an observable time delay. In the near future we should observe hundreds of such repeated bursts, or obtain new strong constraints on primordial black holes in the interesting mass ballpark. Gravitational wave astronomy may offer another way. When more statistics is accumulated, we will be able to say something about the spatial distributions of the merger events. Primordial black holes should be distributed like dark matter halos, whereas astrophysical black holes should be correlated with luminous galaxies. Also, the typical eccentricity of the astrophysical black hole binaries should be different. With some luck, the primordial black hole dark matter scenario may be vindicated or robustly excluded in the near future.
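To see why O(10)-solar-mass lenses and millisecond radio bursts go together, note that the characteristic gravitational time-delay scale of a point-mass lens is of order 4GM/c³; order-one geometric factors and redshift corrections are ignored in this rough sketch:

```python
# Order-of-magnitude lensing time delay for a point-mass lens: Delta t ~ 4 G M / c^3
# (dimensionless factors depending on the impact parameter and lens redshift are dropped).
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M_sun = 1.989e30     # kg

for m in (1, 10, 30):
    dt = 4 * G * m * M_sun / c**3
    print(f"M = {m:>2} M_sun  ->  delay scale ~ {dt*1e3:.2f} ms")
# ~0.02 ms for 1 M_sun, ~0.6 ms for 30 M_sun: comparable to the millisecond duration
# of fast radio bursts, which is what makes repeated (echoed) bursts a viable probe.
```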
See also these slides for more details.
Friday, 27 May 2016
CMS: Higgs to mu tau is going away
One interesting anomaly in the LHC run-1 was a hint of Higgs boson decays to a muon and a tau lepton. Such a process is forbidden in the Standard Model by the conservation of muon and tau lepton numbers. Neutrino masses violate the individual lepton numbers, but their effect is far too small to affect the Higgs decays in practice. On the other hand, new particles do not have to respect the global symmetries of the Standard Model, and they could induce lepton flavor violating Higgs decays at an observable level. Surprisingly, CMS found a small excess in the Higgs to tau mu search in their 8 TeV data, with the measured branching fraction Br(h→τμ)=(0.84±0.37)%. The analogous measurement in ATLAS is 1 sigma above the background-only hypothesis, Br(h→τμ)=(0.53±0.51)%. Together this merely corresponds to a 2.5 sigma excess, so it's not too exciting in itself. However, taken together with the B-meson anomalies in LHCb, it has raised hopes for lepton flavor violating new physics just around the corner. For this reason, the CMS excess inspired a few dozen theory papers, with Z' bosons, leptoquarks, and additional Higgs doublets pointed out as possible culprits.
Alas, the wind is changing. CMS performed a search for h→τμ in their small stash of 13 TeV data collected in 2015. This time they were hit by a negative background fluctuation, and they found Br(h→τμ)=(-0.76±0.81)%. The accuracy of the new measurement is worse than that in run-1, but nevertheless it lowers the combined significance of the excess below 2 sigma. Statistically speaking, the situation hasn't changed much, but psychologically this is very discouraging. A true signal is expected to grow when more data is added, and when it's the other way around, it's usually a sign that we are dealing with a statistical fluctuation...
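The combined significances quoted above can be reproduced with a naive inverse-variance average of the three measurements; a quick sketch:

```python
import math

def combine(measurements):
    """Naive inverse-variance combination of (value, error) pairs, in %."""
    w = [1.0 / e**2 for _, e in measurements]
    mu = sum(wi * v for (v, _), wi in zip(measurements, w)) / sum(w)
    return mu, 1.0 / math.sqrt(sum(w))

run1 = [(0.84, 0.37), (0.53, 0.51)]            # CMS and ATLAS 8 TeV, Br(h->tau mu) in %
run2 = [(-0.76, 0.81)]                         # CMS 13 TeV (2015 data)

for label, data in (("run-1 only", run1), ("run-1 + run-2", run1 + run2)):
    mu, err = combine(data)
    print(f"{label}: Br = ({mu:.2f} +- {err:.2f})%  ->  {mu/err:.1f} sigma")
# run-1 only: ~2.4-2.5 sigma; adding the negative run-2 point: ~2 sigma or slightly below.
```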
So, if you have a cool model explaining the h→τμ excess be sure to post it on arXiv before more run-2 data is analyzed ;)
Saturday, 14 May 2016
Plot for Weekend: new limits on neutrino masses
This weekend's plot shows the new limits on neutrino masses from the KamLAND-Zen experiment:
KamLAND-Zen is a group of Buddhist monks studying a balloon filled with the xenon isotope Xe136. That isotope has a very long lifetime, of order 10^21 years, and undergoes the lepton-number-conserving double beta decay Xe136 → Ba136 + 2e⁻ + 2ν̄. What the monks hope to observe is the lepton-number-violating neutrinoless double beta decay Xe136 → Ba136 + 2e⁻, which would show up as a peak in the invariant mass distribution of the electron pairs near 2.5 MeV. No such signal has been observed, which sets the limit on the half-life for this decay at T > 1.1×10^26 years.
The neutrinoless decay is predicted to occur if neutrino masses are of the Majorana type, and its rate can be characterized by the effective Majorana mass mββ (y-axis in the plot). That parameter is a function of the masses and mixing angles of the neutrinos. In particular, it depends on the mass of the lightest neutrino (x-axis in the plot), which is currently unknown. Neutrino oscillation experiments have precisely measured the mass² differences between the neutrinos, which are roughly (0.05 eV)² and (0.01 eV)². But oscillations are not sensitive to the absolute mass scale; in particular, the lightest neutrino may well be massless for all we know. If the heaviest neutrino has a small electron flavor component, then we expect the mββ parameter to be below 0.01 eV. This so-called normal hierarchy case is shown as the red region in the plot, and is clearly out of experimental reach at the moment. On the other hand, in the inverted hierarchy scenario (green region in the plot), it is the two heaviest neutrinos that have a significant electron component. In this case, the effective Majorana mass mββ is around 0.05 eV. Finally, there is also the degenerate scenario (funnel region in the plot) where all 3 neutrinos have very similar masses with small splittings; however, this scenario is now strongly disfavored by cosmological limits on the sum of the neutrino masses (e.g. the Planck limit Σmν < 0.16 eV).
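For the record, the effective Majorana mass is mββ = |Σ U_ei² m_i|. A rough sketch of why the normal hierarchy sits below ~0.01 eV while the inverted one lands near 0.02-0.05 eV, using the splittings quoted above and assumed round values for the mixing angles (not a precision fit):

```python
import math, itertools

# Approximate inputs (assumed round values): mass splittings from the text,
# sin^2(theta12) ~ 0.31, sin^2(theta13) ~ 0.02; Majorana phases are scanned over.
dm2_atm, dm2_sol = 0.05**2, 0.01**2
s12sq, s13sq = 0.31, 0.02
c12sq, c13sq = 1 - s12sq, 1 - s13sq

def mbb_range(m1, m2, m3):
    """Range of |sum_i U_ei^2 m_i| with the relative Majorana phases set to 0 or pi."""
    vals = [abs(c12sq*c13sq*m1 + a*s12sq*c13sq*m2 + b*s13sq*m3)
            for a, b in itertools.product((1, -1), repeat=2)]
    return min(vals), max(vals)

# Normal hierarchy, massless lightest neutrino: m1=0, m2~sqrt(dm2_sol), m3~sqrt(dm2_atm)
print("NH:", mbb_range(0.0, math.sqrt(dm2_sol), math.sqrt(dm2_atm)))   # ~0.002-0.004 eV
# Inverted hierarchy, massless lightest neutrino: m3=0, m1~m2~sqrt(dm2_atm)
print("IH:", mbb_range(math.sqrt(dm2_atm), math.sqrt(dm2_atm), 0.0))   # ~0.02-0.05 eV
```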
As can be seen in the plot, the results from KamLAND-Zen, when translated into limits on the effective Majorana mass, almost touch the inverted hierarchy region. The strength of this limit depends on some poorly known nuclear matrix elements (hence the width of the blue band). But even in the least favorable scenario, future, more sensitive experiments should be able to probe that region. Thus, there is hope that within the next few years we may prove the Majorana nature of neutrinos, or at least disfavor the inverted hierarchy scenario.
Monday, 9 May 2016
Off we go
The LHC has been back in action since last weekend, again colliding protons at 13 TeV energy. The weasels' conspiracy was foiled, and the perpetrators were exemplarily electrocuted. PhD students have been deployed around the LHC perimeter to counter any further sabotage attempts (stoats are known to have been in league with weasels in the past). The period that begins now may prove to be the most exciting time for particle physics in this century. Or the most disappointing.
The beam intensity is still a factor of 10 below the nominal one, so the harvest of last weekend is a meager 40 inverse picobarns. But the number of proton bunches in the beam is quickly increasing, and once it reaches O(2000), the data will stream in at a rate of an inverse femtobarn per week or more. For the nearest future, the plan is to have a few inverse femtobarns on tape by mid-July, which would roughly double the current 13 TeV dataset. The first analyses of this chunk of data should be presented around the time of the ICHEP conference in early August. At that point we will know whether the 750 GeV particle is real. Celebrations will begin if the significance of the diphoton peak increases after adding the new data, even if the statistics is not enough to officially announce a discovery. In the best of all worlds, we may also get a hint of a matching 750 GeV peak in another decay channel (ZZ, Z-photon, dilepton, t-tbar, ...) which would help focus our model building. On the other hand, if the significance of the diphoton peak drops in August, there will be a massive hangover...
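As a sanity check of the "inverse femtobarn per week" figure, here is a rough estimate assuming a luminosity near the nominal 10^34 cm^-2 s^-1 and a guessed duty cycle (both numbers are assumptions, not official LHC parameters):

```python
# Rough integrated-luminosity rate (assumed round inputs, not official LHC numbers):
inst_lumi = 1e34          # cm^-2 s^-1, order of the nominal LHC luminosity
duty_cycle = 0.3          # assumed fraction of the time spent in stable beams
seconds_per_week = 7 * 24 * 3600

lumi_cm2 = inst_lumi * duty_cycle * seconds_per_week   # delivered cm^-2 per week
lumi_fb = lumi_cm2 * 1e-39                             # 1 fb^-1 = 1e39 cm^-2
print(f"~{lumi_fb:.1f} fb^-1 per week")                # an inverse femtobarn or two per week
```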
By the end of October, when the 2016 proton collisions are scheduled to end, the LHC hopes to collect some 20 inverse femtobarns of data. This should already give us a rough feeling for the new physics within the reach of the LHC. If a hint of another resonance is seen at that point, one will surely be able to confirm or refute it with the data collected in the following years. If nothing is seen... then you should start telling yourself that condensed matter physics is also sort of fundamental, or that systematic uncertainties in astrophysics are not so bad after all... In any scenario, by December, when the first analyses of the full 2016 dataset are released, we will know infinitely more than we do today.
So fasten your seat belts and get ready for a (hopefully) bumpy ride. Serious rumors should start showing up on blogs and twitter starting from July.
Friday, 1 April 2016
April Fools' 16: Was LIGO a hack?
This post is an April Fools' joke. LIGO's gravitational waves are for real. At least I hope so ;)
We have recently had a few scientific embarrassments, where a big discovery announced with great fanfare was subsequently overturned by new evidence. We still remember OPERA's faster-than-light neutrinos, which turned out to be a loose cable, or BICEP's gravitational waves from inflation, which turned out to be galactic dust emission... It seems that another such embarrassment is coming our way: the recent LIGO discovery of gravitational waves emitted in a black hole merger may share a similar fate. There are reasons to believe that the experiment was hacked, and the signal was injected by a prankster.
From the beginning, one reason to be skeptical about LIGO's discovery was that the signal seemed too beautiful to be true. Indeed, the experimental curve looked as if taken out of a textbook on general relativity, with a clearly visible chirp signal from the inspiral phase, followed by a ringdown signal when the merged black hole relaxes to the Kerr state. The reason may be that it *is* taken out of a textbook. This is at least what is strongly suggested by recent developments.
On EvilZone, a well-known hackers' forum, a hacker using the nickname Madhatter was boasting that it was possible to tamper with scientific instruments, including the LHC, the Fermi satellite, and the LIGO interferometer. When challenged, he or she uploaded a piece of code that allows one to access LIGO computers. Apparently, the hacker took advantage of the same backdoor that allows selected members of the LIGO team to inject a fake signal in order to test the analysis chain. This was brought to the attention of the collaboration members, who decided to test the code. To everyone's bewilderment, the effect was to reproduce exactly the same signal in the LIGO apparatus as the one observed in September last year!
Even though the traces of a hack cannot be discovered, there is little doubt now that foul play was involved. It is not clear what the hacker's motive was: was it just a prank, or maybe an elaborate plan to discredit the scientists? What is even more worrying is that the same thing could happen in other experiments. The rumor is that the ATLAS and CMS collaborations are already checking whether the 750 GeV diphoton resonance signal could also have been injected by a hacker.
Thursday, 17 March 2016
Diphoton update
Today at the Moriond conference ATLAS and CMS updated their diphoton resonance searches. There had been a rumor of an ATLAS analysis with looser cuts on the photons where the significance of the 750 GeV excess grows to a whopping 4.7 sigma. The rumor had it that this analysis would be made public today, so expectations were high. However, the loose-cuts analysis was not approved in time by the collaboration, and the fireworks display was cancelled. In any case, there was some good news today, and some useful info for model builders was provided.
Let's start with ATLAS. For the 13 TeV results, they now have two analyses: one called spin-0 and one called spin-2. Naively, the cuts in the latter are optimized not for a spin-2 resonance but rather for a high-mass resonance (where there's currently no significant excess), so the spin-2 label should not be taken too seriously in this case. Both analyses show a similar excess at 750 GeV: 3.9 and 3.6 sigma, respectively, for a wide resonance. Moreover, ATLAS provides additional information about the diphoton events, such as the angular distribution of the photons, the number of accompanying jets, the amount of missing energy, etc. This may be very useful for theorists entertaining less trivial models, for example ones where the 750 GeV resonance is produced in the decay of a heavier parent particle. Finally, ATLAS shows a re-analysis of the diphoton events collected at the 8 TeV center-of-mass energy of the LHC. The former run-1 analysis was a bit sloppy in the interesting mass range; for example, no limits at all were given for a 750 GeV scalar hypothesis. Now the run-1 data have been cleaned up and analyzed using the same methods as in run-2. Excitingly, there's a 2 sigma excess in the spin-0 analysis in run-1, roughly compatible with what one would expect given the observed run-2 excess! No significant excess is seen in the spin-2 analysis, and the tension between the run-1 and run-2 data is quite severe in this case. Unfortunately, ATLAS does not quote the combined significance or the best-fit cross section for the 750 GeV resonance.
For CMS, the big news is that the amount of 13 TeV data at their disposal has increased by 20%. Using MacGyver skills, they managed to make sense of the chunk of data collected when the CMS magnet was off due to a technical problem. Apparently it was worth it, as new diphoton events have been found in the 750 GeV ballpark. Thanks to that, and to a better calibration, the significance of the diphoton excess in run-2 actually increases up to 2.9 sigma! Furthermore, much like ATLAS, CMS updated their run-1 diphoton analyses and combined them with the run-2 ones. Again, the combination increases the significance of the 750 GeV excess. The combined significance quoted by CMS is 3.4 sigma, similar for the spin-0 and spin-2 analyses. Unlike in ATLAS, the best fit is for a narrow resonance, which is the preferred option from the theoretical point of view.
In summary, the diphoton excess survived the first test. After adding more data and improving the analysis techniques the significance slightly increases rather than decreases, as expected for a real particle. The signal is now a bit more solid: both experiments have a similar amount of diphoton data and they both claim a similar significance of the 750 GeV bump. It may be a good moment to rename the ATLAS diphoton excess as the LHC diphoton excess :) So far, the story of 2012 is repeating itself: the initial hints of a new resonance solidify into a consistent picture. Are we going to have another huge discovery this summer?
Friday, 11 March 2016
750 GeV: the bigger picture
This Thursday the ATLAS and CMS experiments will present updated analyses of the 750 GeV diphoton excess. CMS will extend their data set by the diphoton events collected in the periods when the detector was running without the magnetic field (which is not essential for this particular study), so the amount of available data will slightly increase. We will then enter Phase II of the excitement protocol, hopefully followed this summer by another 4th-of-July-style discovery. To close Phase I, here's a long-promised post about the bigger picture. There are at least 750 distinct models that can accommodate the diphoton signal observed by ATLAS and CMS. However, the larger framework for physics beyond the Standard Model in which these phenomenological models can be embedded is a trickier question. Here is a bunch of speculations.
Whenever a new fluctuation is spotted at the LHC one cannot avoid mentioning supersymmetry. However, the 750 GeV resonance cannot be naturally interpreted in this framework, not least because it cannot be identified as a superpartner of any known particle. The problem is that explaining the observed signal strength requires introducing new particles with large couplings, and the complete theory typically enters a strong-coupling regime at the energy scale of a few TeV. This is not the usual SUSY paradigm, with weakly coupled physics at the TeV scale followed by a desert up to the grand unification scale. Thus, even if the final answer may still turn out to be supersymmetric, it will not be the kind of SUSY we've been expecting all along. Weakly coupled supersymmetric explanations are still possible in somewhat more complicated scenarios with new very light sub-GeV particles and cascade decays, see e.g. this NMSSM model.
Each time you see a diphoton peak you want to cry Higgs, since this is how the 125 GeV Higgs boson was first spotted. Many theories predict an extended Higgs sector with multiple heavy scalar particles, but again such a framework is not the most natural one for interpreting the 750 GeV resonance. There are two main reasons. One is that different Higgs scalars typically mix, but the mixing angle in this case is severely constrained by Higgs precision studies and by the non-observation of 750 GeV diboson resonances in other channels. The other is that, for a 750 GeV Higgs scalar, the branching fraction into the diphoton final state is typically tiny (e.g., ~10^-7 for a Standard-Model-Higgs-like scalar) and complicated model gymnastics is needed to enhance it. The possibility that the 750 GeV resonance is a heavy Higgs boson is by no means excluded, but I would be surprised if this were the case.
It is more tempting to interpret the diphoton resonance as a bound state of new strong interactions with a confinement scale in the TeV range. We know that Quantum Chromodynamics (QCD), the theory which describes the strong interactions of the Standard Model quarks, gives rise to many scalar mesons and higher-spin resonances at low energies. Such behavior is characteristic of a large class of similar theories. Furthermore, if the new strong sector contains mediator particles that carry color and electromagnetic charges, production in gluon fusion and decay into photons is possible for the composite states, see e.g. here. The problem is that, much as for QCD, one would expect not one but an entire battalion of resonances. One needs to understand how the remaining resonances predicted by typical strongly interacting models could have avoided detection so far.
One way this could happen is if the 750 GeV resonance is a scalar that, for symmetry reasons, is much lighter than most of the particles in the strong sector. Here again our QCD may offer us a clue, as it contains pseudo-scalar particles, the so-called pions, which are a factor of 10 lighter than the typical mass scale of other resonances. In QCD, pions are Goldstone bosons of the chiral symmetry spontaneously broken by the vacuum quark condensate. In other words, the smallness of the pion mass is protected by a symmetry, and general theorems worked out in the 60s ensure the quantum stability of such an arrangement. A similar mechanism can easily be implemented in other strongly interacting theories, and it is possible to realize the 750 GeV resonance as a new kind of pion, see e.g. here. Even the mechanism for decaying into photons -- via chiral anomalies -- can be borrowed directly from QCD. However, the symmetry protecting the 750 GeV scalar could also be completely different from the ones we have seen so far. One example is the dilaton, that is, a Goldstone boson of a spontaneously broken conformal symmetry, see e.g. here. This is a theoretically interesting possibility, since approximate conformal symmetry often arises as a feature of strongly interacting theories. All in all, the 750 GeV particle may well be a pion or dilaton harbinger of new strong interactions at a TeV scale. One can then further speculate that the Higgs boson also originates from that sector, but that is a separate story that may or may not be true.
Another larger framework worth mentioning here is that of extra dimensions. In the modern view, theories with a new 4th dimension of space are merely an effective description of the strongly interacting sectors discussed above. For example, the famous Randall-Sundrum model, with the Standard Model living in a section of a 5D AdS5 space, is a weakly coupled dual description of strongly coupled theories with a conformal symmetry and a large-N gauge symmetry. These models thus offer a calculable way to embed the 750 GeV resonance in a strongly interacting theory. For example, the dilaton can be effectively described in the Randall-Sundrum model as the radion - a scalar particle corresponding to fluctuations of the size of the 5th dimension, see e.g. here. Moreover, the Randall-Sundrum framework provides a simple way to realize the 750 GeV particle as a spin-2 resonance. Indeed, the model always contains massive Kaluza-Klein excitations of the graviton, whose couplings to matter can be much stronger than those of the massless graviton. This possibility has been relatively less explored so far, see e.g. here, but that may change next week...
Clearly, it is impossible to say anything conclusive at this point. More data in multiple decay channels is absolutely necessary for a more concrete picture to emerge. For me personally, a confirmation of the 750 GeV excess would be a strong hint for new strong interactions at a few TeV scale. And if this is indeed the case, one may seriously think that our 40-years-long brooding about the hierarchy problem has not been completely misguided...
Thursday, 11 February 2016
LIGO: what's in it for us?
I mean us theoretical particle physicists. With this constraint, the short answer is: not much. Of course, every human being must experience shock and awe when picturing the phenomenon observed by LIGO. Two black holes spiraling into each other and merging in a cataclysmic event which releases energy equivalent to 3 solar masses within a fraction of a second.... Besides, one can only admire the ingenuity that allows us to detect here on Earth a disturbance of the gravitational field created in a galaxy 1.3 billion light years away. In more practical terms, the LIGO announcement starts the era of gravitational wave astronomy and thus opens a new window on the universe. In particular, LIGO's discovery is the first ever observation of a black hole binary, and we should soon learn more about the ubiquity of astrophysical systems containing one or more black holes. Furthermore, it is possible that we will discover completely new objects whose existence we don't even suspect. Still, all of the above is what I fondly call dirty astrophysics on this blog, and it does not touch upon any fundamental issues. What are the prospects for learning something new about those?
In the long run, I think we can be cautiously optimistic. While we haven't learned anything unexpected from today's LIGO announcement, progress in gravitational wave astronomy should eventually teach us something about fundamental physics. First of all, advances in astronomy, inevitably brought by this new experimental technique, will allow us to better measure the basic parameters of the universe. This in turn will provide us with information about aspects of fundamental physics that can affect the entire universe, such as the dark energy. Moreover, by observing phenomena that occur in strong gravitational fields and whose signals propagate over large distances, we can place constraints on modifications of Einstein gravity such as the graviton mass (on the downside, often there is no consistent alternative theory that can be constrained).
Closer to our hearts, one potential source of gravitational waves is a strongly first-order phase transition. Such an event may have occurred as the early universe was cooling down. Below a certain critical temperature a symmetric phase of the high-energy theory may no longer be energetically preferred, and the universe enters a new phase where the symmetry is broken. If the transition is violent (strongly first-order in the physics jargon), bubbles of the new phase emerge, expand, and collide, until they fill the entire visible universe. Such a dramatic event produces gravitational waves with an amplitude that may be observable by future experiments. Two examples of phase transitions we suspect to have occurred are the QCD phase transition around T=100 MeV, and the electroweak phase transition around T=100 GeV. The Standard Model predicts that neither is first-order; however, new physics beyond the Standard Model may change that conclusion. Many examples of new physics that would modify the electroweak phase transition have been proposed, for example models with additional Higgs scalars, or with warped extra dimensions. Moreover, the phase transition could be related to symmetry breaking in a hidden sector that is very weakly or not at all coupled (except via gravity) to ordinary matter. Therefore, by observing or putting limits on phase transitions in the early universe we will obtain complementary information about the fundamental theory at high energies.
Gravitational waves from phase transitions are typically predicted to peak at frequencies much smaller than the ones probed by LIGO (35 to 250 Hz). The next generation of gravitational telescopes will be better equipped to detect such a signal thanks to a much larger arm length (see figure borrowed from here). This concerns especially the eLISA space interferometer, which will probe millihertz frequencies. Even smaller frequencies can be probed by pulsar timing arrays, which search for signals of gravitational waves using stable pulsars as an antenna. The worry is that the interesting signal may be obscured by astrophysical backgrounds, such as (oh horror) gravitational wave emission from white dwarf binaries. Another interesting target for future experiments is to detect gravitational waves from inflation (almost discovered 2 years ago via another method by the BICEP collaboration). However, given the constraints from the CMB observations, the inflation signal may well be too weak even for the future giant space interferometers like DECIGO or BBO.
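To get a feel for why the electroweak and QCD scales land in the eLISA and pulsar-timing frequency ranges, here is a back-of-the-envelope sketch (mine, with rough inputs): the peak frequency is set, up to model-dependent factors, by the Hubble rate at the time of the transition redshifted to today.

```python
# Rough estimate of the present-day frequency scale of gravitational waves from
# a cosmological phase transition at temperature T: the Hubble rate at that time,
# redshifted to today. Realistic spectra peak somewhat higher, by roughly the
# ratio of the transition rate to the Hubble rate. Numbers are ballpark only.

import math

GeV_to_inv_s = 1.519e24   # 1 GeV divided by hbar, in s^-1
M_Planck     = 1.221e19   # GeV
T_today      = 2.35e-13   # GeV (the 2.7 K CMB temperature)

def redshifted_hubble_freq(T, g_star=100.0):
    """Hubble rate at temperature T (radiation domination), redshifted to today, in Hz."""
    H = 1.66 * math.sqrt(g_star) * T**2 / M_Planck            # in GeV
    redshift = (T_today / T) * (3.91 / g_star)**(1.0 / 3.0)   # entropy dilution since then
    return H * GeV_to_inv_s * redshift

print(redshifted_hubble_freq(100.0))          # electroweak scale: ~2e-5 Hz, eLISA territory
print(redshifted_hubble_freq(0.1, g_star=10)) # QCD scale: ~1e-8 Hz, pulsar timing territory
```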
To summarize, the importance of the LIGO discovery for the field of particle physics is mostly the boost it gives to further experimental efforts in this direction. Hopefully, the eLISA project will now take off, and other ideas will emerge. Once gravitational wave experiments become sensitive to sub-Hertz frequencies, they will start probing the parameter space of interesting theories beyond the Standard Model.
Thanks YouTube! It's the first time I see a webcast of a highly anticipated event running smoothly in spite of 100000 viewers. This can be contrasted with Physical Review Letters who struggled to make one damn pdf file accessible ;)
Sunday, 31 January 2016
750 ways to leave your lover
A new paper last week straightens out the story of the diphoton background in ATLAS. Some confusion was created because theorists misinterpreted the procedures described in the ATLAS conference note, which could lead to a different estimate of the significance of the 750 GeV excess. However, once the correct phenomenological and statistical approach is adopted, the significance quoted by ATLAS can be reproduced, up to small differences due to incomplete information available in public documents. Anyway, now that this is all behind us, we can safely continue being excited at least until summer. Today I want to discuss different interpretations of the diphoton bump observed by ATLAS. I will take a purely phenomenological point of view, leaving for next time the question of the bigger picture that the resonance may fit into.
Phenomenologically, the most straightforward interpretation is the so-called everyone's model: a 750 GeV singlet scalar particle produced in gluon fusion and decaying to photons via loops of new vector-like quarks. This simple construction perfectly explains all publicly available data, and can be easily embedded in more sophisticated models. Nevertheless, many more possibilities were pointed out in the 750 papers so far, and here I review a few that I find most interesting.
Spin Zero or More?
For a particle decaying to two photons, there are not that many possibilities: the resonance has to be a boson and, according to young Landau's theorem, it cannot have spin 1. This leaves on the table spin 0, 2, or higher. Spin-2 is an interesting hypothesis, as this kind of excitation is predicted in popular models like the Randall-Sundrum one. Higher-than-two spins are disfavored theoretically. When more data is collected, the spin of the 750 GeV resonance can be tested by looking at the angular distribution of the photons. The rumor is that the data so far somewhat favor spin-2 over spin-0, although the statistics is certainly insufficient for any serious conclusions. Concerning the parity, it is practically impossible to determine it by studying the diphoton final state, and both the scalar and the pseudoscalar option are equally viable at present. Discrimination may be possible in the future, but only if multi-body decay modes of the resonance are discovered. If the true final state is more complicated than two photons (see below), then the 750 GeV resonance may have any spin, including spin-1 and spin-1/2.
Narrow or Wide?
The total width is the inverse of the particle's lifetime (in our funny units). From the experimental point of view, a width larger than the detector's energy resolution will show up as a smearing of the resonance due to the uncertainty principle. Currently, the ATLAS run-2 data prefer a width 10 times larger than the experimental resolution (which is about 5 GeV in this energy ballpark), although the preference is not very strong in the statistical sense. On the other hand, from the theoretical point of view, it is much easier to construct models where the 750 GeV resonance is a narrow particle. Therefore, confirmation of the large width would have profound consequences, as it would significantly narrow down the scope of viable models. The most exciting interpretation would then be that the resonance is a portal to a dark sector containing new light particles very weakly coupled to ordinary matter.
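For orientation, here is the width-to-lifetime conversion with the ballpark numbers above (the ~5 GeV resolution and the roughly 10-times-larger preferred width are quoted values, not official fit results):

```python
# Width <-> lifetime in natural units: Gamma = hbar / tau.
# Illustration with the ballpark numbers quoted in the text.

hbar_GeV_s = 6.582e-25   # GeV * s

for Gamma in (5.0, 45.0):                  # GeV: resolution-sized vs. "wide" hypothesis
    tau = hbar_GeV_s / Gamma               # seconds
    print(f"Gamma = {Gamma:5.1f} GeV  ->  tau = {tau:.2e} s")
# A ~45 GeV width corresponds to a lifetime of order 1e-26 s and to Gamma/M of
# about 6% of the resonance mass -- a lot for a weakly coupled particle.
```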
How many resonances?
One resonance is enough, but a family of resonances tightly packed around 750 GeV may also explain the data. As a bonus, this could explain the seemingly large width without opening new dangerous decay channels. It is quite natural for particles to come in multiplets with similar masses: our pion is an example, where the small mass splitting between π± and π0 arises due to electromagnetic quantum corrections. For Higgs-like multiplets the small splitting may naturally arise after electroweak symmetry breaking, and the familiar 2-Higgs-doublet model offers a simple realization. If the mass splitting of the multiplet is larger than the experimental resolution, this possibility can be tested by precisely measuring the profile of the resonance and searching for a departure from the Breit-Wigner shape. On the other side of the spectrum is the idea that there is no resonance at all at 750 GeV, but rather one at another mass, with the bump at 750 GeV appearing due to some kinematical accident.
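Coming back to the multiplet idea, here is a toy illustration (entirely mine, with made-up masses and widths) of how two narrow peaks 10 GeV apart, smeared by a 5 GeV Gaussian resolution, merge into a single bump that looks wider than a single smeared peak:

```python
# Toy sketch, not an experimental analysis: compare the apparent width of one
# narrow resonance vs. two narrow resonances 10 GeV apart, after detector smearing.

import numpy as np

def breit_wigner(m, M, Gamma):
    return 1.0 / ((m**2 - M**2)**2 + (M * Gamma)**2)

def smeared_lineshape(m, masses, Gamma=1.0, resolution=5.0):
    """Sum of Breit-Wigners, convolved numerically with a Gaussian resolution."""
    grid = np.linspace(700, 800, 2001)
    raw = sum(breit_wigner(grid, M, Gamma) for M in masses)
    kernel = np.exp(-0.5 * ((m[:, None] - grid[None, :]) / resolution) ** 2)
    return (kernel * raw).sum(axis=1)

def fwhm(x, y):
    above = x[y > 0.5 * y.max()]
    return above[-1] - above[0]

m = np.linspace(700, 800, 1001)
print(fwhm(m, smeared_lineshape(m, [750.0])))          # ~12 GeV: resolution-dominated
print(fwhm(m, smeared_lineshape(m, [745.0, 755.0])))   # ~20 GeV: looks like a wide bump
```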
Who made it?
The most plausible production process is definitely gluon-gluon fusion. Production in collisions of light quarks and antiquarks is also theoretically sound; however, it leads to a more acute tension between run-2 and run-1 data. Indeed, even for gluon fusion, the production cross section of a 750 GeV resonance in 13 TeV proton collisions is only 5 times larger than at 8 TeV. Given the larger amount of data collected in run-1, we would expect a similar excess there, contrary to observations. For a resonance produced from u-ubar or d-dbar the analogous ratio is only 2.5 (see the table), leading to much more tension. The ratio climbs back to 5 if the initial state contains the heavier quarks: strange, charm, or bottom (which can also sometimes be found inside a proton); however, I haven't yet seen a neat model that makes use of that. Another possibility is to produce the resonance via photon-photon collisions. This way one could cook up a truly minimal and very predictive model where the resonance couples, among all the Standard Model particles, only to photons. However, in this case the ratio between the 13 and 8 TeV cross sections is very unfavorable, merely a factor of 2, and the run-1 vs run-2 tension comes back with more force. More options open up when associated production (e.g. with t-tbar, or in vector boson fusion) is considered. The problem with these ideas is that, according to what was revealed during the talk last December, there aren't any additional energetic particles in the diphoton events. Similar problems face models where the 750 GeV resonance appears as a decay product of a heavier resonance, although in this case some clever engineering or fine-tuning may help to hide the additional particles from experimentalists' eyes.
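To make the run-1 vs run-2 tension concrete, here is a crude estimate of my own, with round numbers for the luminosities and an assumed ~15 signal events in run-2 (both are illustrative assumptions, not official figures), of how many signal events run-1 should then contain for each production mode:

```python
# Rough consistency check with round numbers (assumptions, not official figures):
# given N signal events in run-2, how many should run-1 have seen for each
# initial state, using the 13-to-8 TeV cross-section gain factors quoted above?

lumi_run2 = 3.2    # fb^-1 at 13 TeV (approximate)
lumi_run1 = 20.0   # fb^-1 at 8 TeV (approximate)
n_run2    = 15     # assumed number of run-2 signal events (illustrative)

gain = {"gg": 5.0, "qqbar (u,d)": 2.5, "photon-photon": 2.0}

for channel, r in gain.items():
    n_run1 = n_run2 * (lumi_run1 / lumi_run2) / r
    print(f"{channel:15s}: expected ~{n_run1:.0f} signal events in run-1")
# The smaller the gain factor, the larger the signal run-1 should have contained,
# and the harder it is to reconcile with the absence of a clear run-1 excess.
```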
Two-body or more?
While a simple two-body decay of the resonance into two photons is a perfectly plausible explanation of all existing data, a number of interesting alternatives have been suggested. For example, the decay could be 3-body, with another soft visible or invisible particle accompanying the two photons. If the masses of all particles involved are chosen appropriately, the diphoton invariant mass spectrum remains sharply peaked. At the same time, a broadening of the diphoton energy due to the 3-body kinematics may explain why the resonance appears wide in ATLAS. Another possibility is a cascade decay into 4 photons. If the intermediate particles are very light, then the pairs of photons from their decays are very collimated and may look like a single photon in the detector.
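For the last scenario, a one-line kinematic estimate (assumptions mine) shows why light intermediate particles give merged photon pairs: a particle of mass m and energy E decaying to two photons produces a pair with opening angle of order 2m/E, and each intermediate particle from a 750 GeV decay carries E ~ 375 GeV.

```python
# Kinematics sketch: typical opening angle ~ 2m/E for a light particle of mass m
# and energy E decaying to two photons (valid for m << E).

E = 375.0                          # GeV, half the resonance mass
for m in (0.1, 0.5, 1.0, 5.0):     # GeV, illustrative intermediate-particle masses
    angle_mrad = 1e3 * 2.0 * m / E
    print(f"m = {m:4.1f} GeV  ->  opening angle ~ {angle_mrad:.1f} mrad")
# For sub-GeV masses the two photons are separated by O(1) mrad or less, comparable
# to or below the calorimeter granularity, so they can pass as one photon candidate.
```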
♬ The problem is all inside your head ♬ and the possibilities are endless. The situation is completely different than during the process of discovering the Higgs boson, where one strongly favored hypothesis was tested against more exotic ideas. Of course, the first and foremost question is whether the excess is really new physics, or just a nasty statistical fluctuation. But if that is confirmed, the next crucial task for experimentalists will be to establish the nature of the resonance and get model builders on the right track. ♬ The answer is easy if you take it logically ♬
All ideas discussed above appeared in recent articles by various authors addressing the 750 GeV excess. If I were to include all references the post would be just one giant hyperlink, so you need to browse the literature yourself to find the original references.
Friday, 22 January 2016
Higgs force awakens
The Higgs boson couples to the particles that constitute matter around us, such as electrons, protons, and neutrons. Its virtual quanta are constantly being exchanged between these particles. In other words, it gives rise to a force - the Higgs force. I'm surprised that this PR-cool aspect is not explored in our outreach efforts. Higgs bosons mediate the Higgs force in the same fashion as gravitons, gluons, photons, W and Z bosons mediate the gravity, strong, electromagnetic, and weak forces. Just like gravity, the Higgs force is always attractive and its strength is proportional, in the first approximation, to the particle's mass. It is a force in the common sense; for example, if we bombarded a detector long enough with a beam of particles interacting only via the Higgs force, they would eventually knock off atoms in the detector.
There is of course a reason why the Higgs force is less discussed: it has never been detected directly. Indeed, in the absence of midi-chlorians it is extremely weak. First, it shares with the weak interactions the feature of being short-ranged: since the mediator is massive, the interaction strength is exponentially suppressed at distances larger than an attometer (10^-18 m), about 0.1% of the diameter of a proton. Moreover, for ordinary matter, the weak force is more important because of the tiny Higgs couplings to light quarks and electrons. For example, for the proton the Higgs force is a thousand times weaker than the weak force, and for the electron it is a hundred thousand times weaker. Finally, there are no known particles interacting only via the Higgs force and gravity (though dark matter in some hypothetical models has this property), so in practice the Higgs force is always a tiny correction to more powerful forces that shape the structure of atoms and nuclei. This is again in contrast to the weak force, which is particularly relevant for neutrinos, which are immune to the strong and electromagnetic forces.
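The attometer figure follows from the usual range formula for a force mediated by a massive particle, r ~ ħc/(mc^2); a quick check of my own arithmetic:

```python
# Range of a force mediated by a massive boson: r ~ hbar*c / (m*c^2).
# Quick check of the numbers in the text.

hbar_c = 0.1973  # GeV * fm

for name, mass_GeV in [("Higgs (125 GeV)", 125.0), ("Z boson (91 GeV)", 91.2)]:
    r_fm = hbar_c / mass_GeV
    print(f"{name:17s}: range ~ {r_fm:.2e} fm = {r_fm*1e-15:.1e} m")
# Both come out around 1e-18 m, roughly a thousandth of the proton's size,
# which is why both the Higgs force and the weak force are so short-ranged.
```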
Nevertheless, this new paper argues that the situation is not hopeless, and that the current experimental sensitivity is good enough to start probing the Higgs force. The authors propose to do it by means of atomic spectroscopy. Frequency measurements of atomic transitions have reached a stunning relative accuracy of order 10^-18. The Higgs force creates a Yukawa-type potential between the nucleus and the orbiting electrons, which leads to a shift of the atomic levels. The effect is tiny; in particular, it is always smaller than the analogous shift due to the weak force. This is a serious problem, because calculations of the leading effects may not be accurate enough to extract the subleading Higgs contribution. Fortunately, there may be tricks to reduce the uncertainties. One is to measure the isotope shift of transition frequencies for several isotope pairs. Theory says that the leading atomic interactions should give rise to a universal linear relation (the so-called King's relation) between isotope shifts for different transitions. The Higgs and weak interactions should lead to a violation of King's relation. Given the many uncertainties plaguing calculations of atomic levels, it may still be difficult to ever claim a detection of the Higgs force. More realistically, one can try to set limits on the Higgs couplings to light fermions that will be better than the current collider limits.
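Here is a minimal sketch, with made-up numbers, of what the logic of such a King-plot linearity test looks like in practice (the real analysis deals with actual isotope-shift data and their uncertainties, which I am not attempting to reproduce):

```python
# Minimal sketch of a King-plot test with synthetic numbers (no real data):
# the isotope shifts of two different transitions, measured for several isotope
# pairs, should lie on a straight line if only the leading atomic physics is at
# work; a new Yukawa-type force would bend that line.

import numpy as np

# Hypothetical (modified) isotope shifts, arbitrary units, for four isotope pairs.
shift_1 = np.array([1.00, 1.21, 1.44, 1.69])
shift_2 = np.array([2.00, 2.42, 2.88, 3.43])   # last point slightly off the line

# Fit shift_2 = a * shift_1 + b and inspect the residuals.
a, b = np.polyfit(shift_1, shift_2, 1)
residuals = shift_2 - (a * shift_1 + b)
print("slope, intercept:", a, b)
print("residuals:", residuals)
# In a real analysis the question is whether residuals like these are consistent
# with zero within the (very small) experimental and theoretical uncertainties.
```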
Atomic spectroscopy is way above my head, so I cannot judge if the proposal is realistic. There are a few practical issues to resolve before the Higgs force is mastered into a lightsaber. However, it is possible that a new front to study the Higgs boson will be opened in the near future. These studies will provide information about the Higgs couplings to light Standard Model fermions, which is complementary to the information obtained from collider searches.
Sunday, 17 January 2016
Gunpowder Plot: foiled
Just a week ago I hailed the new king, and already there was an assassination attempt. A new paper claims that the statistical significance of the 750 GeV diphoton excess is merely 2 sigma local. The story is being widely discussed in the corridors and comment sections because we all like to watch things die... The assassins used this plot:
The Standard Model prediction for the diphoton background at the LHC is difficult to calculate from first principles. Therefore, the ATLAS collaboration assumes a theoretically motivated functional form for this background as a function of the diphoton invariant mass. The ansatz contains a number of free parameters, which are then fitted using the data in the entire analyzed range of invariant masses. This procedure leads to the prediction represented by the dashed line in the plot (but see later). The new paper assumes a slightly more complicated functional form with more free parameters, such that the slope of the background is allowed to change. The authors argue that their more general ansatz provides a better fit to the entire diphoton spectrum, and moreover predicts a larger background for the large invariant masses. As a result, the significance of the 750 GeV excess decreases to an insignificant value of 2 sigma.
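For concreteness, a function of roughly this type is used in such searches (this is my reading of the public documents; the exact ATLAS parametrization and parameter values may differ, and the numbers below are purely illustrative):

```python
# One example of a smoothly falling background ansatz for the diphoton spectrum,
# with x = m_gammagamma / sqrt(s). All parameter values below are illustrative.

import numpy as np

def background_ansatz(m_gg, N, b, a0, a1, sqrt_s=13000.0):
    """Smoothly falling shape; N, b, a0, a1 are fitted to the data."""
    x = m_gg / sqrt_s
    return N * (1.0 - x**(1.0 / 3.0))**b * x**(a0 + a1 * np.log(x))

m = np.array([400.0, 750.0, 1500.0])   # GeV, a few illustrative mass points
print(background_ansatz(m, N=1.0, b=10.0, a0=-4.0, a1=0.0))    # fixed power-law slope
print(background_ansatz(m, N=1.0, b=10.0, a0=-4.0, a1=-0.1))   # slope changes with mass
# Letting a1 float, as the new paper does, allows the slope of the falling
# spectrum to change with mass, which is how it obtains a larger background
# prediction at high invariant masses.
```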
There are several problems with this claim. First, I'm confused why the blue line is described as the ATLAS fit, since it is clearly different from the background curve in the money plot provided by ATLAS (Fig. 1 in ATLAS-CONF-2015-081). The true ATLAS background is above the blue line, and much closer to the black line in the peak region (edit: it seems now that the background curve plotted by ATLAS corresponds to a1=0 and one more free parameter for an overall normalization, while the paper assumes a fixed normalization). Second, I cannot reproduce the significance quoted in the paper. Taking the two ATLAS bins around 750 GeV, I find a 3.2 sigma excess using the true ATLAS background, and 2.6 sigma using the black line (edit: this is because my estimate is too simplistic, and the paper also takes into account the uncertainty on the background curve). Third, the postulated change of slope is difficult to justify theoretically. It would mean there is a new background component kicking in at ~500 GeV, but this does not seem to be the case in this analysis.
Finally, the problem with the black line is that it grossly overshoots the high-mass tail, which is visible even to the naked eye. To be more quantitative, in the range 790-1590 GeV there are 17 diphoton events observed by ATLAS, the true ATLAS background predicts 19 events, and the black line predicts 33 events. Therefore, the background shape proposed in the paper is inconsistent with the tail at the 3 sigma level! While the alternative background choice decreases the significance of the 750 GeV peak, it simply moves (and amplifies) the tension to another place.
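For the curious, the tail statement can be roughly reproduced with a two-line Poisson computation (this ignores the uncertainty on the background shape, so it is only indicative):

```python
# Rough reproduction of the tail argument with the numbers quoted above:
# 17 events observed in 790-1590 GeV vs ~33 predicted by the alternative ansatz.

from scipy.stats import norm, poisson

observed, expected = 17, 33
p_value = poisson.cdf(observed, expected)   # probability of 17 or fewer events
z_score = norm.isf(p_value)                 # convert to one-sided sigmas
print(f"p = {p_value:.4f}  ->  about {z_score:.1f} sigma deficit in the tail")
```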
So, I think the plot is foiled and the claim does not stand scrutiny. The 750 GeV peak may well be just a statistical fluctuation that will go away when more data is collected, but it's unlikely to be a stupid error on the part of ATLAS. The king will live at least until summer.
Saturday, 9 January 2016
Weekend Plot: The king is dead (long live the king)
The new diphoton king has been discussed at length in the blogosphere, but the late diboson king also deserves a word or two. Recall that last summer ATLAS announced a 3 sigma excess in the dijet invariant mass distribution where each jet resembles a fast-moving W or Z boson decaying to a pair of quarks. This excess can be interpreted as a 2 TeV resonance decaying to a pair of W or Z bosons. For example, it could be a heavy cousin of the W boson, W' in short, decaying to a W and a Z boson. Merely a month ago this paper argued that the excess remains statistically significant after combining several different CMS and ATLAS diboson resonance run-1 analyses in the hadronic and leptonic channels of W and Z decay. However, the hammer came down seconds before the diphoton excess was announced: diboson resonance searches based on the LHC 13 TeV collision data do not show anything interesting around 2 TeV. This is a serious problem for any new physics interpretation of the excess since, for this mass scale, the statistical power of the run-2 and run-1 data is comparable. The tension is summarized in this plot:
The green bars show the 1 and 2 sigma best-fit cross sections for the diboson excess. The one on the left takes into account only the hadronic channel in ATLAS, where the excess is most significant; the one on the right is based on the combined run-1 data. The red lines are the limits from run-2 searches in ATLAS and CMS, scaled to 8 TeV cross sections assuming the W' is produced in quark-antiquark collisions. Clearly, the best-fit region for the 8 TeV data is excluded by the new 13 TeV data. I display results for the W' hypothesis; however, conclusions are similar (or more pessimistic) for other hypotheses leading to WW and/or ZZ final states. All in all, the ATLAS diboson excess is not formally buried yet, but at this point a reversal of fortune would be a miracle.
Wednesday, 6 January 2016
Do-or-die year
The year 2016 began as any other year... I mean the hangover situation in particle physics. We have a theory of fundamental interactions - the Standard Model - that we know is certainly not the final theory because it cannot account for dark matter, the matter-antimatter asymmetry, and cosmic inflation. At the same time, the Standard Model perfectly describes any experiment we have performed here on Earth (up to a few outliers that can be shrugged off as statistical fluctuations)... If you're having a déjà vu, maybe it's the Monty Python sketch, or maybe it's because this post begins exactly the same as one written a year ago. Back then, I was optimistically assuming that the 2015 LHC operation would go better than the projections, and that some 15 inverse femtobarns of data would be collected by each experiment. That amount would have clarified our chances for a discovery in the LHC run-2, and determined whether or not we should dust off our No Future t-shirts. Instead, due to the machine's hiccups during the summer, only about 4 fb-1 was delivered to each experiment. On top of that came the magnet and calorimeter problems in the CMS experiment, which managed to collect only 2.4 fb-1 of useful data, against 3.6 fb-1 in ATLAS. With that amount, the discriminating power is improved with respect to run-1 only for particles heavier than ~1 TeV. As a consequence, the boundaries of our knowledge have changed only slightly compared to the status a year ago. At the end of the day, it's this year, and not the previous one, that is going to be decisive for the field.
So the tension is similar to last year at this time; however, the mood is considerably better, see the plot. We have two intriguing hints of new physics that have a non-negligible chance to develop into strong evidence: one is the B-meson anomalies discussed several times on this blog, and the other is the 750 GeV diphoton excess. Especially the latter stirs theorists' imagination, even if some experimentalists deplore the fact (theorists writing papers inspired by experimental data? oh horror...). A significant deviation from the Standard Model seen independently by 2 different collaborations in an experimentally clean channel happens for the first time in my life. In my private poll, the chances for the B-meson anomalies to be new physics are estimated at 1%, while for the diphoton the chances are 10%. This adds up to a whopping 11% chance, the biggest ever, of finding new physics soon. Moreover, if the diphoton excess is really a new particle, we are basically guaranteed to find other phenomena beyond the Standard Model. Indeed, most models accommodating the 750 GeV excess require new colored states with O(1) TeV mass, which are then most naturally embedded in a theory with new strong interactions at a few TeV scale. Not only would that give a boost to future LHC analyses, but it would also motivate building a higher-energy collider, e.g. a 30 TeV collider that could be constructed on a short time scale at CERN.
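(For the record, the arithmetic behind the 11%, under my naive assumption that the two hints are independent:)

```python
# "Adds up" is shorthand for the chance that at least one of the two anomalies
# is real, assuming (naively) they are independent.
p = 1 - (1 - 0.01) * (1 - 0.10)
print(round(p, 3))   # 0.109, i.e. about 11%
```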
Anything may happen this year, for good or for worse. Cross your fingers and fasten your seat belts.