Saturday 12 September 2015

What can we learn from the LHC Higgs combination

Recently, ATLAS and CMS released the first combination of their Higgs results. Of course, one should not expect any big news here: a combination of two datasets that each agree very well with the Standard Model predictions has to agree very well with the Standard Model predictions... Still, it is interesting to ask what the new results change, at the quantitative level, about our constraints on the Higgs boson couplings to matter.

First, the experiments quote the overall signal strength μ, which measures how many Higgs events were detected at the LHC, in all possible production and decay channels, compared to the expectation in the Standard Model. The latter, by definition, corresponds to μ=1. Now, if you had been too impatient to wait for the official combination, you could have made a naive one yourself using the previously published ATLAS (μ=1.18±0.14) and CMS (μ=1.00±0.14) results. Assuming the errors are Gaussian and uncorrelated, one obtains this way the combined μ=1.09±0.10. Instead, the true number is (drum roll) μ = 1.09 ± 0.11.
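By the way, the naive combination is nothing more than an inverse-variance weighted average, so you can redo it in a few lines. A minimal sketch in Python, assuming Gaussian and uncorrelated errors as above:

```python
import math

# previously published stand-alone results quoted above
atlas, cms = (1.18, 0.14), (1.00, 0.14)

# inverse-variance weighted average for uncorrelated Gaussian errors
w_a, w_c = 1.0 / atlas[1]**2, 1.0 / cms[1]**2
mu = (w_a * atlas[0] + w_c * cms[0]) / (w_a + w_c)
err = 1.0 / math.sqrt(w_a + w_c)
print(f"mu = {mu:.2f} +- {err:.2f}")  # -> mu = 1.09 +- 0.10
```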
So, the official and naive numbers are practically the same. This result puts important constraints on certain models of new physics. One important corollary is that the Higgs boson branching fraction into invisible (or any undetected exotic) decays is bounded, Br(h → invisible) ≤ 13% at 95% confidence level, assuming the Higgs production is not affected by new physics.
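For intuition about where a limit of this size comes from, here is a toy estimate (only an illustration, not the collaborations' actual statistical procedure): if production is SM-like, an invisible partial width rescales every visible rate, so μ = 1 − Br(h→invisible), and an upper limit follows from a Gaussian likelihood truncated to the physical region Br ≥ 0:

```python
from scipy.stats import norm

mu_hat, sigma = 1.09, 0.11  # combined signal strength from above

# mu = 1 - Br(h -> inv) for SM-like production, so the likelihood in Br
# is a Gaussian centered at 1 - mu_hat, truncated to the physical Br >= 0
center = 1.0 - mu_hat                         # negative: data mildly prefer Br < 0
phys = norm.sf(0.0, loc=center, scale=sigma)  # likelihood mass in Br >= 0
br95 = norm.isf(0.05 * phys, loc=center, scale=sigma)
print(f"Br(h -> invisible) < {br95:.0%} at 95% CL")  # ~16%
```

The toy lands in the same ballpark; the official 13% comes from the full fit with all channels and correlations included.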

From the fact that the naive and official combinations coincide for the overall signal strength, one should not conclude that the work ATLAS and CMS have done together is useless. As one can see above, the statistical and systematic errors are comparable for that measurement, so a naive combination that ignores correlations is not guaranteed to work. It just so happens in this particular case that the multiple nuisance parameters considered in the analysis pull in essentially random directions. But it could well have been different. Indeed, the more one goes into details, the more relevant the impact of the official combination becomes. For the signal strength measured in particular final states of the Higgs decay, the differences are more pronounced:
One can see that the naive combination somewhat underestimates the errors. Moreover, for the WW final state the central value is shifted by half a sigma (mainly because, in this channel, the individual ATLAS and CMS measurements that go into the combination seem to differ from the previously published ones). The difference is even more clearly visible in 2-dimensional fits, where the Higgs production cross sections via gluon fusion (ggf) and vector boson fusion (vbf) are treated as free parameters. This plot compares the regions preferred at 68% confidence level by the official and naive combinations:
There is a significant shift of the WW and also of the ττ ellipse. All in all, the LHC Higgs combination brings no revolution, but it allows one to obtain more precise and more reliable constraints on some new physics models. And the more detailed the information that is released, the more useful the combined results become.
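As an aside, it is easy to see with the textbook BLUE (best linear unbiased estimator) formula for two correlated measurements why a correlation-blind average cannot be trusted in general. A minimal sketch, where the non-zero correlation coefficient is invented purely for illustration:

```python
import math

def blue2(x1, s1, x2, s2, rho):
    """BLUE combination of two measurements with correlation coefficient rho."""
    denom = s1**2 + s2**2 - 2 * rho * s1 * s2
    w1 = (s2**2 - rho * s1 * s2) / denom  # weight of the first measurement
    mean = w1 * x1 + (1 - w1) * x2
    err = math.sqrt((1 - rho**2) * s1**2 * s2**2 / denom)
    return mean, err

# rho = 0 reproduces the naive average; a common systematic (rho > 0)
# inflates the combined error
for rho in (0.0, 0.3):
    mean, err = blue2(1.18, 0.14, 1.00, 0.14, rho)
    print(f"rho = {rho}: mu = {mean:.2f} +- {err:.2f}")
```

With the made-up ρ=0.3 the central value stays put (the two errors are equal), but the combined uncertainty grows from 0.10 to 0.11, which is why only the experiments, who know their correlated systematics, can do the combination properly.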

9 comments:

Anonymous said...

Why call it "official" instead of "full" or "complete"?

mfb said...

It has been done by ATLAS and CMS, not by someone else. That's the "official" part.
"Full"/"complete"? Well, it does not include all channels ever studied.

Anonymous said...

Jester,
From the table you gave with the branching ratios, it seems that the "combined Atlas and CMS data" find more decays to ZZ, WW, gammagamma, and tautau than a 125.2 GeV Higgs Boson should produce, whereas they find fewer decays into bb.
I read the presentation that you linked to, and it seems like the researchers are using gammagamma and ZZ to determine the mass of the Higgs Boson.
This seems a little strange because the branching ratio to gammagamma is not a strong function of the mass of the Higgs Boson in this range of masses. It would seem to make more sense to use more than two branching ratios to determine the mass. For example, I would have used all data available to determine the mass of the Higgs Boson, and then determined the relative error of each branching ratio after using all data sets to determine the mass.
The analysis could at least have used two branching ratios that go in opposite directions (such as ZZ and bb). Is there anybody out there that is doing it this way? (i.e. using all data available from relevant experiments to fit the mass of the Higgs Boson, and then determining the error in each branching ratio after using all data available to determine the mass?)

My point here is that it appears that the overall error can be significantly reduced by lowering the estimated mass of the Higgs Boson to ~124 GeV. This will increase the bb and tautau branching ratio and lower the ZZ & WW branching ratio. The overall error will decrease, and this also seems to fit better with the Tevatron results in the same presentation.

Can you explain a little about why the analysis was done using gammagamma and ZZ to fit the mass of the Higgs Boson?
Thank you

Anonymous said...

Oops...
I meant to say "the overall error can be reduced by increasing the mass of the Higgs Boson to ~126 GeV" at the end of my comment above.
Increasing the estimated mass would bring 4 of the 5 branching ratios closer to a value of 1. Only the branching ratio to tautau would be made worse.
This shift would also seem to help bring the mumu branching ratio closer to its expected value.

Either way, my question remains: are there analyses out there in which all data is used to constrain the mass of the Higgs Boson?
(i.e. in much the same way that the Planck group used data from Planck+BAO+SDSS+Plancklensing+Ho_prior to estimate the parameters in the lambdaCDM cosmology model or in which the neutrino oscillation groups combine their data to constrain the parameters in the PMNS matrix.)

There seems to be a lot of potential data sets (beyond just LHC) to constrain the mass of the Higgs Boson in the standard model of particle physics.

Jester said...

The Higgs boson mass is not determined by measuring the branching ratios. Instead, it is obtained by looking for a bump in the invariant mass spectrum of its decay products. The Higgs mass resolution in the gamma-gamma and ZZ→4-lepton channels is much better, by at least an order of magnitude, than that in the other channels. For this reason, combining the other channels together with gamma-gamma and ZZ→4-lepton would give practically the same result for the Higgs mass.
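To put rough numbers on the weighting (with hypothetical inputs, not the actual measurements): in an inverse-variance weighted average, a channel with ten times worse mass resolution enters with a hundred times smaller weight, so it barely moves the result:

```python
import math

# hypothetical inputs: a high-resolution channel (125.0 +- 0.4 GeV)
# and one with 10x worse resolution (124.0 +- 4.0 GeV)
channels = [(125.0, 0.4), (124.0, 4.0)]
w = [1.0 / e**2 for _, e in channels]  # inverse-variance weights
mass = sum(wi * m for wi, (m, _) in zip(w, channels)) / sum(w)
err = 1.0 / math.sqrt(sum(w))
print(f"m_h = {mass:.2f} +- {err:.2f} GeV")  # -> 124.99 +- 0.40 GeV
```

The low-resolution channel shifts the combined mass by only about 0.01 GeV here, despite its central value sitting a full GeV away.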

Alex Small said...

Recently, 1506.08614 has been getting some attention. Any comments? It's only a 2.1 sigma discrepancy, but apparently it's consistent with a few other discrepancies.

Jester said...

I'll try to write something about it soon. In short, this one is less exciting from the theoretical point of view because concrete models require a very convoluted choice of parameters in order to fit the anomaly.

Anonymous said...

Call me a klutz, but I couldn't locate where it says BR(h->inv) < 13% in the paper. Can you please point out the relevant page? I'm curious to see how they obtained this limit. It's not obvious to me that the constraint gets better than the individual limits set by ATLAS (23%) and CMS (58%, I think).