Friday 27 May 2016

CMS: Higgs to mu tau is going away

One interesting anomaly in the LHC run-1 was a hint of Higgs boson decays to a muon and a tau lepton. Such a process is forbidden in the Standard Model by the conservation of muon and tau lepton numbers. Neutrino masses violate individual lepton numbers, but their effect is far too small to affect Higgs decays in practice. On the other hand, new particles do not have to respect the global symmetries of the Standard Model, and they could induce lepton flavor violating Higgs decays at an observable level. Surprisingly, CMS found a small excess in the Higgs to tau mu search in their 8 TeV data, with the measured branching fraction Br(h→τμ)=(0.84±0.37)%. The analogous measurement in ATLAS is 1 sigma above the background-only hypothesis, Br(h→τμ)=(0.53±0.51)%. Together these correspond to merely a 2.5 sigma excess, so it's not too exciting in itself. However, coming on top of the B-meson anomalies in LHCb, it raised hopes for lepton flavor violating new physics just around the corner. For this reason, the CMS excess inspired a few dozen theory papers, with Z' bosons, leptoquarks, and additional Higgs doublets pointed out as possible culprits.

Alas, the wind is changing. CMS performed a search for h→τμ in their small stash of 13 TeV data collected in 2015. This time they were hit by a negative background fluctuation and found Br(h→τμ)=(-0.76±0.81)%. The accuracy of the new measurement is worse than that of run-1, but it nevertheless lowers the combined significance of the excess below 2 sigma. Statistically speaking, the situation hasn't changed much, but psychologically it is very discouraging. A true signal is expected to grow as more data is added; when it goes the other way, it's usually a sign that we are dealing with a statistical fluctuation...
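As a rough cross-check of these numbers, here is a naive combination of the quoted results, assuming Gaussian uncertainties and no correlations between the measurements (the experiments' own combinations are of course more careful):

    import numpy as np

    # Quoted Br(h -> tau mu) central values and uncertainties, in %
    measurements = {
        "CMS 8 TeV":   (0.84, 0.37),
        "ATLAS 8 TeV": (0.53, 0.51),
        "CMS 13 TeV":  (-0.76, 0.81),
    }

    def combine(names):
        # Inverse-variance weighted average of uncorrelated Gaussian results
        mu = np.array([measurements[n][0] for n in names])
        sig = np.array([measurements[n][1] for n in names])
        w = 1.0 / sig**2
        mean = np.sum(w * mu) / np.sum(w)
        err = 1.0 / np.sqrt(np.sum(w))
        return mean, err, mean / err  # value, uncertainty, naive significance

    for names in (["CMS 8 TeV", "ATLAS 8 TeV"],
                  ["CMS 8 TeV", "ATLAS 8 TeV", "CMS 13 TeV"]):
        mean, err, z = combine(names)
        print(f"{' + '.join(names)}: Br = ({mean:.2f} +- {err:.2f})%, {z:.1f} sigma")

This naive average gives about 2.4 sigma for run-1 alone and 2.0 sigma once the 13 TeV result is folded in, close to the quoted values; the small differences come from the asymmetric errors and the more careful treatment in the experimental combinations.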

So, if you have a cool model explaining the h→τμ  excess be sure to post it on arXiv before more run-2 data is analyzed ;)

19 comments:

Alex Small said...

Recently the world has taken a keen interest in the nuclear physics of lithium as possible evidence for a new 17-18 MeV boson. Any comments?

http://physics.aps.org/synopsis-for/10.1103/PhysRevLett.116.211303
http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.116.042501
http://www.nature.com/news/has-a-hungarian-physics-lab-found-a-fifth-force-of-nature-1.19957

The two investigations seem to be focused on very different things but both wind up invoking bosons of the same mass as an explanation.

mfb said...

> So, if you have a cool model explaining the h→τμ excess be sure to post it on arXiv before more run-2 data is analyzed ;)

Isn't that true for nearly every excess? They almost all tend to go away.

Unknown said...

Alex Small: I hate how science articles still talk about an "unknown FIFTH force". Like 2012 never happened.

But they said it doesn't conflict with any past experiments, so does that mean they're saying it could actually have been missed by colliders of the past? If so, why?

mfb said...

> Like 2012 never happened.
Or the electroweak unification?

I'm sceptical about the statement about past experiments. It's something that should appear in e+ or e- fixed-target experiments (directly, or via e- → e+ e- e- in matter, respectively).

Luboš Motl said...

"Br(h→τμ)=(-0.76±0.81)%"

Given that a negative branching ratio is mathematically impossible, I surely think that this is one of the situations in which asymmetric error margins would be highly appropriate.

More generally, I believe that your title is a distortion. This new dataset wasn't expected to produce a signal beating the noise, and it didn't. So there's no evidence to back "the signal is real" and no evidence to back "the signal is going away".

mfb said...

> So there's no evidence to back "the signal is real" and no evidence to back "the signal is going away".

The new measurement makes low branching ratios (including 0) more likely. That is some evidence that the "signal" was just a fluctuation. And if we take a Bayesian view, it therefore also increases the probability that future measurements will be consistent with zero.

It would be possible to limit the likelihood function to positive branching fractions, but leaving it unconstrained in the quoted measurement result makes combinations easier.
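For illustration, one Bayesian way to impose positivity after the fact (a minimal sketch, assuming a flat prior on Br >= 0 and the quoted 13 TeV numbers; not how the experiments derive their limits):

    # Truncated-Gaussian posterior: a flat prior on Br >= 0 applied to the
    # unconstrained Gaussian result Br = (-0.76 +- 0.81)% from the post.
    from scipy import stats

    mu, sigma = -0.76, 0.81
    excluded = stats.norm.cdf(0.0, loc=mu, scale=sigma)  # prior-excluded mass below zero

    # 95% credible upper limit on Br, conditioned on Br >= 0
    upper = stats.norm.ppf(excluded + 0.95 * (1.0 - excluded), loc=mu, scale=sigma)
    print(f"95% upper limit: Br < {upper:.2f}%")  # roughly 1.2%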

Anonymous said...

I know you are joking, but the last statement is precisely what has made HEP pheno the epistemological hellhole it has become in recent years.

Anonymous said...

Lubos,

The title is not a distortion. Whether the dataset on its own can produce a signal beating the noise is irrelevant. It adds to the old dataset and improves the overall signal-to-noise ratio nonetheless. The resulting change is the key issue of this article.

I think your time is better spent on issues like what new names the Czech Republic should take on.

Luboš Motl said...

Anonymous, but the change of the statistical significance resulting from this very small extension of the dataset was predicted to be random, with both signs equally likely, whether or not the new physics effect exists. And one sign was picked by the random generator of Nature that decides about collisions. To deduce "lessons" from that is as stupid as deriving physical theories by tossing a coin and translating the heads and tails into bits and bytes and texts.

I didn't understand your sentence about my country and I don't believe that it makes sense to sacrifice more than 5 seconds in attempts to decode the cryptic message.

Rastus Odinga Odinga said...

Don't cry, Lubos, remember you still have 384 and a half *other* 2-sigma "discoveries" on your blog.

Luboš Motl said...

Unlike the creatures of your type, I am interested in physics (an empirical science) so I do care about experimental excesses, but I have never used the term "discovery" for 2-sigma excesses, another reason to be sure that you're just an obnoxious stupid third-world troll.

Jester said...

Boys, please :)
Further comments here should be on topic, while intellectual capabilities of commenters must be discussed outside.

RBS said...

If more data is "irrelevant", then why are we doing little else than waiting for more data? Or is there a threshold where "irrelevant" data becomes good data, short of 6 sigma certainty?

mfb said...

Back of the envelope estimate: 0.81^2/0.37^2 ≈ 4.8, so CMS added something equivalent to ~20% more data. That does not change the situation much, but it is more than tossing a coin.
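In code, with the uncertainties quoted in the post and the usual 1/sqrt(N) scaling of statistical errors:

    # A dataset's statistical weight scales as 1/sigma^2
    sigma_run1, sigma_run2 = 0.37, 0.81  # in %

    extra = (sigma_run1 / sigma_run2) ** 2
    print(f"run-2 adds the equivalent of {extra:.0%} more run-1 data")  # ~21%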

Anonymous said...

If all upward and downward changes in significance were equally likely in a small amount of data, the expected change in significance would be zero. On the other hand, for a large amount of data, the expected change in significance is positive under the signal hypothesis.

We now have a contradiction, because expectation is linear, i.e.

E[change in significance from lots of data] = E[sum of changes from breaking the data into small equal chunks] = sum over chunks of E[change from a single chunk] = (number of chunks) × E[change from a single chunk]

The LHS isn't zero, and thus neither is the expected change in significance from a small amount of data, E[change from a single chunk]. Under the signal hypothesis we expect the significance to grow (albeit perhaps by a small amount) with any amount of data.

RBS said...

Anonymous, technically all chunks aren't equal, though, so sum E(n) = sum E(n-1) + E(1) wouldn't necessarily imply E(n) > E(n-1). But it should be relatively straightforward to show that, under the signal hypothesis, E(n) > E(n-1) is more probable than the opposite.

Anonymous said...

@RBS I agree, it's just a sketch of an argument that does enough to kill the idea that the expected change is zero in small amounts of data (because the significance is a random walk). It shows that for a random walk in one dimension y, if in the long run E(y over many steps) > 0, then E(y in a single step) > 0.

Actually, the simplest fix for the fact that the changes in z-score aren't independent (the change in the number of sigma from new data depends on the current sigma) is to consider changes in chi-squared or log-likelihood, which are additive across independent chunks. So repeat the above for changes in chi-squared. You'll conclude E[change in chi-squared in a chunk] > 0, from which E[change in significance in a chunk] > 0 follows.
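A toy Monte Carlo makes the additive version concrete (all numbers invented: each "chunk" of data is one Gaussian measurement x with known sigma, and mu is a hypothetical signal strength):

    import numpy as np

    rng = np.random.default_rng(42)
    mu, sigma, n_trials = 0.5, 2.0, 200_000  # hypothetical signal, deliberately noisy chunk

    def llr(x):
        # log L(signal mu) - log L(background 0) for one Gaussian measurement
        return (mu * x - 0.5 * mu**2) / sigma**2

    x_sig = rng.normal(mu, sigma, n_trials)   # chunks drawn with a real signal
    x_bkg = rng.normal(0.0, sigma, n_trials)  # chunks drawn with background only

    print(np.mean(llr(x_sig)))  # ~ +mu^2/(2 sigma^2) = +0.031: evidence grows
    print(np.mean(llr(x_bkg)))  # ~ -mu^2/(2 sigma^2) = -0.031: evidence shrinks

Even for a chunk far too small to beat the noise on its own, the expected evidence moves in the signal's direction; only its sign in any single realization is a coin flip.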

RBS said...

Hubble - new results on the rate of acceleration

http://www.bbc.com/news/science-environment-36441219

Anonymous said...

Please don't engage with Lubos Motl:
https://www.facebook.com/ericweinstein/posts/10208132882377229?pnref=story