Today at the Rencontres de Blois conference the CDF collaboration presented an update on the invariant mass of 2 jets produced in association with a W boson. Recall that 2 months ago CDF posted a paper based on 4.3 fb-1 of data claiming that this observable displays an unexpected bump near 150 GeV with a significance of 3.2 sigma. The bump could have been a fluke, an unaccounted-for systematic effect, or surprising new physics. Now the first option is no longer on the table: the same bump is also present in the more recent data with a large statistical significance. With 7.3 fb-1 of Tevatron data, after subtracting the known Standard Model backgrounds other than WW and WZ production, the distribution of the jet pair invariant mass looks like this:

The peak has become more pronounced! CDF quotes a significance of 4.1 sigma (the 4.8 sigma I quoted earlier takes into account only statistical uncertainties; after including systematic uncertainties the significance drops to 4.1 sigma). In a collider experiment, such a huge departure from a Standard Model prediction is happening for the first time in human history :-) I don't have to stress how exciting it is. However, we're not celebrating the demise of the Standard Model yet, not before an independent confirmation from DZero or from the LHC. In any case, this summer is going to be hot.
For possible theoretical explanations of the bump, see my previous badly timed post. In the Blois slides CDF adds one important new piece of information: they say the bump cannot be due to the Standard Model top quark background, contrary to what was suggested in a couple of theory papers. Basically, there is no sign of enhanced b-jet content in the excess events, and in any case the top quark endpoint would show up below 150 GeV due to different jet energy scale corrections for b-jets.
Update: CDF has released more plots and the note describing the update.
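For a back-of-the-envelope feel for why the significance drops once systematics are included, here is a toy counting-experiment estimate in Python. This is only an illustration: the yields and the roughly 1% systematic fraction below are invented so that the output approximately reproduces the quoted 4.8 and 4.1 sigma; CDF's actual numbers come from a full fit to the spectrum, not from this formula.

```python
import math

# Toy counting-experiment estimate (NOT CDF's analysis, which is a full fit):
# a bump of S events on top of B expected background events, with a fractional
# systematic uncertainty on B. All numbers are invented so that the output
# roughly matches the quoted 4.8 sigma (stat only) and 4.1 sigma (total).
S = 263.0          # hypothetical excess events in the bump window
B = 3000.0         # hypothetical expected background in the same window
sys_frac = 0.011   # hypothetical fractional systematic uncertainty on B

z_stat = S / math.sqrt(B)                             # Poisson fluctuations only
z_total = S / math.sqrt(B + (sys_frac * B) ** 2)      # systematic added in quadrature

print(f"significance, statistics only:   {z_stat:.1f} sigma")
print(f"significance, stat + systematic: {z_total:.1f} sigma")
```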
I know the cautious reaction is to assume that this reflects a failure to correctly simulate the Standard Model background, but it's hard not to get a little excited looking at such an unambiguous signal. Thanks for breaking the news (to the amateur audience).
In the last two months theorists who calculate Standard Model backgrounds have been busy evaluating the reliability of the predictions. See e.g.:
ReplyDeletehttp://arxiv.org/abs/1105.4594v1
It is unlikely the excess is due to an incorrectly modelled background.
Sigh. Once again, as I said, we need to understand the color structure of the Z boson (as a quantum Fourier braid) in BSM QCD.
What's a quantum Fourier braid?
Viviana Cavaliere's thesis previously showed the excess appeared only in the muon Mjj channel. Is this still the case?
Ignoramus, this refers to a non-local type of supersymmetry (nothing to do with stringy physics) that performs a Fourier transform on the Bilson-Thompson braid fermions to obtain the bosons. The Z boson appears as a triplet, corresponding to a Weyl diagonal up to the three cube roots of unity. On the preliminary algebraic analysis done so far, this should be a color triplet, although remember that we are no longer referring to the local theory, and therefore not to representation theory.
For a TGD-based interpretation of the 150 GeV bump, see this.
Why was this presented in a talk with the title "Searches for the Higgs Boson"?
Wjj is a background for Higgs searches ;-)
Anon-5, in the previous data set the excess was present in both the electron and muon channels, and it was actually larger in electrons, something like 3 vs. 2 sigma. I don't know what the situation is after the update.
Several CDF friends of mine don't give a sh#t about the analysis behind this peak.
Sure, it's a background for Higgs searches, but certainly its importance goes far beyond that. That's what I don't understand about this particular way of talking about this excess.
Does anyone from D0 or CERN have further comments? I think the people there should have looked into this several months ago.
If the problem is in the Monte Carlo simulations for the W backgrounds, wouldn't this affect a number of other results? What are they?
It's very immature to say the following (in quotes): "In a collider experiment, such a huge departure from a Standard Model prediction is happening for the first time in the human history :-) I don't have to stress how exciting it is. However we're not celebrating the demise of the Standard Model yet, ...."
What's the evidence that this is actually a departure from the Standard Model rather than being a new thing? This could very well be a new particle/resonance, yet it is not clear it's a departure from the Standard Model. It could be that the Standard Model cannot explain this. It could also be that the Standard Model can explain this. It is not a demise of the Standard Model. If it's a new discovery amenable to Standard Model explanations, then it will be included in the Monte Carlo for future studies, say as soon as next month. And it will certainly be included in the upcoming PDG edition. If it is truly a deviation from the Standard Model, then it's a hint of new physics beyond the Standard Model and a reason for celebration, not a demise of the Standard Model. We should be as glad about the discovery of a new particle as we must be about the discovery of physics beyond our current understanding. It will open a whole new vista for theory and experiment (and phenomenologists will get free salary). The word is REJOICE, not demise.
Prize goes to Manmohan for most incoherent comment...
Also a special mention to Kea for the usual arrogant "Sigh", followed by crazy babbling.
Yeah Kea's "Once again, as I said, we need to understand the color structure of the Z boson (as a quantum Fourier braid) in BSM QCD." does sound a lot like something from Alan Sokal's hoax.
Anon-5, the D0 analysis will be out within 10 days. For the LHC ones I guess we need to wait until the summer conferences, unless somebody finds the peak, in which case a rumor will be out sooner.
Anon-4, I don't know much about the background simulations, but people keep thinking about it, see e.g. the reference Walter pasted above.
Given the short time D0 had to look at this, I hope they can show the results with the identical cuts CDF used.
If they were only able to do the standard Higgs search analysis, using a transverse energy cut of 20 GeV for the jets and 4 fb^-1 of data, they will not be able to see the bump, as the QCD background will overwhelm it.
I think we can all agree that a Z' is ruled out by the Jester Exclusion Principle... (http://resonaances.blogspot.com/2011/01/no-bosons-for-americans.html)
Manmohan, if it's a new particle not in the Standard Model, then it is ipso facto a departure from the Standard Model. That's no slight against the Standard Model; it's still extremely accurate **within its domain of applicability**. But even before this (possible) discovery, I don't think anybody really expected that domain to extend to all energies up to the Planck scale.
ReplyDelete"However we're not celebrating the demise of the Standard Model yet..."
Seems like some others are already celebrating it for other reasons. Any comment on that? (if it isn't too boring for you..).
I am no experimenter and don't know the details of this particular channel too well, but I do know a thing or two about data analysis, and looking at the histogram (the real one, not the background-subtracted one featured here) it looks a bit unconvincing. The "bump" at 160 GeV is a slight excess on the falling side of a peak, and it does not seem to be the only one: around 210 GeV the feature repeats. Quite generally, the data sits above the prediction on the right side and below it on the left.
For me this looks most like a slight offset in the energy scale. Of course these people know what they are doing, but the experiment is old and maybe there is some slight miscalibration somewhere that shifts the whole distribution. I'd really love to see the statistical significance when an energy fudge factor is added to the fit model.
Chris, CDF tried an offset in the jet energy scale (by 7%) after the first publication. See for example slide 39 of the talk at CERN; the shift does not affect the significance.
Also note that the bump is fitted with a Gaussian. The normalization of this Gaussian lineshape and the background lineshape are free parameters in the fit. If you put in a model like, for example, technicolor, you would see a much better fit, because the bump no longer has a Gaussian lineshape.
Finally the region below 80 GeV is somewhat sensitive to the background modelling.
The CDF website has much more information about these issues.
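To make the Gaussian-plus-background fit concrete, here is a minimal sketch in Python of the kind of fit being described: a smooth background shape and a Gaussian bump, both with free normalizations, fitted to a binned Mjj spectrum. The exponential background form and every number are invented for illustration; CDF's actual fit uses simulated templates for each background component and treats systematics as nuisance parameters.

```python
import numpy as np
from scipy.optimize import curve_fit

# Minimal sketch: fit a falling "background" plus a Gaussian bump to a binned
# dijet-mass spectrum. Shapes and numbers are invented; CDF's real fit uses
# simulated templates for each background, not this analytic form.
rng = np.random.default_rng(1)
edges = np.arange(40, 300, 8)                     # GeV bins
centers = 0.5 * (edges[:-1] + edges[1:])

def background(m, norm, slope):
    return norm * np.exp(-m / slope)              # toy exponential background

def model(m, norm, slope, n_sig, mass, width):
    gauss = n_sig * np.exp(-0.5 * ((m - mass) / width) ** 2)
    return background(m, norm, slope) + gauss

# Generate pseudo-data from the model with Poisson fluctuations.
truth = (2.0e4, 45.0, 250.0, 150.0, 12.0)
counts = rng.poisson(model(centers, *truth))

popt, pcov = curve_fit(model, centers, counts,
                       p0=(1.5e4, 50.0, 100.0, 145.0, 15.0),
                       sigma=np.sqrt(np.maximum(counts, 1.0)))
print("fitted bump position: %.1f GeV, amplitude: %.0f events/bin"
      % (popt[3], popt[2]))
```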
Re. the paper by Lunghi and Soni: much of the discussion relies on a very optimistic view of the systematic uncertainties in SM computations of hadronic matrix elements. While some lattice QCD collaborations tend to provide bold statements on the precision of their results, most practitioners will agree that there is still quite some way to go --- especially for B-physics amplitudes...
@Walter: what is the point of confirming the backgrounds in this measurement with an NLO calculation, if we know that the jet algorithm used for the analysis is not infrared safe? We know that at NNLO the perturbative calculations give infinities for such an analysis with a cone algorithm.
@CDF/D0: The non-perturbative corrections (which must be large) are not factorizable in this case, and they must have some shape dependence. What is more, they can be process specific. Only factorizable non-perturbative effects can be transported from other measurements.
Such a pity that such beautiful measurements are spoiled by an idiotic choice of jet algorithms and the stubbornness of the collaborations to change to better theory.
If we do not see an analysis with a different jet algorithm, we should not be reassured that non-perturbative corrections are under control in this discovery...
@anonymous: you don't have to go to NNLO. The NLO prediction for W+2 jets cannot use the iterative cone algorithm used in this analysis; the NLO prediction uses a kT algorithm. Note however that data (and showered LO predictions run through a detector simulation) do not depend much on using CDF's algorithm or one of the kT algorithms for jets with transverse energy over 40 GeV.
So, from first principles I fully agree with you. From a more practical point of view it will not make any difference for the predictions in the bump region. But if you think otherwise I would suggest you fire up the Monte Carlos and show that the excess is due to the non-infrared-safe jet algorithm.
@Anon
I thought that all algorithms used for Run II analyses by both D0 and CDF were infrared and collinear safe:
http://arxiv.org/pdf/hep-ex/0005012
Do you have a reference for that problem you are mentioning?
For most Run II analyses, including this one, CDF has continued to use the JetClu cone algorithm with R = 0.4. But, like Walter, I find the suggestion that this cone algorithm leads to a narrow feature at 150 GeV to be unlikely...
Walter,
the fixed energy offset (7%) is a good exercise already. Still, as a totally data-driven type, I would love to see the energy calibration as a fit parameter.
An excellent reference for jet algorithms is:
http://arXiv.org/pdf/0906.1833
Note Figure 5, where it is demonstrated that all algorithms used by the Tevatron in non-QCD-dedicated studies are infrared unsafe. Is this an issue?
Firing up a Monte Carlo is not a way to find out. This is exactly the point: theory (Monte Carlos) is rather useless in the case of an observable where the infrared-safety test is not passed. Which data show that the difference between a safe and an infrared-unsafe algorithm is experimentally small? And under what cuts? The analysis shown requires rather strict restrictions on jets, e.g. by asking for only two jets and vetoing a third jet. I don't know of any reasonable theory or even recipe that can give a trustworthy prediction on jet vetoes for an infrared-unsafe algorithm. By the way, this is a very tricky issue also in the case of infrared-safe algorithms, with Monte Carlos such as Pythia and Herwig estimating jet-veto efficiencies quite differently.
It may well be that we are lucky and infrared unsafety is not an issue. On the other hand, I do not know of any serious published paper which tells us not to worry. The best would be if the Wjj analysis were also done with a kT, anti-kT, Cambridge/Aachen, SISCone, or any other infrared-safe algorithm.
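Since the infrared-safety point may be abstract for some readers, here is a toy illustration in Python of the mechanism being debated: with a naively seeded cone algorithm, adding an arbitrarily soft particle between two hard jets can change the jet count, and hence any observable built from the jets. This caricature is not JetClu (which has seed thresholds, cone iteration and a split/merge procedure); it only illustrates why seeded cone algorithms are criticized as infrared unsafe.

```python
import math

# Caricature of infrared unsafety in seeded cone algorithms: an arbitrarily
# soft extra particle can seed a new cone that causes two hard jets to be
# merged. Toy only; not CDF's JetClu.

R = 0.7  # cone radius in (y, phi)

def delta_r(a, b):
    dphi = abs(a["phi"] - b["phi"])
    dphi = min(dphi, 2 * math.pi - dphi)
    return math.hypot(a["y"] - b["y"], dphi)

def toy_cone_jets(particles):
    # One cone per seed particle; each cone collects everything within R.
    cones = []
    for seed in particles:
        cone = frozenset(i for i, p in enumerate(particles)
                         if delta_r(seed, p) < R)
        cones.append(cone)
    # Naive merging: cones that share a particle are combined into one jet.
    jets = []
    for cone in cones:
        overlapping = [j for j in jets if j & cone]
        merged = cone.union(*overlapping) if overlapping else cone
        jets = [j for j in jets if not (j & cone)] + [merged]
    return jets

hard = [{"pt": 100.0, "y": 0.0, "phi": 0.0},
        {"pt": 100.0, "y": 0.0, "phi": 1.2}]          # separated by 1.2 > R
soft = [{"pt": 1e-6,  "y": 0.0, "phi": 0.6}]          # arbitrarily soft gluon

print("hard only:       ", len(toy_cone_jets(hard)), "jet(s)")
print("hard + soft gluon:", len(toy_cone_jets(hard + soft)), "jet(s)")
# The soft emission changes the number of jets (2 -> 1), so any observable
# built on the jets (e.g. the dijet mass, or a third-jet veto) is not
# infrared safe for this toy algorithm.
```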
@Chris: Yes, it would be interesting to see if there is an energy scale at which the data and the predicted distribution would agree well. This would be rather computer intensive: each jet energy scale choice would require a re-analysis of the data. Previously rejected events (due to the cuts) would now be accepted and vice versa. Also, for large deviations from the standard jet energy scale, other observables in the W+2 jet sample would start to disagree with the background (such as e.g. the transverse energy of the jets).
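As an illustration of why floating the jet energy scale is expensive, here is a short Python sketch (with an entirely invented toy event sample and cuts, nothing like the real CDF selection): changing the scale changes which events pass the jet ET cuts, so each scale hypothesis needs a full re-selection, not just a shift of the histogram.

```python
import numpy as np

# Minimal sketch: floating the jet energy scale (JES) in the fit is expensive
# because every JES hypothesis changes which events pass the selection, not
# just where they land in Mjj. Event contents and cuts are invented.
rng = np.random.default_rng(0)
n_events = 100_000

# Toy events: two jets with transverse energies and a crude dijet-mass proxy.
et1 = rng.exponential(40.0, n_events) + 15.0
et2 = rng.exponential(30.0, n_events) + 15.0
mjj = rng.normal(np.sqrt(4 * et1 * et2), 10.0)

def select_and_histogram(jes):
    """Re-apply the jet ET cuts and rebuild the Mjj histogram for a given JES."""
    e1, e2 = jes * et1, jes * et2
    passed = (e1 > 30.0) & (e2 > 30.0)                # toy version of the jet ET cuts
    hist, _ = np.histogram(jes * mjj[passed], bins=np.arange(40, 300, 8))
    return passed.sum(), hist

for jes in (0.93, 1.00, 1.07):
    n_pass, _ = select_and_histogram(jes)
    print(f"JES = {jes:.2f}: {n_pass} events pass the selection")
# The accepted event count (and hence the histogram) changes with the JES,
# which is why each scale hypothesis requires a full re-analysis rather than
# a simple shift of the fitted distribution.
```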
@anonymous: I am sympathetic to your worries about jet vetoes. However, CDF showed that the bump remains when doing the inclusive W + 2-jet analysis. This is convincing enough for me.
I also agree with you that NLO has to be used with care given the jet algorithm used. This adds some uncertainty to the K-factor, but not to the extent that one could argue for multiplying e.g. the ttbar background by a factor of 2 or more.
I disagree that a Monte Carlo study cannot be used. A LO Monte Carlo + matched shower should show differences between the IR-safe and non-IR-safe jet algorithms, especially when varying the IR-sensitive parameters within e.g. Pythia.
If you google "CDF kt algorithm" (or D0) you'll see both collaborations spent a lot of time understanding kT algorithms. Hiding somewhere in there is the ratio of cone jets to kT jets as a function of the jet ET... it would take some time to find it...
However, if you look at an ATLAS jet of 1 TeV it is clear that any jet algorithm will find the same jet. Non-perturbative effects become less relevant as the jet ET increases. So it is just a question of where these effects become irrelevant given the uncertainty in the measurement...
Hmm... so how much would the significance change with an infrared-safe jet algorithm? Half a sigma, one, two, if the maximum deviations allowed by the kT-to-cone translations were taken?
I believe that a parton shower Monte Carlo will show differences between IR-safe and IR-unsafe algorithms (it must show differences also among infrared-safe algorithms, since they correspond to different experimental observables). I do not believe that these differences teach us anything. To be allowed to use the Sudakov exponentiation formula in parton showers, i.e. the factorization of probabilities for successive emissions, we must first guarantee that we have cancelled infrared divergences (long-range phenomena). This does not happen with IR-unsafe algorithms. One does get an answer, but the answer does not make any sense and breaks factorization in a completely uncontrolled manner. Perhaps the differences are small for a certain tuning of the Monte Carlo (i.e. the virtuality or pT where the shower stops and hadronization starts). But these differences must depend on this scale, which we can choose with great freedom from the theory side. The hadronization model cannot and should not be used to compensate for the infrared unsafety of our observable at the parton level.
What is the problem with corroborating such an important discovery with observables for which we can make solid theory predictions, rather than pretending that we can make solid theory predictions?
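For readers following the Sudakov argument above, the object being referred to is the no-emission probability that parton showers exponentiate; in its generic textbook form (schematic, conventions vary between generators) it reads:

```latex
% Schematic, textbook form of the Sudakov form factor: the probability that
% parton a evolves from scale t down to t_0 with no resolvable emission.
% The shower multiplies such factors together, which is the factorization of
% successive emissions mentioned in the comment above; the argument there is
% that this relies on real and virtual infrared singularities cancelling,
% which an infrared-unsafe observable definition spoils.
\Delta_a(t, t_0) \;=\; \exp\!\left[\, -\sum_{b}\int_{t_0}^{t}\frac{\mathrm{d}t'}{t'}
  \int_{z_{\min}}^{z_{\max}} \mathrm{d}z \;\frac{\alpha_s}{2\pi}\, P_{a\to b}(z) \right]
```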
I do not think you'll find many "Standard Model theorists" ready to declare this a new discovery. More likely, it has its origin in the Standard Model. I think trying to dismiss this due to some perceived experimental issue is not the way to go. Experimenters are well aware of these issues, and elaborate studies are made to make sure they would not affect their conclusions.
Jet algorithms will always have non-perturbative issues. After all, the underlying event (which is correlated with the hard scattering) has to be subtracted, out-of-cone corrections are applied, etc...