- 1) that with just 70 nb-1 of LHC data one can obtain non-trivial constraints on vanilla susy models that in some cases are more stringent than the existing Tevatron constraints,
- 2) and that the susy combat group in ATLAS missed point 1).
There are a lot of jet events at the LHC, but fortunately only a small fraction of them is accompanied by large missing energy. In the 70 nb-1 of data, after requiring 40 GeV of missing pT and with some additional cuts on the jets, one finds only four such dijet events, zero 3-jet events, and one 4-jet event. Thus, even a small number of gluinos would have stood out in this sample. The resulting constraints in the gluino vs. neutralino mass plane are plotted below (the solid black line).
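To get a feel for the numbers, here is a back-of-the-envelope version of the statistics (my own sketch, not the procedure used in the actual analysis): treating the dijet channel as a simple counting experiment with 4 observed events, an assumed zero background, and no efficiencies or systematics, the 95% CL Poisson upper limit on the mean number of signal events comes out around 9. A minimal check, using scipy as the numerical toolbox:

```python
# Back-of-the-envelope sketch (not the ATLAS analysis): classical Poisson upper
# limit on the number of signal events, assuming zero background and ignoring
# selection efficiencies and systematic uncertainties.
from scipy.optimize import brentq
from scipy.stats import poisson

def poisson_upper_limit(n_obs, cl=0.95):
    """Smallest mean s such that P(k <= n_obs | s) = 1 - cl."""
    return brentq(lambda s: poisson.cdf(n_obs, s) - (1.0 - cl),
                  n_obs + 1e-9, n_obs + 50.0)

print(poisson_upper_limit(4))  # ~9.15 signal events excluded at 95% CL (dijet channel)
print(poisson_upper_limit(0))  # ~3.0, the familiar zero-event limit (3-jet channel)
```

Dividing that number by the integrated luminosity (70 nb-1) and by the selection efficiency turns it into an excluded cross section, which is what then gets translated into the gluino vs. neutralino mass plane.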
In the region where the mass difference between the gluino and the neutralino is not too large, the LHC constraints beat those from the Tevatron, even though the latter are based on 100,000 times more luminosity! Obviously, the constraints will get much better soon, as the LHC has already collected almost 10 times more luminosity and is doubling its data sample every week.
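A rough scaling argument (again my own sketch) for why the limits will improve quickly: for an essentially background-free counting search the excluded cross section is
$$ \sigma_{\rm up} \;\simeq\; \frac{s_{\rm up}}{\epsilon\,\mathcal{L}_{\rm int}}, $$
where $s_{\rm up}$ is the Poisson limit on signal counts from the sketch above and $\epsilon$ the selection efficiency. As long as the observed event counts stay small, $s_{\rm up}$ barely moves, so the cross-section reach improves roughly linearly with the integrated luminosity.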
These interesting constraints were not derived in the original experimental note from ATLAS. Paradoxically, many experimentalists are not enthusiastic about the idea of interpreting the results of collider searches in terms of directly observable parameters such as masses and cross sections. Instead, they prefer dealing with abstract parameters of poorly motivated theoretical constructions such as mSUGRA. In mSUGRA one makes a guess about the masses of supersymmetric particles at a scale $10^{14}$ times higher than the scale at which the experiment is performed, and from that input one computes the masses at low energies. The particular mSUGRA assumptions imply a large mass difference between the gluino and the lightest neutralino at the weak scale. In this narrow strip of parameter space the existing Tevatron searches happen to be more sensitive for the time being.
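For the record, the reason mSUGRA squeezes out the compressed region fits in one line: with a unified gaugino mass $m_{1/2}$ at the GUT scale, one-loop running gives approximately
$$ M_i(m_Z) \;\simeq\; \frac{\alpha_i(m_Z)}{\alpha_{\rm GUT}}\, m_{1/2} \quad\Rightarrow\quad M_1 : M_2 : M_3 \;\approx\; 1 : 2 : 6, $$
so the gluino comes out roughly six times heavier than the bino-like lightest neutralino, far from the small-mass-difference region where the LHC limit shines.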
Well, the gluino mass is probably above 400 GeV.
I don't find anything paradoxical about the experimenters' desire to interpret their data in as theorist-friendly a way as possible.
After all, the theorists are the main consumers of their experimental work, so this is just a way to make things consumer-friendly.
When an experiment is relevant for a theory XY, of course they will try to parameterize their results in terms of the parameters of theory XY, because the paper is really written for XY theorists.