The plot above shows the constraints on the gluino and the lightest neutralino masses in the pMSSM. Usually, the most transparent way to present experimental limits on supersymmetry is by using

*simplified models.* This consists of picking two or more particles out of the MSSM zoo and assuming that they are the only ones playing a role in the analyzed process. For example, a popular simplified model has a gluino and a stable neutralino interacting via an effective quark-gluino-antiquark-neutralino coupling. In this model, gluino pairs are produced at the LHC through their couplings to ordinary gluons, and then each promptly decays to 2 quarks and a neutralino via the effective coupling. This shows up in a detector as 4 or more jets plus the missing energy carried off by the neutralinos. Within this simplified model, one can thus interpret the LHC multi-jets + missing energy data as constraints on 2 parameters: the gluino mass and the lightest neutralino mass. One result of this analysis is that, for a massless neutralino, the gluino mass is constrained to be larger than about 1.4 TeV, see the white line in the plot.
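To make the logic concrete, here is a toy counting-experiment sketch of how a simplified-model exclusion works, with all numbers (cross section parametrization, selection efficiency, luminosity, event limit) invented for illustration; this is not the actual ATLAS statistical analysis.

```python
import math

# Toy sketch of a simplified-model exclusion (NOT the real ATLAS machinery).
# Every number below is made up for illustration.

def gluino_xsec_pb(m_gluino_tev):
    """Hypothetical, steeply falling gluino pair-production cross section (pb)."""
    return 10.0 * math.exp(-4.0 * m_gluino_tev)  # toy parametrization

def excluded(sigma_pb, efficiency, lumi_ifb, n95):
    """A mass point is excluded if the expected signal yield
    (cross section x luminosity x selection efficiency) exceeds the
    95% CL upper limit on non-SM events in the signal region."""
    expected_events = sigma_pb * 1000.0 * lumi_ifb * efficiency  # pb -> fb
    return expected_events > n95

# With these invented numbers (10% efficiency, 20/fb, 30-event limit),
# a 1.0 TeV gluino is excluded while a 2.0 TeV one is not.
print(excluded(gluino_xsec_pb(1.0), 0.10, 20.0, 30))  # True
print(excluded(gluino_xsec_pb(2.0), 0.10, 20.0, 30))  # False
```

Because the cross section falls steeply with the gluino mass, the exclusion boundary is largely set by where the expected yield drops below the event limit, which is why such limits are often quoted as a single mass number.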

A non-trivial question is what happens to these limits if one starts to fiddle with the remaining one hundred parameters of the MSSM. ATLAS tackles this question in the framework of the pMSSM, which is a version of the MSSM where all flavor- and CP-violating parameters are set to zero. In the resulting 19-dimensional parameter space, ATLAS picks a large number of points that reproduce the correct Higgs mass and are consistent with various precision measurements. Then they check what fraction of the points with a given m_gluino and m_neutralino survives the constraints from all ATLAS supersymmetry searches so far. Of course, the results will depend on how the parameter space is sampled, but we can nevertheless get a feeling for how robust the limits obtained in simplified models are. It is interesting that the gluino mass limits turn out to be quite robust. From the plot one can see that, for a light neutralino, it is difficult to live with m_gluino < 1.4 TeV, and that there are no surviving points with m_gluino < 1.1 TeV. A similar conclusion does not hold for all simplified models; e.g., the limits on squark masses in simplified models can be very much relaxed by going to the larger parameter space of the pMSSM. Another thing worth noticing is that the blind spot near the m_gluino = m_neutralino diagonal is not really there: it is covered by ATLAS monojet searches.
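The scan-and-survive methodology can be sketched as follows. Everything here is hypothetical: the mock `survives` rule just mirrors the qualitative result quoted above (nothing survives below 1.1 TeV, some spectra escape between 1.1 and 1.4 TeV), and the boolean stand-in compresses the other 18 pMSSM parameters into a single coin flip.

```python
import random

# Toy pMSSM-style scan (hypothetical numbers throughout): sample points,
# apply a mock combined-search constraint, and report the surviving
# fraction in bins of the gluino mass.
random.seed(1)

def survives(m_gluino_tev, spectrum_escapes):
    """Mock rule mirroring the qualitative ATLAS result: nothing survives
    below 1.1 TeV, only some spectra escape between 1.1 and 1.4 TeV,
    and everything survives above 1.4 TeV (in this toy)."""
    if m_gluino_tev < 1.1:
        return False
    if m_gluino_tev < 1.4:
        return spectrum_escapes  # stand-in for the other 18 parameters
    return True

# Each sampled point: a gluino mass plus a stand-in boolean for whether the
# rest of the spectrum lets it evade the searches (20% of the time, say).
points = [(random.uniform(0.5, 2.5), random.random() < 0.2)
          for _ in range(10000)]

for lo in (0.5, 1.0, 1.5, 2.0):
    in_bin = [survives(m, e) for m, e in points if lo <= m < lo + 0.5]
    print(f"m_gluino in [{lo:.1f}, {lo + 0.5:.1f}) TeV: "
          f"{sum(in_bin) / len(in_bin):.0%} of points survive")
```

The real analysis replaces the coin flip with a full spectrum calculation and event simulation per point, which is why the sampling strategy matters for how the surviving fractions should be read.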

The LHC run-2 is going slowly, so we still have some time to play with the run-1 data. See the ATLAS paper for many more plots. New, stronger limits on supersymmetry are not expected before next summer.

## 8 comments:

Hi Jester.

I check Op-Vistars almost every day. Noticed that there were a lot of problems. Even a lightning strike.

Did Run 1 also have these start up problems?

Regards,

Michel

Michel:

Yes and no.

Yes:

* The Run 1 start-up (2008) had a catastrophic incident which delayed the start for much longer than this year's delays.

* Also, the machine protection committee was very cautious during the intensity ramp-up. It was only in 2012 that serious luminosity (growing exponentially: as much data was collected in a day as had been collected in all of the previous year) was recorded on a daily basis.

No:

Run 2 started with some hindering problems in place (though some were only discovered during commissioning):

* The ULO (an "unidentified lying object") sitting at 15R8 (slide 27). This problem seems to have been mitigated by steering around it, but it may crop up again as longer bunch trains are used.

* In 2013, the TDIs were discovered not to have been manufactured to spec (slide 6). This limits the total number of bunches from ~2808 to about 2500. Not a problem yet.

* Scrubbing with "doublet" beams was more challenging than anticipated (slide 31). This means that it has been necessary to scrub for slightly longer with 25 ns beams, and the overall result still isn't as good as hoped.

* Some ~1200 electronics boards installed during LS1 to monitor quench protection equipment have proven not to be as radiation-hard as expected. This has been the source of many beam dumps that otherwise would not have happened. Luckily, they are all being replaced this week during the technical stop.

* There were a few circuits with ground-fault problems that had to be fixed. This removed almost two weeks from the physics budget of the year.

Remember that various equipment (cryo, electronics, communications, access) is aging and will have a higher probability of failing as time goes on.

In short, I think Run2 is going about as well as could be expected. It's likely that data production this year will be a few factors of two below the original expectation.

Cheers,

-Drew

Thanks a lot Drew. Michel, we had very high expectations after the glorious year 2012, when everything was running smoothly and the LHC collected more data than expected. In this respect, the beginning of run-2 is a slight disappointment. But all the problems so far are small hiccups that will not matter in the long run.

Drew and Jester, thank you for this insight. For me it is nothing less than intoxicating that a layman like me (I drive trains for a living) can be informed like this.

Without the hangover!

Michel

The performance is significantly below expectations. A few months ago the predictions were in the range of 10-12/fb; they were then lowered in several steps, and the most recent prediction is 3/fb, a factor of 4 lower.

In terms of physics reach, see this older blog post: http://resonaances.blogspot.de/2015/05/how-long-until-its-interesting.html

The Higgs analyses might be able to make small improvements by combining old and new data, but there won't be anything new. The 13 TeV production cross-section will certainly get some measurement, but with large uncertainties (no point in counting sigmas: we know the Higgs exists, and it won't disappear at higher energy).

SUSY: Some analyses will be able to improve the limits significantly, some won't.

Extra dimensions: even the data collected so far (0.2/fb) is sufficient to improve the limits significantly. Or discover something...

Anyway, the problems seen this year all seem solvable - the 2016 data should easily beat the run 1 data everywhere.

Jester,

What is your take on the 3 TeV di-electron event at CMS? Is this a glimpse of the Z' resonance or is it way too early to hope?

Cheers,

Ervin

Way too early. It takes more than 1 event to make a resonance :)

For reference: https://cds.cern.ch/record/2048626/

ATLAS has 200/pb, and CMS has more than 65/pb as well (I guess they can even use their magnet-off data, as the calorimeters don't care much). If that is not a fluctuation, more events should come soon.
