Monday, 29 June 2015

Sit down and relaxion

New ideas are rare in particle physics these days. Solutions to the naturalness problem of the Higgs mass are true collector's items. For these reasons, the new mechanism addressing the naturalness problem via cosmological relaxation has stirred a lot of interest in the community. There's already an article explaining the idea in popular terms. Below, I will give you a more technical introduction.

In the Standard Model, the W and Z bosons and fermions get their masses via the Brout-Englert-Higgs mechanism. To this end, the Lagrangian contains a scalar field H with a negative mass squared, V = - m^2 |H|^2. We know that the value of the parameter m is around 90 GeV - the Higgs boson mass divided by the square root of 2. In quantum field theory, the mass of a scalar particle is expected to be near the cut-off scale M of the theory, unless there's a symmetry protecting it from quantum corrections. On the other hand, m << M without any reason or symmetry principle constitutes the naturalness problem. Therefore, the dominant paradigm has been that, around the energy scale of 100 GeV, the Standard Model must be replaced by a new theory in which the parameter m is protected from quantum corrections. We know several mechanisms that could potentially protect the Higgs mass: supersymmetry, Higgs compositeness, the Goldstone mechanism, extra-dimensional gauge symmetry, and conformal symmetry. However, according to experimentalists, none seems to be realized at the weak scale. Therefore, we either need to accept that nature is fine-tuned (e.g. susy is just around the corner), or seek solace in religion (e.g. anthropics). Or find a new solution to the naturalness problem: one that is not fine-tuned and is consistent with experimental data.
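As a quick sanity check on that 90 GeV number, here is the standard tree-level relation between the mass parameter m and the physical Higgs boson mass in the potential V = -m^2|H|^2 + λ|H|^4 (a trivial sketch, nothing beyond textbook Standard Model relations):

import math

# Minimizing V = -m^2|H|^2 + lambda|H|^4 gives v^2 = m^2/lambda
# and a physical Higgs boson mass m_h^2 = 2 m^2.
mh = 125.0                      # measured Higgs boson mass, GeV
v  = 246.0                      # Higgs vacuum expectation value, GeV

m   = mh/math.sqrt(2)           # mass parameter in the potential
lam = m**2/v**2                 # quartic coupling

print(f"m = {m:.0f} GeV, lambda = {lam:.2f}")   # m ~ 88 GeV, lambda ~ 0.13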

Relaxation is a genuinely new solution, even if somewhat contrived. It is based on the following ingredients:

  1.  The Higgs mass term in the potential is V = M^2 |H|^2. That is to say,  the magnitude of the mass term is close to the cut-off of the theory, as suggested by the naturalness arguments. 
  2. The Higgs field is coupled to a new scalar field - the relaxion - whose vacuum expectation value is time-dependent in the early universe, effectively changing the Higgs mass squared during its evolution.
  3. When the mass squared turns negative and electroweak symmetry is broken, a back-reaction mechanism should prevent further time evolution of the relaxion, so that the Higgs mass term is frozen at a seemingly unnatural value.

These 3 ingredients can be realized in a toy model where the Standard Model is coupled to the QCD axion. The crucial interactions are, schematically,

V ⊃ (M^2 - g Φ)|H|^2 - g M^2 Φ + Λ^4 cos(Φ/f),

where Φ is the axion, f its decay constant, g a tiny dimensionful coupling, and Λ the height of the axion potential, which depends on the Higgs expectation value v through the light quark masses.
Then the story goes as follows. The axion Φ starts at a small value where the M^2 term dominates and there's no electroweak symmetry breaking. During inflation its value slowly increases. Once gΦ > M^2, electroweak symmetry breaking is triggered and the Higgs field acquires a vacuum expectation value. The crucial point is that the height of the axion potential Λ depends on the light quark masses, which in turn depend on the Higgs expectation value v. As the relaxion evolves, v increases, and the barrier height grows with it, which provides the desired back-reaction. At some point, the slope of the axion potential is neutralized by the rising Λ, and the Higgs expectation value freezes in. The question is now quantitative: is it possible to arrange for the freeze-in to happen at v << M? It turns out the answer is yes, at the cost of choosing strange (though not technically unnatural) theory parameters. In particular, the dimensionful coupling g between the relaxion and the Higgs has to be less than 10^-20 GeV (for a cut-off scale larger than 10 TeV), inflation has to last for at least 10^40 e-folds, and the Hubble scale during inflation has to be smaller than the QCD scale.
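To see the mechanism at work, here is a cartoon numerical version of the story above. The parameter values are made up purely for illustration (they are not the realistic ones quoted in the text), and the barrier height is modeled simply as growing linearly with v; the point is only that a tiny g freezes the Higgs expectation value far below the cut-off:

import numpy as np

# Toy units, not realistic values: chosen only so that the frozen vev << M.
M   = 100.0    # cut-off scale, the "natural" size of the Higgs mass term
g   = 1e-4     # tiny relaxion-Higgs coupling
f   = 1.0      # axion decay constant
lam = 0.25     # Higgs quartic coupling
kap = 1.0      # toy barrier height: Lambda^4(v) = kap * v

def vev(phi):
    """Higgs expectation value for the scanned mass term (M^2 - g*phi)|H|^2."""
    m2 = M**2 - g*phi
    return np.sqrt(-m2/lam) if m2 < 0 else 0.0

# The slow roll stops once the wiggle slope Lambda^4(v)/f can balance
# the linear slope g*M^2 of the relaxion potential.
phis = np.linspace(0.0, 2*M**2/g, 2_000_000)
phi_stop = next(p for p in phis if kap*vev(p)/f >= g*M**2)
print(f"Higgs vev frozen at ~{vev(phi_stop):.2f}, versus the cut-off M = {M:.0f}")

In this cartoon the frozen vev scales linearly with g, which is the sense in which the hierarchy v << M is traded for a tiny (but technically natural) coupling.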

The toy model above ultimately fails: since the QCD axion is frozen at a non-zero value, one effectively generates an order-one CP-violating θ-term in the Standard Model Lagrangian, in conflict with the experimental bound θ < 10^-10. Nevertheless, the same mechanism can be implemented in a realistic model. One possibility is to add a new QCD-like interaction with its own axion playing the relaxion role. In addition, one needs new "quarks" charged under the new strong interaction, whose masses have to be sensitive to the electroweak scale v, thus providing a back-reaction on the axion potential that terminates its evolution. In such a model the quantitative details are a bit different than in the QCD axion toy model. However, the "strangeness" of the parameters persists in any model constructed so far. In particular, the very low scale of inflation required by the relaxation mechanism is worrisome. Could it be that the naturalness problem is just swept into the realm of the poorly understood physics of inflation? The ultimate verdict thus depends on whether a complete and healthy model incorporating both relaxation and inflation can be constructed. TBC, for sure.

Thanks to Brian for a great tutorial. 

Saturday, 13 June 2015

On the LHC diboson excess

The ATLAS diboson resonance search showing a 3.4 sigma excess near 2 TeV has stirred some interest. This is understandable: 3 sigma does not grow on trees, and moreover CMS also reported anomalies in related analyses. Therefore it is worth looking at these searches in a bit more detail in order to gauge how excited we should be.

The ATLAS one is actually a dijet search: it focuses on events with two very energetic jets of hadrons. More often than not, W and Z bosons decay to quarks. When a TeV-scale resonance decays to electroweak bosons, the latter, by energy conservation, have to move with large velocities. As a consequence, the 2 quarks from each W or Z boson decay will be very collimated and will be seen as a single jet in the detector. Therefore, ATLAS looks for dijet events where 1) the mass of each jet is close to that of the W (80±13 GeV) or the Z (91±13 GeV), and 2) the invariant mass of the dijet pair is above 1 TeV. Furthermore, they look into the substructure of the jets, so as to identify the ones that look consistent with W or Z decays. After all this work, most of the events still originate from ordinary QCD production of quarks and gluons, which gives a smooth background falling with the dijet invariant mass. If LHC collisions lead to the production of a new particle that decays to WW, WZ, or ZZ final states, it should show up as a bump on top of the QCD background. What ATLAS observes is this:
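A rough estimate shows why the two quarks end up inside one jet. For a 2 TeV resonance each boson carries about 1 TeV, and the typical opening angle of its decay products is of order 2m/p (kinematics only, no detector effects):

# Why the 2 quarks from a boosted W/Z merge into a single jet.
m_resonance = 2000.0    # GeV, mass of the hypothetical diboson resonance
m_W = 80.4              # GeV

p_boson = m_resonance/2                    # each boson carries roughly half the resonance mass
delta_R = 2*m_W/p_boson                    # typical angular separation of the two quarks
print(f"opening angle ~ {delta_R:.2f}")    # ~0.16, comfortably inside a single fat jet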

There is a bump near 2 TeV, which  could indicate the existence of a particle decaying to WW and/or WZ and/or ZZ. One important thing to be aware of is that this search cannot distinguish well between the above 3  diboson states. The difference between W and Z masses is only 10 GeV, and the jet mass windows used in the search for W and Z  partly overlap. In fact, 20% of the events fall into all 3 diboson categories.   For all we know, the excess could be in just one final state, say WZ, and simply feed into the other two due to the overlapping selection criteria.

Given the number of searches that ATLAS and CMS have performed, 3 sigma fluctuations of the background should happen a few times in the LHC run-1 just by sheer chance. The interest in the ATLAS excess is however amplified by the fact that diboson searches in CMS also show anomalies (albeit smaller) just below 2 TeV. This can be clearly seen in this plot with limits on the Randall-Sundrum graviton excitation, which is one particular model leading to diboson resonances. As W and Z bosons sometimes decay to, respectively, one and two charged leptons, diboson resonances can be searched for not only via dijets but also in final states with one or two leptons. One can see that, in CMS, the ZZ dilepton search (blue line), the WW/ZZ dijet search (green line), and the WW/WZ one-lepton search (red line) all report a small (between 1 and 2 sigma) excess around 1.8 TeV. To make things even more interesting, the CMS search for WH resonances returns 3 events clustering at 1.8 TeV where the standard model background is very small (see Tommaso's post). Could the ATLAS and CMS events be due to the same exotic physics?

Unfortunately, building a model explaining all the diboson data is not easy. Suffice it to say that the ATLAS excess has been out for a week and there isn't yet any serious ambulance-chasing paper on arXiv. One challenge is the event rate. To fit the excess, the resonance should be produced with a cross section of a couple of tens of femtobarns. This requires the new particle to couple quite strongly to quarks or gluons. At the same time, it should remain a narrow resonance decaying dominantly to dibosons. Furthermore, in concrete models, a sizable coupling to electroweak gauge bosons will get you in trouble with electroweak precision tests.
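For a sense of scale, here is the naive arithmetic behind that cross section. The hadronic branching ratios are standard, but the overall selection efficiency below is an illustrative guess rather than the number quoted by ATLAS:

# Naive event count for a 2 TeV diboson resonance in the fully hadronic channel.
xsec_fb     = 20.0      # assumed production cross section, fb
lumi_fb     = 20.3      # ATLAS 8 TeV dataset, fb^-1
br_hadronic = 0.67**2   # both bosons decaying to quarks, BR(W->qq) ~ 0.67
efficiency  = 0.15      # overall selection efficiency -- an illustrative guess

n_signal = xsec_fb*lumi_fb*br_hadronic*efficiency
print(f"expected signal events ~ {n_signal:.0f}")   # a few tens of events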

However, there is an even bigger problem, which can also be seen in the plot above. Although the excesses in CMS occur roughly at the same mass, they are not compatible when it comes to the cross section. The limits from the single-lepton search are not consistent with the new-particle interpretation of the excesses in the dijet and dilepton searches, at least in the context of the Randall-Sundrum graviton model. Moreover, the limits from the CMS one-lepton search are grossly inconsistent with the diboson interpretation of the ATLAS excess! In order to believe that the ATLAS 3 sigma excess is real, one has to move to much more baroque models. One possibility is that the dijets observed by ATLAS do not originate from electroweak bosons, but rather from an exotic particle with a similar mass. Another possibility is that the resonance decays only to a pair of Z bosons and not to W bosons, in which case the CMS limits are weaker; but I'm not sure if there exist consistent models with this property.

My conclusion... This is certainly something to watch in the early run-2: if the excess is real, it should clearly show up in both experiments already this year. However, due to the inconsistencies between different search channels and the theoretical challenges, there's little reason to get excited yet.

Thanks to Chris for digging out the CMS plot.

Saturday, 30 May 2015

Weekend Plot: Higgs mass and SUSY

This weekend's plot shows the region in the stop mass and mixing space of the MSSM that reproduces the measured Higgs boson mass of 125 GeV:



Unlike in the Standard Model, in the minimal supersymmetric extension of the Standard Model (MSSM) the Higgs boson mass is not a free parameter; it can be calculated given all the masses and couplings of the supersymmetric particles. At the lowest order, it is equal to the Z boson mass, 91 GeV (for large enough tanβ). To reconcile the predicted and the observed Higgs mass, one needs to invoke large loop corrections due to supersymmetry breaking. These are dominated by the contribution of the top quark and its 2 scalar partners (stops), which couple most strongly of all particles to the Higgs. As can be seen in the plot above, the stop mass preferred by the Higgs mass measurement is around 10 TeV. With a little bit of conspiracy, if the mixing between the two stops is just right, this can be lowered to about 2 TeV. In any case, this means that, as long as the MSSM is the correct theory, there is little chance to discover the stops at the LHC.
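One can get a feel for these numbers from the textbook one-loop approximation to the MSSM Higgs mass at large tanβ. This is only a rough sketch (the plot itself comes from the full SUSYHD calculation), with the running top mass used to mimic part of the higher-order corrections:

import numpy as np

mZ, v, mt = 91.19, 246.0, 165.0   # GeV; mt is the running top mass

def mh_one_loop(MS, xt):
    """Leading one-loop Higgs mass for degenerate stops of mass MS and mixing xt = Xt/MS."""
    loop = 3*mt**4/(4*np.pi**2*v**2) * (np.log(MS**2/mt**2) + xt**2*(1 - xt**2/12))
    return np.sqrt(mZ**2 + loop)

print(f"{mh_one_loop(10000, 0):.0f} GeV")           # ~126 GeV: 10 TeV stops, no mixing
print(f"{mh_one_loop(2000, np.sqrt(6)):.0f} GeV")   # ~125 GeV: 2 TeV stops, maximal mixing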

This conclusion may be surprising because previous calculations were painting a more optimistic picture. The results above are derived with the new SUSYHD code, which utilizes effective field theory techniques to compute the Higgs mass in the presence of  heavy supersymmetric particles. Other frequently used codes, such as FeynHiggs or Suspect, obtain a significantly larger Higgs mass for the same supersymmetric spectrum, especially near the maximal mixing point. The difference can be clearly seen in the plot to the right (called the boobs plot by some experts). Although there is a  debate about the size of the error as estimated by SUSYHD, other effective theory calculations report the same central values.

Thursday, 21 May 2015

How long until it's interesting?

Last night, for the first time, the LHC collided particles at the center-of-mass energy of 13 TeV. Routine collisions should follow early in June. The plan is to collect 5-10 inverse femtobarns (fb-1) of data before winter comes, adding to the 25 fb-1 from Run-1. It's high time to dust off your MadGraph and tool up for what may be the most exciting time in particle physics in this century. But when exactly should we start getting excited? When should we start friending LHC experimentalists on facebook? When is the time to look over their shoulders for a glimpse of gluinos popping out of the detectors? One simple way to estimate the answer is to calculate the luminosity at which the number of particles produced at 13 TeV exceeds the number produced during the whole Run-1. This depends on the ratio of the production cross sections at 13 and 8 TeV, which is of course strongly dependent on the particle's mass and production mechanism. Moreover, the LHC discovery potential will also depend on how the background processes change, and on a host of other experimental issues. Nevertheless, let us forget for a moment about the fine print and calculate the ratio of 13 and 8 TeV cross sections for a few particles popular among the general public. This will give us a rough estimate of the threshold luminosity at which things should get interesting.
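The rule of thumb behind the numbers in the list below is simply L_13 ≈ L_Run-1/(σ_13/σ_8). A quick sketch, taking the Run-1 dataset as ~25 fb-1, which roughly reproduces the quoted thresholds:

# Threshold luminosity at which the 13 TeV event count overtakes Run-1,
# using the cross-section ratios quoted in the list below.
run1_lumi = 25.0   # fb^-1, approximate size of the Run-1 dataset
ratios = {"Higgs boson": 2.3, "tth": 4, "300 GeV Higgs partner": 2.7,
          "800 GeV stops": 10, "3 TeV Z' boson": 18, "1.4 TeV gluino": 30}
for name, r in ratios.items():
    print(f"{name}: threshold luminosity ~ {run1_lumi/r:.1f} fb^-1")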

  • Higgs boson: Ratio≈2.3; Luminosity≈10 fb-1.
    Higgs physics will not be terribly exciting this year, with only a modest improvement of the couplings measurements expected. 
  • tth: Ratio≈4; Luminosity≈6 fb-1.
    Nevertheless, for certain processes involving the Higgs boson the improvement may be a bit faster. In particular, the theoretically very important process of Higgs production in association with top quarks (tth) was on the verge of being detected in Run-1. If we're lucky, this year's data may tip the scale and provide evidence for a non-zero top Yukawa coupling. 
  • 300 GeV Higgs partner: Ratio≈2.7; Luminosity≈9 fb-1.
    Not much hope for new scalars in the Higgs family this year.  
  • 800 GeV stops: Ratio≈10; Luminosity≈2 fb-1.
    800 GeV is close to the current lower limit on the mass of a scalar top partner decaying to a top quark and a massless neutralino. In this case, one should remember that backgrounds also increase at 13 TeV, so the progress will be a bit slower than what the above number suggests. Nevertheless,  this year we will certainly explore new parameter space and make the naturalness problem even more severe. Similar conclusions hold for a fermionic top partner. 
  • 3 TeV Z' boson: Ratio≈18; Luminosity≈1.2 fb-1.
    Getting interesting! Limits on Z' bosons decaying to leptons will be improved very soon; moreover, in this case background is not an issue.  
  • 1.4 TeV gluino: Ratio≈30; Luminosity≈0.7 fb-1.
    If all goes well, better limits on gluinos can be delivered by the end of the summer! 

In summary, the progress will be very fast for new heavy particles. In particular, for gluon-initiated production of TeV-scale particles, already the first inverse femtobarn may bring us into new territory. For lighter particles the progress will be slower, especially when backgrounds are difficult. On the other hand, precision physics, such as the Higgs coupling measurements, is unlikely to be in the spotlight this year.

Friday, 8 May 2015

Weekend plot: minimum BS conjecture

This weekend's plot completes my last week's post:

It shows the phase diagram for models of natural electroweak symmetry breaking. These models can be characterized by 2 quantum numbers:

  • B [Baroqueness], describing how complicated the model is relative to the standard model;
  • S [Strangeness], describing the fine-tuning needed to achieve electroweak symmetry breaking with the observed Higgs boson mass. 

To allow for a fair comparison, in all models the cut-off scale is fixed to Λ=10 TeV. The standard model (SM) has, by definition,  B=1, while S≈(Λ/mZ)^2≈10^4.  The principle of naturalness postulates that S should be much smaller, S ≲ 10.  This requires introducing new hypothetical particles and interactions, therefore inevitably increasing B.

The most popular approach to reducing S is by introducing supersymmetry. The minimal supersymmetric standard model (MSSM) does not make the fine-tuning better than 10^3 in the bulk of its parameter space. To improve on that, one needs to introduce large A-terms (aMSSM), or R-parity breaking interactions (RPV), or an additional scalar (NMSSM). Another way to decrease S is realized in models where the Higgs arises as a composite Goldstone boson of new strong interactions. Unfortunately, in all of those models, S cannot be smaller than 10^2 due to phenomenological constraints from colliders. To suppress S even further, one has to resort to so-called neutral naturalness, where the new particles beyond the standard model are not charged under the SU(3) color group. The twin Higgs - the simplest model of neutral naturalness - can achieve S ≈ 10 at the cost of introducing a whole parallel mirror world.

The parametrization proposed here leads to a striking observation. While one can increase B indefinitely (many examples have been proposed in the literature), for a given S there seems to be a minimum value of B below which no models exist. In fact, the conjecture is that the product B*S is bounded from below:
BS ≳ 10^4. 
One robust prediction of the minimum BS conjecture is the existence of a very complicated (B=10^4) yet to be discovered model with no fine-tuning at all.  The take-home message is that one should always try to minimize BS, even if for fundamental reasons it cannot be avoided completely ;)

Wednesday, 6 May 2015

Naturalness' last bunker

Last week Symmetry Breaking ran an article entitled "Natural SUSY's last stand". That title is a bit misleading, as it makes you think of General Custer on the eve of the Battle of the Little Bighorn, whereas natural supersymmetry has long been dead, its body torn apart by vultures. Nevertheless, it is interesting to ask a more general question: are there any natural theories that survived? And if yes, what can we learn about them from the LHC run-2?

For over 30 years naturalness has been the guiding principle in theoretical particle physics. The standard model by itself has no naturalness problem: it contains 19 free parameters that are simply not calculable and have to be taken from experiment. The problem arises because we believe the standard model is eventually embedded in a more fundamental theory where all these parameters, including the Higgs boson mass, are calculable. Once that is done, the calculated Higgs mass will typically be proportional to the mass of the heaviest state in that theory, as a result of quantum corrections. The exception to this rule is when the fundamental theory possesses a symmetry forbidding the Higgs mass, in which case the mass will be proportional to the scale where the symmetry becomes manifest. Given that the Higgs mass is 125 GeV, the concept of naturalness leads to the following prediction: 1) new particles beyond the standard model should appear around the mass scale of 100-300 GeV, and 2) the new theory with the new particles should have a protection mechanism for the Higgs mass built in.
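The 100-300 GeV ballpark follows from the usual top-loop estimate of the quantum correction to the Higgs mass parameter. A rough version of that arithmetic, with the tuning defined simply as the ratio of the correction to the observed mass squared (conventions differ by factors of 2):

import numpy as np

mh, v, mt = 125.0, 246.0, 173.0    # GeV
yt = np.sqrt(2)*mt/v               # top Yukawa coupling

def tuning(m_partner, cutoff=10000.0):
    """Top-partner correction to the Higgs mass squared, divided by the observed m_h^2."""
    dmh2 = 3*yt**2/(8*np.pi**2) * m_partner**2 * np.log(cutoff**2/m_partner**2)
    return dmh2/mh**2

print(f"{tuning(300):.1f}")    # O(1): a 300 GeV top partner is natural
print(f"{tuning(1000):.0f}")   # O(10): a TeV-scale partner already means tuning at the 10% level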

There are two main realizations of this idea. In supersymmetry, the protection is provided by opposite-spin partners of the known particles. In particular, the top quark is accompanied by stop quarks, which are spin-0 scalars but otherwise have the same color and electric charge as the top quark. Another protection mechanism can be provided by a spontaneously broken global symmetry, usually realized in the context of new strong interactions from which the Higgs arises as a composite particle. In that case, the protection is provided by same-spin partners; for example, the top quark has a fermionic partner with the same quantum numbers but a different mass.

Both of these ideas are theoretically very attractive but are difficult to realize in practice. First of all, it is hard to understand how these 100 new partner particles could be hiding around the corner without leaving any trace in numerous precision experiments. But even if we were willing to believe in a universal conspiracy, the LHC run-1 was the final nail in the coffin. The point is that both of these scenarios make a very specific prediction: the existence of new particles with color charges around the weak scale. As the LHC is basically a quark and gluon collider, it can produce colored particles in large quantities. For example, for a 1 TeV gluino (the supersymmetric partner of the gluon) some 1000 pairs would have already been produced at the LHC. Thanks to the large production rate, the limits on colored partners are already quite stringent. For example, the LHC limits on the masses of gluinos and of massive spin-1 gluon resonances extend well above 1 TeV, while for scalar and fermionic top partners the limits are not far below 1 TeV. This means that a conspiracy theory is not enough: in supersymmetry and composite Higgs one also has to accept a certain degree of fine-tuning, which means we don't even solve the problem that is the very motivation for these theories.

The reasoning above suggests a possible way out. What if naturalness could be realized without colored partners: without gluinos, stops, or heavy tops? The conspiracy problem would not go away, but at least we could avoid the stringent limits from the LHC. It turns out that theories with such a property do exist. They linger away from the mainstream, but recently they have been gaining popularity under the name of neutral naturalness. The reason for that is obvious: such theories may offer a nuclear bunker that will allow naturalness to survive beyond the LHC run-2.

The best known realization of neutral naturalness is the twin Higgs model. It assumes the existence of a mirror world, with mirror gluons, mirror top quarks, a mirror Higgs boson, etc., related to the standard model by an approximate parity symmetry. The parity gives rise to an accidental global symmetry that could protect the Higgs boson mass. At the technical level, the protection mechanism is similar to that in composite Higgs models, where standard model particles have partners with the same spins. The crucial difference, however, is that the mirror top quarks and mirror gluons are charged under the mirror color group, not the standard model color. As we don't have a mirror proton collider yet, the mirror partners are not produced in large quantities at the LHC. Therefore, they could well be as light as our top quark without violating any experimental bounds, and in agreement with the requirements of naturalness.


A robust prediction of twin-Higgs-like models is that the Higgs boson couplings to matter deviate from the standard model predictions, as a consequence of mixing with the mirror Higgs. The size of this deviation is of the same order as the fine-tuning in the theory; for example, order 10% deviations are expected when the fine-tuning is 1 in 10. This is perhaps the best motivation for precision Higgs studies: measuring the Higgs couplings with an accuracy better than 10% may invalidate or boost the idea. However, neutral naturalness also points us to experimental signals that are often very different from those in the popular models. For example, the mirror color interactions are expected to behave at low energies similarly to our QCD: there should be mirror mesons, baryons, and glueballs. By construction, the Higgs boson must couple to the mirror world, and therefore it offers a portal via which the mirror hadronic junk can be produced and decay, which may lead to truly exotic signatures such as displaced jets. This underlines the importance of searching for exotic Higgs boson decays - very few such studies have been carried out by the LHC experiments so far. Finally, as has been speculated for a long time, dark matter may have something to do with the mirror world. Neutral naturalness provides a reason for the existence of the mirror world and an approximate parity symmetry relating it to the real world. It may be our best shot at understanding why the amounts of ordinary and dark matter in the Universe are equal up to a factor of 5 - something that arises as a complete accident in the usual WIMP dark matter scenario.

There's no doubt that neutral naturalness is a desperate attempt to save natural electroweak symmetry breaking from the reality check, or at least to postpone the inevitable. Nevertheless, the existence of a mirror world is certainly a logical possibility. The recent resurgence of this scenario has led to identifying new interesting models, and new ways to search for them in experiment. The persistence of the naturalness principle may thus be turned into a positive force, as it may motivate better searches for hidden particles. It is possible that the LHC data hold the answer to the naturalness puzzle, but we will have to look deeper to extract it.

Sunday, 26 April 2015

Weekend plot: dark photon update

Here is a late weekend plot with new limits on the dark photon parameter space:

The dark photon is a hypothetical massive spin-1 boson mixing with the ordinary photon. The minimal model is fully characterized by just 2 parameters: the mass mA' and the mixing angle ε. This scenario is probed by several different experiments using completely different techniques. It is interesting to observe how quickly the experimental constraints have been improving in recent years. The latest update appeared a month ago thanks to the NA48 collaboration. NA48/2 was an experiment at CERN a decade ago devoted to studying CP violation in kaons. Kaons can decay to neutral pions, and the latter can be recycled into a nice probe of dark photons. Most often, the π0 decays to two photons. If the dark photon is lighter than 135 MeV, one of the photons can mix into an on-shell dark photon, which in turn can decay into an electron and a positron. Therefore, NA48 analyzed the π0 → γ e+ e- decays in their dataset. Such pion decays occur also in the Standard Model, with an off-shell photon instead of a dark photon in the intermediate state. However, the presence of the dark photon would produce a peak in the invariant mass spectrum of the e+ e- pair on top of the smooth Standard Model background. Failure to see a significant peak allows one to set limits on the dark photon parameter space; see the dripping-blood region in the plot.
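For orientation, the rate NA48 is probing scales with the mixing angle squared and a phase-space factor; the standard approximate formula (neglecting the pion transition form factor) is BR(π0 → γ A') ≈ 2ε² (1 - mA'²/mπ²)³ × BR(π0 → γγ). A quick numerical sketch:

# Rough branching ratio for pi0 -> gamma A' relative to pi0 -> gamma gamma,
# neglecting the pion transition form factor.
br_gg = 0.988    # BR(pi0 -> gamma gamma)
m_pi0 = 0.135    # GeV

def br_pi0_to_gamma_Aprime(eps, mA):
    return 2 * eps**2 * (1 - (mA/m_pi0)**2)**3 * br_gg

print(br_pi0_to_gamma_Aprime(1e-3, 0.050))   # ~1.3e-6 for eps = 10^-3 and mA' = 50 MeV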

So, another cute experiment bites into the dark photon parameter space. After this update, one can robustly conclude that the mixing angle in the minimal model has to be less than 0.001 as long as the dark photon is lighter than 10 GeV. This is by itself not very revealing, because there is no theoretically preferred value of ε or mA'. However, one interesting consequence of the NA48 result is that it closes the window where the minimal model could explain the 3σ excess in the muon anomalous magnetic moment.