The loss of the 750 GeV diphoton resonance is a big blow to the particle physics community. We are currently going through the 5 stages of grief, everyone at their own pace, as can be seen e.g. in this comments section. Nevertheless, it may already be a good moment to revisit the story one last time, so as to understand what went wrong.
In recent years, physics beyond the Standard Model has seen two other flops of comparable impact: the faster-than-light neutrinos in OPERA, and the CMB tensor fluctuations in BICEP. Much like the diphoton signal, both triggered a binge of theoretical explanations, followed by a massive hangover. There was one big difference, however: the OPERA and BICEP signals were due to embarrassing errors on the experiments' side. This doesn't seem to be the case for the diphoton bump at the LHC. Some may wonder whether the Standard Model background was slightly underestimated, or whether one experiment was biased by the result of the other... But, most likely, the 750 GeV bump was just a random fluctuation of the background at this particular energy. Regrettably, the resulting mess cannot be blamed on experimentalists, who were in fact downplaying the anomaly in their official communications. This time it's the theorists who have some explaining to do.
Why did theorists write 500 papers about a statistical fluctuation? One reason is that it didn't look like one at first sight. Back in December 2015, the local significance of the diphoton bump in ATLAS run-2 data was 3.9 sigma, which means the probability of such a fluctuation at that mass was about 1 in 10000. Combining the available run-1 and run-2 diphoton data in ATLAS and CMS increased the local significance to 4.4 sigma. All in all, it was a very unusual excess, a 1-in-100000 occurrence! Of course, this number should be interpreted with care. The point is that the LHC experiments perform a gazillion different measurements, so they are bound to observe seemingly unlikely outcomes in a small fraction of them. This can be partly taken into account by calculating the global significance, which is the probability of finding a background fluctuation of the observed size anywhere in the diphoton spectrum. The global significance of the 750 GeV bump quoted by ATLAS was only about two sigma, a fact strongly emphasized by the collaboration. However, that number can be misleading too. One problem with the global significance is that, unlike the local one, it cannot easily be combined across separate measurements of the same observable. For the diphoton final state we have ATLAS and CMS measurements in run-1 and run-2, thus four independent datasets, and their robust concordance was crucial in creating the excitement. Note also that what is really relevant here is the probability of a fluctuation of a given size in any of the LHC measurements, and that is not captured by the global significance. For these reasons, I find it more transparent to work with the local significance, remembering that it should not be interpreted as the probability that the Standard Model is incorrect. By these standards, a 4.4 sigma fluctuation in the combined ATLAS and CMS dataset is still a very significant effect which deserves special attention. What we learned the hard way is that such large fluctuations do happen at the LHC... This lesson will certainly be taken into account next time we encounter a significant anomaly.
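(For the numerically minded, here is a minimal back-of-the-envelope sketch of the arithmetic behind these numbers - not the collaborations' actual statistical machinery - converting a local significance into a p-value and applying a naive look-elsewhere correction. The effective number of independent mass windows below is an illustrative assumption, not the ATLAS number.)

```python
# A minimal sketch, assuming Gaussian statistics and independent mass
# windows; this is NOT the experiments' procedure, and the trials
# number below is an illustrative assumption.
from scipy.stats import norm

def p_local(z_sigma, two_sided=True):
    """Probability of a background fluctuation of at least z_sigma."""
    p = norm.sf(z_sigma)              # one-sided Gaussian tail
    return 2 * p if two_sided else p

def global_significance(z_local, n_trials):
    """Naive look-elsewhere correction: chance of a fluctuation this
    large in any of n_trials independent mass windows, back in sigma."""
    p_glob = 1 - (1 - p_local(z_local)) ** n_trials
    return norm.isf(p_glob)

print(p_local(3.9))                    # ~1e-4, roughly 1 in 10000
print(p_local(4.4))                    # ~1e-5, roughly 1 in 100000
print(global_significance(3.9, 200))   # a few hundred windows -> ~2 sigma
```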
Another reason why the 750 GeV bump was exciting is that the measurement is rather straightforward. Indeed, at the LHC we often see anomalies in complicated final states or poorly controlled differential distributions, and we treat those with much skepticism. But a resonance in the diphoton spectrum is almost the simplest and cleanest observable that one can imagine (only a dilepton or 4-lepton resonance would be cleaner). We already successfully discovered one particle this way - that's how the Higgs boson first showed up in 2011. Thus, we have good reasons to believe that the collaborations control this measurement very well.
Finally, the diphoton bump was so attractive because theoretical explanations were plausible. It was trivial to write down a model fitting the data, there was no need to stretch or fine-tune the parameters, and it was quite natural that the particle first showed up as a diphoton resonance and not in other final states. This is in stark contrast to other recent anomalies, which typically require a great deal of gymnastics to fit into a consistent picture. The only thing to give you pause was the tension with the LHC run-1 diphoton data, but even that became mild after the Moriond update this year.
So we got a huge signal of a new particle in a clean channel, with plausible theoretical models to explain it... that was really bad luck. My conclusion may not be shared by everyone, but I don't think the theory community committed major missteps in this case. Given that we have been looking for a clue about the fundamental theory beyond the Standard Model for 30 years, our reaction was not disproportionate once a seemingly reliable one arrived. Excitement is an inherent part of physics research. And so is disappointment, apparently.
There remains the question of whether we really needed 500 papers... Well, of course not: many of them fill an important gap. Yet many are an interesting read, and I personally learned a lot of exciting physics from them. Actually, I suspect that the fraction of useless papers among the 500 is lower than for regular everyday topics. On the more sociological side, these papers exacerbate the problem with our citation culture (mass-grave references), which undermines the citation count as a means of evaluating research impact. But that is a wider issue which I don't know how to address at the moment.
Time to move on. The ICHEP conference is coming next week, with loads of brand new results based on up to 16 inverse femtobarns of 13 TeV LHC data. Although the rumor is that there is no new exciting anomaly at this point, it will be interesting to see how much room is left for new physics. The hope lingers on, at least until the end of this year.
In the comments section you're welcome to lash out at the entire BSM community - we made a wrong call, so we deserve it. Please, however, avoid personal attacks (unless on me). Alternatively, you can also give us a hug :)
The experiments have likely underestimated systematic issues, as recently indicated by a theory paper (https://arxiv.org/abs/1607.06440). You hear similar murmurs in the corridors of the experiments. It's a bit too easy to blame statistics. Maybe the theory community should have thought harder about background estimation and at least asked some questions before writing the same BSM paper ca. 300 times (I'm also looking at you, Jester).
What did you mean by "the loss"? Does this refer to the lack of a press conference at CERN? And what if the signal is still there, remaining in some gray 3-sigma zone, with the experiments deciding to keep playing it down?
Maybe what went wrong was exactly this, a lack of patience :) Oh, and the citation culture of course - why did we ever need to look further than the citation culture?
Imho the only part of the audience who really cares about spoilers just one week before the opening is a handful of blog readers not professionally associated with particle physics, and that's all. No member of the general public will go out of their way to ask, no non-LHC physicists will go out of their way to ask, and as history has proven, no LHC physicists will go out of their way to speak. After all, the latter must be half-dead from ICHEP analysis reloads as we speak.
ReplyDelete"Finally, the diphoton bump was so attractive because theoretical explanations were plausible. It was trivial to write down a model fitting the data,"
Here we arrive at the core of the poodle, as one says in German. These two sentences are not consistent. Yes, it was trivial to write down a model, that's why so many papers were excreted so quickly. But "plausible"? Please, next-to-next-to-next-to-minimal supersymmetric models "plausible"?
This sort of "signal" is bound to happen again, the question is: how should theorists respond? The most distressing thing this time around was the astounding lack of originality in the proposed explanations. Contrary to what people are saying, we learned very little from this episode: it was just a lot of people running around re-animating their favourite zombies. Next time, can we have something a bit less boring from the theorists, please? Even if it's wrong again, at least we might *really* learn something new.
dude, as someone who generally loves to read you, I am afraid you (and others writing these papers) have been without interesting data for so long that you forgot what garbage smells like and you are confusing it with healthy manure . . . one can always sit and ruminate about reasons why garbage is good, but you toss it to the trash can the moment you actually have something to do.
tulpoeid: the survival of some unconvincing but strong signal is very unlikely. Basically, there are two options: the 2015 excess was due to noise, or due to a real signal.
If it were noise, the excess was almost certainly not repeated, and the 4 sigma got diluted to about 1.5 sigma on average. When the luminosity goes up N times, the number of sigmas drops by a factor of sqrt(N) if it's noise. Here, N=5 or so.
If it were a signal, due to a real new particle or new physics, the situation is the opposite: the number of sigmas would have gone up by a factor of sqrt(N), from 4 sigma to 10 sigma.
Your intermediate case requires either no new particle and a big positive 3-sigma noise fluctuation at a predetermined place in 2016 - which is unlikely, 0.3% as normal statistics dictates - or a real particle *and* a big excess in 2015, or a real particle *and* a big deficit in 2016. These combined hypotheses require "several miracles" at the same moment, so they're very unlikely.
So unless the confidence level has grown well above 7 sigma or so, it has almost certainly dropped below 2.5 sigma. Larger datasets simply make the "grey zone" of uncertainty you talk about disappear. That's why people try to collect more data, or more accurate data. ;-)
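For what it's worth, the sqrt(N) argument above can be sketched in a few lines, assuming an idealized counting experiment in which significances add in quadrature with luminosity; this is not how the experiments actually combine their datasets, and the numbers are purely illustrative.

```python
# A minimal sketch of the sqrt(N) scaling argument, assuming an idealized
# counting experiment (significances add in quadrature with luminosity).
# Not the experiments' actual combination procedure; numbers illustrative.
import math

def combined_sigma(z_2015, lumi_ratio, real_signal):
    """Expected combined significance when the total luminosity is
    lumi_ratio times the 2015 luminosity."""
    if real_signal:
        # a real resonance keeps producing events: z grows like sqrt(L)
        return z_2015 * math.sqrt(lumi_ratio)
    # a pure fluctuation is, on average, diluted by the new unbiased data
    return z_2015 / math.sqrt(lumi_ratio)

print(combined_sigma(4.0, 5, real_signal=True))   # ~9 sigma
print(combined_sigma(4.0, 5, real_signal=False))  # ~1.8 sigma
```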
@ Lubos: Well put. I had a real particle and a big excess in 2015 in mind as the most plausible alternative, though only as an example - I'd hate to seriously take the nihilistic tone out of today's post :)
What is worse than writing 500 articles before we have more data? Writing blog posts claiming some result before the experimental data has been shown.
The run-1 data already indicated that the 2015 data was probably an upwards fluctuation - even if there is a new particle. There is still some grey zone left, and the timescale for organizing a press conference would have been quite tight as well, given that most of the data came in this month and data analysis takes some time.
So is it semi-official the bump was a statistical fluke?
Hi, after 30 years (or is it >40?) the 500+ paper score looks pretty much like a last-chance effect for this generation. And one way or another (in the worst and best cases - whatever that means) there will be something to discuss next week... New physics or old sociology?
Despair and excitement are the two sides of the same coin. Why flip it?
J.
This is the end of particle physics.
In my mind a more profound theory is needed here. Since string theory has yet to produce any concrete results, theorists may have to think harder. On the other hand, in terms of money put into research, that could be a hurdle. Much money was spent, and physicists must come up with something besides the Higgs.
If the bump had survived and, with 4 times as much data, were now twice as pronounced - assuming that were the case, would it be justified to announce the discovery of 'something' in an official press conference?
New diphoton data will also be presented at a CERN workshop next Friday at 17:
https://indico.cern.ch/event/466926/timetable/#all.detailed
Ambulances will wait outside, in case somebody needs to chase them
(heart attacks happened during Brasil-Uruguay in 1950)
I do not understand what is wrong with the 500 papers about the possible 750 GeV particle.
That the Standard Model is not the final physical theory is obvious. It does not explain gravity, neutrino masses, baryogenesis and a lot of other things. Unfortunately there is no experimental evidence guiding theorists to the correct explanation of these things. So the theorists speculate - it is their job to speculate. They speculate about how the BSM theory may look, they speculate about what the consequences of their BSM theory are, and they speculate about how those consequences may be verified.
When the possible 750 GeV particle was announced, it was the job of theorists to speculate about how such a particle fits into their models. Because there are no clues for building such a model, there are lots of speculative models, and each of them tries to explain the possible particle (or to state that such a particle cannot exist, though I do not know of any paper with this claim). So what is bad about theorists doing their job - speculating?
Why do you think such quantities have to have a further explanation? Even in string theory one has to cook up a vacuum that enables fitting the data, and this is really not an "explanation".
Just to be completely clear for many of the commentators here:
- The statistics and the systematics from the experiments are estimated correctly, and the experiments quoted the correct probabilities. Casting ANY doubt on this is just projection, honestly.
- The hype of this bit was completely self-generated by the theory community. Overhype definitely has dangerous consequences, so lessons should be learned here.
- The Kamenik et al. article is not correct. The backgrounds are estimated correctly, and even agree with actual NNLO calculations (https://arxiv.org/abs/1603.02663). Leave the experimental stuff to us, we really honestly do have this under control. The statistical significance is simply NOT THAT HIGH.
- About the "global" versus "local" significances: The global significance corrects ONLY for repeated "checks" of the same distribution. It does not account for the >1000 other distributions the experimentalists have generated in papers. Eventually, you do run into statistical flukes.
Stay tuned next week at ICHEP. Hope to see you there!
@ Maxim Polyakov,
Your rant is unfair and offensive.
BSM physics must exist to account for Dark Matter, the flavor hierarchy problem, baryogenesis, the neutrino sector, the naturalness puzzle and many other open challenges. From where we stand today, it appears that perturbative QFT and the traditional Renormalization Group program are no longer adequate for understanding phenomena above the low-to-medium TeV scale. New analytic tools and concepts are likely needed to successfully solve these challenges and pave the way towards BSM physics.
Having said that, accusing the entire hep-th community of engaging in a deceitful enterprise is outrageous. Science never follows a linear and predictable evolution and the failures of today are the seeds of tomorrow's paradigm changes.
All science is speculation, some fields a little more and some a little less. To bridge big gaps you might need to speculate a little more. I wouldn't blame it on BSM physicists though. There is a certain crowd (the ambulance chasers) within BSM that is ready to hop onto any train that passes.
The point here is that experimentalists have *not* reported a 750 GeV resonance. At the end-of-year conference they had to say something. Everybody knew that if you look at a sufficiently large number of measurements, you will find something. Everybody who took their introductory lab courses seriously knew it. The experimentalists did well in this case. The ambulance chasers saw what they wanted to see.
It's the system that favors this behavior. Researchers who take part in these collective movements benefit in terms of papers and citations. Their host institutions benefit too. They can report the numbers back to the politicians.
It seems that science has grown to a size at which it becomes vulnerable to such bubbles, akin to those that occur in financial markets. They are still small, but they can hurt science if they grow bigger. You would need a higher average degree of responsibility among the research staff to avoid it. But why would you expect researchers to be better than financial analysts?
The really sad part about this affair is that the community of people who are likely to write such papers refer to themselves as ''theoreticians''. These people have nothing to do with the traditional meaning of the word theory in the context of physics. They do not know or understand theory. They just learned some simple rules for how to write down Lagrangians and put them on the computer to compute scattering amplitudes. This requires almost no understanding of real hard-core theoretical physics. By referring to themselves as ''theoreticians'' they are really tainting a long tradition of theoretical physics. The relevant people should better be thought of as Quantum Field Theory Technicians.
@AlessandroS: Note the timing. The presentations at CERN are the only afternoon talks in the workshop, and they are two hours after the presentations at ICHEP.
Maxim, when the evidence for a solar neutrino anomaly was less significant than the 2015 evidence for a diphoton anomaly, 500+ papers speculated on solar neutrinos and pointed out that one possible interpretation (large mixing angle neutrino oscillations) was testable with reactor anti-neutrinos at a ≈100 km baseline. The KamLAND experiment checked, and was lucky.
The Greeks tried to measure the radius of the Earth, and they were lucky: the radius is small enough that old Greek technology was enough to measure it.
Columbus proposed a crackpot theory that shrank the Earth's radius in order to get money for ships, and he was lucky.
Whether or not new discoveries can be made with our technology is not our choice, and there is no point in complaining "we should have remained in the cave".
ReplyDelete@ Maxim Polyakov,
I agree there are deep sociological problems in the way hep-th research is prioritized and conducted nowadays, but blaming all theorists for being excited about potential LHC findings is nonsense.
Given the relentless negative results associated with string/brane theory, supersymmetry, and WIMPs/axions/etc., an objective scientist would be concerned.
When it is suggested that we solve our problems by moving beyond the trusted scientific method and reverting to medieval metaphysics, e.g., multiverse speculations, one would be justified in being very concerned.
Strange days indeed!
Never before has the phrase "tempest in a teapot" been more apropos. And I mean both the initial overexcitement and the subsequent moralistic scolding. The physics world is not coming to an end because a bunch of theorists got excited. It's coming to an end because experiments aren't turning up anything.
ReplyDeleteAnd just as hopes for physics BSM are dashed; there seems to be something interesting going on
ReplyDeletein Kamioka (T2K).
Michel Beekveld
Neutrino physics seems to be generally in a healthy state. The hints for BSM may come from that direction, as it all must fit together. Anyhow, it is not bad that nothing is found: we are ruling out a lot of parameter space with a null result. Even if SUSY cannot be ruled out, it gets more unlikely. New people should enter the field and come up with things that can be tested within, e.g., 30 years. Theory and experiment must work together. It could take another >40 years until the next big discovery.
This, if confirmed, would be bigger than the Higgs, right? So, having learned from the aforementioned embarrassments, what would professional members of the collaborations do while going through these last critical checks and reviews, bombarded on all sides by unrelenting pressure for an early preview? You can try to staunchly and silently withstand the waterboarding. But there is one, or rather two, problems with that: 1) a rogue member, even 1 in 1000, and 2) no fun. So what if you actually had some fun with it, and every time you were asked for a spill of early data you made a sad face and shook your head in obvious disappointment? In public conversations too, the ones that are so easily overheard. Wouldn't it solve both problems at once and let you have good, honest fun while going through this stressful period?
And no, I'm not grasping for hope against all hope. I just want to hear (and see) the data as it's delivered by a collaboration of responsible professionals rather than through vaguely sourced rumours. At least this is one point on which those 500 papers would be undoubtedly superior to rumour science.
I reckon you have performed a public service, Jester, by bringing us down gently and in stages from what, with hindsight, must be regarded as just a bad-luck event, itself little different in its overall effect from a rumour propagated inadvertently by CERN.
I think the title of your blog post would be more accurate without the "after".
Every paper written about the erstwhile bump was worth more than the entire field of anthropic multiverses. So people got excited, so what? A little time was spent chasing a dream. You all woke up, and now you can brush your teeth and get on with it. It's far, far better than the people who never wake up, or refuse to acknowledge they are awake and that dreams aren't real life. The data have spoken, and that's that. No looking back, no regrets. Nothing could be more healthy in science. All this hand-wringing over wasted electrons is ridiculous, in my opinion. At least it was a hint of a result. At least you're acknowledging and paying attention to results, both when they appear and when they disappear.
ReplyDeleteAn important point that many of the above commentators don't see (I have no idea how you cannot see it) is that six month ago we already knew that we will have collected significantly more data this year and will know, with much higher certainty, if there is a resonance or not. The future program of CERN did not really depend on it.
Nothing would have been lost for science if theorists had waited half a year longer - a blink of an eye in the history of science. It would not have been the directional decision to stop exploring speculative ideas that some of the commentators make it out to be. What would have been lost are solely citations for individuals, and perhaps a tiny chance for them to be the first to make a pseudo-prediction.
What's at stake for the field is much bigger and affects scientists who work in a different way as well. Researchers from other fields can (and should) use such developments in future funding rounds. I would also find it really hard to explain to the public what the advantage of science is if the way in which we collect the evidence that motivates research becomes literally random.
An important point is being overlooked with respect to the query in this blog:
ReplyDelete"Why did theorists write 500 papers about a statistical fluctuation?"
The question should be "Why did theorists write only 500 papers about a statistical fluctuation?" Where is the work ethic of theorists of years gone by? In my day, far more than 10,000 papers would have been written by this time, with annual conferences and workshops arranged for decades to come. Expenditure on travel and lodging for the theoretical hordes would have been a wonder for all to see. My eyes well with tears at the thought of the banquet tables, groaning under the weight of the gourmet viands. Young theorists: Nose to the grindstone!
LMMI, I agree: there is no need to be upset by the way this thing crashed and burned. As Dorigo says, we can expect plenty more bumps [some of them even near 5 sigma!], so theorists will get to roll the boulder up to the top of the hill many, many more times. Great! Just as long as it takes much longer to roll it up than it takes for gravity to take it back down again.
Back in the real world: the key thing here is that most of the famous 500 papers could have been written 30 years ago. Indeed, some of them probably *were* -- just by different authors.
From this story I learned two things (my two cents):
- LHC experimentalists should re-calibrate their definitions of sigmas... I agree that the evidence was weak, but the OPERA signal was a systematic (a cable) and the BICEP2 one a foreground (dust). In both experiments the signal was there and wasn't a statistical fluke. A 4 sigma statistical fluke means we should not care about any future signal at the same level.
- 500 papers are OK for me. 4 sigma is a strong signal and there is nothing wrong in writing a paper on it (for example, the discovery of the CMB was 3.5 sigma and dark energy 2.8 sigma). What is a little more worrying is that, as far as I understand, there was no paper claiming theoretical inconsistencies... i.e. no matter what the LHC finds, it can be explained and accepted by the community. That was not the case for OPERA or BICEP2...
Can we now give Steven Weinberg the credit he deserves and make him a world-famous celebrity instead of these BSM dudes? I don't think he was present when the Higgs discovery was announced at CERN based on HIS model. He slapped you all in the face, guys, and there's nothing you can do about it. It's Newton, Einstein, Weinberg, not Newton, Einstein, you. Sorry for that.
I believe it was a systematic effect of the machine. Both experiments saw it, therefore either CMS was biased by ATLAS or the machine was defective. I do not trust at all the belief that it was an unfortunate statistical fluke. And the idea that "statistics behaves differently at the LHC" is naïve, to say the least. People are simply trying to find a way to avoid putting the load on the shoulders of the experimentalists, simply because these guys are at Geneva, and so there is the unsubstantiated prejudice that "they cannot make mistakes".
Alessandro Melchiorri, your criticisms and recommendations are utterly unreasonable. Whatever number of sigmas you demand, there will sometimes be false positives as long as that number is finite.
This bump hasn't grown to 5 sigma and the experimenters haven't announced a discovery. This shows how sensible the (arbitrary) 5-sigma bound is - it was really a success for the experimenters. You should worry more about soft scientific disciplines (healthy lifestyle, effects on the climate, whatever) where 2 sigma is routinely used to make big claims. How big a fraction of *those* results is wrong? A majority of the hype in those disciplines is almost certainly bogus.
It was utterly reasonable for phenomenologists to redirect a significant portion of their work hours to the bump because, at 4 sigma or so, the resonance was more likely than other things with no bump evidence at all. Some of the papers were similar to what people were working on anyway, just with a more specific focus; people got training and some real-world excitement; and I think it's totally sensible for ICHEP to hear lots of these phenomenology talks about the 750 GeV resonance even if everyone knows that the bump will go away on Friday.
Also, there are different reactions to "inconsistencies" because these are different possible pieces of new physics. The OPERA superluminal stuff was implausible and forbidden by physics as we know it, the BICEP2 strong tensor modes were different from prevailing expectations and potentially ground-breaking but still possibly plausible, while an extra resonance seems totally compatible with the principles of physics as we know them now. Get used to the fact that different questions have different answers.
@ Alessandro Melchiorri:
> LHC experimentalists should re-calibrate their definitions of sigmas
That is mathematics. What you are suggesting is equivalent to "we should define 5+5=7." No. Given the large number of analyses, a 4 sigma fluctuation can happen. And the experiments should report it; anything else would bias the results.
@Anonymous:
> Both experiments saw it, therefore either CMS was biased by ATLAS or the machine was defective.
This is nonsense. The machine is just delivering collisions, and the experiments analyze the data independently. If you think you found an underestimated systematic effect, publish it.
One thing to remember about significance is that if you ask the question "Is there a peak standing out from the noise in this spectrum?" you aren't doing just one analysis. You are asking the same question many times at many different energies along the horizontal axis. So a five sigma fluctuation might show up only once per 3.5 million analyses, but you almost certainly will see such a thing long before you have made 3.5 million spectra. Depending on resolution, bin sizes, the width of the peak you are testing for, etc., you might need only a few tens of thousands of spectra. And certainly high energy physicists have made at least that many independent measurements by now.
I'm not enough of a statistician to give prescriptions for how to handle issues of global significance, but that is the issue to look at.
@All: We shouldn't forget that science is an inherently risky endeavour. We should acknowledge Jester's efforts to popularize science, in particular high energy physics, on his personal blog, rather than execute him for bringing bad news.
I think the real story here is the 500 papers: the answer is not in the datasets but in the way in which we do the science of evaluation - to ignore this kind of science and excuse it as somehow reasonable is to effectively ignore the real work of science.
ReplyDelete@ Marat S.
I don't recall a single comment which attacked Jester for bringing bad news.
If the re-analysis of the data confirms that there is no 750 GeV resonance, why is that bad news?
The Standard Model with its Higgs mechanism is an excellent approximation to reality, or more. Our way (or rather that of the SM founders) of doing science by forward-extrapolation of the known actually worked. It was likely much better, at least, than fortune-telling, which would almost certainly fail because it is non-scientific.
Particle theory is justified. Alternative models that were not unanimously agreed on, because they were less convincing, got mostly debunked. The LHC will likely further constrain such theories. Particle physics is progressing like it hasn't in the last 40 years, because the exclusion of theories is how you progress.
Theory urgently needed this correction, in view of the numerous incoherent extensions under discussion. Explaining the known mismatches in terms of new models consistent with these new constraints leaves plenty of work to do.
Seriously, why would the fading of the bump be bad news?
@lubos
Dear Lubos, if we start to have 4 sigma hints every year that easily vanish after a few months, then clearly the 5 sigma threshold is no longer reliable. The 5 sigma threshold was correct for LEP or previous experiments, but maybe that is no longer the case for a machine like the LHC where we have an enormous amount of new data. I think that this is a very sensible question to ask.
The BICEP2 claim was in tension with the previous upper limit of r<0.12 from Planck, and I would turn your final comment around:
unfortunately, our principles of physics as we know them now can lead us to accept any statistical fluke above the 4 sigma level.
We are driven just by experimental data (as always) and we need to be as conservative as we can with the next claims.
All the excitement about the 750 GeV "hint" was due, in my opinion, to the fact that it appeared to be the last hope to see something "BSM" in LHC data, something that would confirm that the supersymmetric/superstring theories that so many theorists have been working on for at least 30 years are related to reality.
If nothing appears, these theories will remain theoretical, mathematical ideas that could perhaps turn out to be relevant at the Planck scale, but not before, and so "dead ends".
In my opinion, BSM is already in front of us. It is Dark Matter, and, more puzzling, Dark Energy. But it's a new challenge, and few theorists have ideas about how to tackle the problem. So it is difficult to write meaningful papers in this area...
it's good to be critical, but physics is by far the discipline with the most serious evaluation of the significance of anomalies.
In biology a lot of claims turned out to be wrong, based on pre-selected samples.
In social sciences data are routinely faked in order to support ideologies.
@Alex Small that's exactly the point Jester made in the blog post you're replying to. The conclusion appears to be that some theoretical physicists may be ignorant of the look-elsewhere effect:
http://cms.web.cern.ch/news/should-you-get-excited-your-data-let-look-elsewhere-effect-decide
"A good rule of thumb is the following: if your signal has a width W, and if you examined a spectrum spanning a mass range from M1 to M2, then the “boost factor” due to the LEE is (M2-M1)/W."
AFAIR, the signal widths of the 2011 Higgs excess for CMS and ATLAS were far narrower than those of the 2015 diphoton excess. Yet some theoretical physicists perhaps saw similarities between the two and an opportunity too good not to speculate upon.
@All: The experiments also gave global significances. I don't see why this discussion comes up again. Unofficial combinations of the two experiments produced interesting global significances. It is not an excess of the size you would expect in the ~20 studies shown in December.
@highsciguy:
> Seriously, why would the fading of the bump be bad news?
It is the most boring result.
From Anonymous:
> but physics is by far the discipline with the most serious evaluation of the significance of anomalies.
Indeed. That is part of the reason we got 500 theory papers: experiments don't claim such a significance in a bump every day, and high-energy experiments have a really high rate of repetitions in agreement with older results.
The worst part of this whole affair was the mainstream and popular science press. Multiply by 10 or more the number of stories on the 750 GeV bump, what it could mean, and every other crazy idea somebody could think of. Very boy-who-cried-wolf-ish...
ReplyDelete@Alain Péan
Your statement is misleading in several respects. Yes, the 750 GeV bump probably felt a bit like a last chance for new physics, at least for the next couple of years. But no, it would not have confirmed that superstrings or supersymmetry are related to reality. Those are just the knee-jerk "BSM" scapegoats everyone seems to love to bash.
Dark Matter is not a "new challenge", and there is a forest of theory papers with model proposals. All we need is a search experiment detecting the stuff. With Dark Energy, so far it's not so much a problem as an aesthetic nuisance. If you accept a finely tuned theory, it could simply be the cosmological constant.
I drove through Waxahachie, Texas this month. It left me a little sad to think that we could have explored a lot further in energy. What might have been.
ReplyDelete@mfb
> It is the most boring result.
Do you think that this is a useful selection criterion?
@ AK
I know that there is a lot of experimental work on dark matter (CDMS, LUX...), but the favoured model was (is?) WIMPs (weakly interacting massive particles), with CDM (Cold Dark Matter), i.e. a rather massive particle emerging from supersymmetric/superstring theories.
But the fact that all experimental searches are so far negative, with more and more stringent bounds, points more and more to other scenarios.
It could be, for example, axions (and there is currently one experiment looking for them, ADMX). But that is not the BSM model that most people consider. And is it BSM?
For Dark Energy, it could be "simply" the cosmological constant. But the Higgs field has a VEV around 246 GeV, which would lead to a much, much bigger cosmological constant (and no visible universe). So you have to consider that, due to an unknown mechanism, the contributions from all fields exactly compensate, leaving only a bare cosmological constant with an arbitrarily small value. That is not natural unless you invoke some anthropic principle, which explains nothing.
For everyone here: http://cds.cern.ch/record/2205245 . Fig. 6 gives the evidence for a statistical fluctuation.
> Do you think that this is a useful selection criterion?
Yes. Interesting = more things to explore. Most scientists would describe that as "good".
CMS sees nothing in the diphoton spectrum: https://pbs.twimg.com/media/CpCXCqDXgAAVvNG.jpg:large
Not sure where the plot comes from. Tweeted by https://twitter.com/physicsmatt
Found the source, CMS published their conf note: https://cds.cern.ch/record/2205245
(Jester, maybe you can merge the comments?)
dead ;)
mfb, I can't edit comments in Blogger, I can only delete them :)
ReplyDelete> Yes. Interesting = more things to explore. Most scientists would describe that as "good"
Well, if that's the majority attitude of the community, I guess I know where the diphoton came from.
Maybe someone here wants to read the story of Giovanni Schiaparelli.
CMS excludes events where both photons are in the endcaps. Does anyone know why? I checked the 2015 result, they did the same in the December presentation but added it later for Moriond.
A triumph for the Look Elsewhere Effect!
I mean, time to Look Elsewhere for jobs.
Seriously: young people: get out of here and don't look back.
@ mfb0
The reconstruction efficiency for photons falls in the forward regions, so people avoid relying on both photons of an event coming from there, or at least they tend to list such events separately (notice how that efficiency was not even presented in December). Also, no, it wasn't added in Moriond. As for "does anyone know why": granularity, backgrounds, anyone's guess - that might require a small saga :)
"CMS excludes events where both photons are in the endcaps. Does anyone know why? I checked the 2015 result, they did the same in the December presentation but added it later for Moriond."
no idea, maybe because they know what they do ¿?
Given the lack of new particles at these higher collision energies, it seems to make sense to focus future research efforts (and $$'s) on building an electron-positron collider that can nail down the pole mass of the top quark as well as nail down the mass/width of the Higgs Boson.
As far as HEP goes, it seems that the most interesting question is: is the Higgs field stable all the way up to GUT energies? Or does the potential change sign and become meta-stable?
Until we know the pole mass of the top quark with greater certainty, this is still a completely open question. It is also a key question for understanding whether the Higgs field is both the source of inflation in the early universe and the source of dark energy in the present universe. Being a scalar field that can start out at large values and which has a local minimum, the Higgs field has the potential to be both the source of inflation and the source of dark energy. (Side note to Alain Pean: the Higgs field has a VEV of 246 GeV; however, this by itself tells you nothing about the value of the Higgs potential, V, which has units of energy^4. The potential V could have a value of (meV)^4 even if the value of the field is 246 GeV. This just means that the contributions to the potential from fermions and bosons nearly exactly cancel. This cancellation should not be shocking, because the sum of the rest masses squared of the fermions is very nearly equal to the sum of the rest masses squared of the bosons.)
An e-/e+ collider could really help to nail down the properties of the coupling of the Higgs field to the top quark as well as its self-coupling. Until this e-/e+ collider is built, there are still plenty of great experiments that will be bringing in new data to help answer the outstanding questions:
What is Dark Matter? What is Dark Energy? What is the source and strength of the inflaton field? Why is there more matter than anti-matter? What is the mass of the active neutrinos? Are there sterile neutrinos and, if so, do they have the same rest mass as the active neutrinos?
This is an awesome time to be a part of the physics and astrophysics community, and we shouldn't let the lack of new high energy particles fool us into thinking that there's nothing new to be discovered.
Diphoton results from ATLAS
https://indico.cern.ch/event/466926/contributions/2263194/attachments/1321018/1981039/Diphoton2016.pdf
Diphoton results from CMS
https://indico.cern.ch/event/466926/contributions/2263193/attachments/1321038/1981082/diphotonCERN-Aug5-2016.pdf
A dip at 750 GeV in the new data from both detectors.
The CMS diphoton paper can be found here:
http://cds.cern.ch/record/2205275/files/HIG-16-020-pas.pdf
Anonymous @ 5 August 2016 at 15:22, it seems like you are doing pretty well in the 5 stages of grief and have moved on. If the running LHC couldn't help shape a clear path for future particle physics, I am not sure an e+/e- collider would help us with that. The Higgs self-coupling and the top pole mass are all great stuff, but are they really worth the price tag? We are talking about the same level of cost as the LHC, and it would still need 1-2 decades to come even if we decided to do it today. There are many inspiring questions to ask about the fundamentals of nature. Instead of luring young people to work on something coming in the far, far future, many other sectors - neutrinos, the LHC, astrophysics, etc. - might give you a much better shot.
why e-e+ and not mu- mu+? Much larger coupling to the Higgs...
Dollars to donuts the reason CMS requires at least one central photon is that the fake rate for the endcaps is much higher...
Coming from a HEP refugee with LEP/Tevatron experience...
Anon @ 15:22,
I completely agree. Your positive attitude ought to be an inspiration against all naysayers who predict the end of particle physics...
Well said Rastus. I passed up a postdoc last summer because I got sick of pumping out trash papers just to get the next job. When I read about this bump last year I worried a bit that I may have left prematurely. But the subsequent deluge of papers and now the death of this bump has confirmed my decision.
So older grad students and young postdocs: switch to machine learning/data science while you still have a chance! The work is just as intellectually stimulating, if not more so - you'll do some interesting math instead of just writing MadGraph scripts all day, you can actually discover real things in the data, and get paid 3X as much.
Does anyone have any knowledge/thoughts on these new accelerator technologies and how feasible they are? (They claim 100 to 1000 times more acceleration per meter, but there are still some problems with luminosity and reaching high energies.)
https://portal.slac.stanford.edu/sites/ard_public/facet/Pages/rpwa.aspx
https://home.cern/about/experiments/awake
(thinking of ways to avoid having to wait 30+ years for new experiments to push the energy frontier, and to learn more about nature asap in case the LHC doesn't find anything in the coming years... maybe then the only way to better understand nature may be improving accelerators rather than theories... anyone working on this in the BSM community?)
Really disappointed by this return to normality :(
Good luck to the whole team, you have already made us dream with the discovery of the Higgs boson, which has not yet revealed all its secrets!
Don't lose hope, and keep making us dream with your discoveries!
Best regards!
Aurélien Lepanot
@tulpoeid: Thanks. ATLAS includes those events, seems to be CMS-specific. They don't appear in the ATLAS spin 0 analysis due to the relative pT cuts, but they are part of the spin 2 analysis.
@Anonymous 06:18: "maybe because they know what they do ¿?"
Of course they do, and I was looking for an explanation.
@Anonymous 18:01: mu+mu- is MUCH more challenging than a larger e+e- machine. Muon sources, cooling and acceleration within microseconds, muon decay and neutrino [sic!] radiation damage, ... e+e- can produce the Higgs via ZH; not as nice as a direct mu+mu- -> H but still reasonable.
@Anonymous 22:43: Those are very interesting for more practical applications of accelerators, as in the semiconductor industry or for medical applications. For colliders: we'll see, there are still many open issues to solve and it is not clear if they can be solved at all.
Anon @ 5 August 2016 at 21:22
ReplyDeleteWere you a high-energy experimentalist or theorist ?
I think most data-science work done in the industry is closer in spirit to the kinds of data analyses a high-energy experimentalist does. I am not really sure what skills a high-energy theorist would have that would be useful in an industry data-science job, except maybe a familiarity with advanced math and good general reasoning abilities. My impression is that a theorist looking to transition to industry would pretty much need to learn from scratch about data-science, probability, statistics, computer science, machine-learning, etc. All these skills I guess would be something a particle experimentalist might already know from doing data-analysis.
Thats just my impression. I might be wrong about it.
Appreciate it if you could shed some more light on the specific skills that physicists (both theorists and experimentalists) have that would be readily transferable to data-science positions in the industry.
Anonymous @ 21:22
ReplyDelete".. you can actually discover real things in the data and get paid 3X as much."
3x ???
If you joined a machine learning company and your salary has only gone up 3x, you must have been a hell of a highly paid postdoc, or you must now be working as a janitor. ;)
Although a layman, I find this blog far more informative than New Scientist or SciAm. It has been very exciting to follow the 750 GeV anomaly. Now I am curious to know:
1) How much of the MSSM parameter space has now been searched?
2) With the current limits on gluinos and squarks, how plausible is it that supersymmetry will solve the hierarchy problem (assuming models without too much gymnastics)?
3) With the limits from the LHC and the recent negative results for WIMP detection - are the no-go theorems the only hope left for supersymmetry?
Couldn't agree more with scimom @18:33 and highsciguy @17:04
In the absence of a really compelling BSM theory (i.e. one really curing the SM shortcomings, not just, at best, promising to do so), no doubt 500 (papers) had to be such a big number for the "Quantum Field Theory Technicians" to ruthlessly scan as many partial and loose-ended models as possible.
Or, perhaps, 500 (papers) was after all a hopelessly tiny number, if the ambition was to hit by chance, through a brilliant validation of the so-called "infinite monkey theorem", the correct BSM theory, if any, which would by definition have predicted the ATLAS/CMS null result at 750 GeV!
Let us hope that theoreticians filled with longing for deep understanding, running computer programs when necessary, listening to the experimentalists for what they really say and not for what we want them to say, are not yet an extinct breed. And, above all, that we are not scaring too many of the young people of that breed out of the field.
My personal assessment of the theoretical physics field is that there is too much model building and too little attention to deep foundational issues. This is completely understandable given the structure of individual rewards in the field. The chance that any given person will be able to contribute anything significant to foundational questions within their lifetime is small, and it is much easier to come up with new models that have some tenuous motivation. However, those chances are zero if they don't try. I understand students and postdocs have to publish, but people with permanent jobs should change gear. It is OK to have nothing to say after 5 years of hard work if you are attacking a difficult problem. If more people did that, we'd get far fewer papers published but many more actually interesting ones. Another thing that is very interesting and necessary is the rederivation of old results in new ways. For instance, how many (physical) ways are there to derive the rules of GR, or quantum mechanics, or a spin-1/2 representation of the Lorentz group? Do you really think we know them all? And the list goes on... e.g. there are some people who claim to understand the quantum measurement issue - really? If so, could they do a better job at explaining it to others? Because the textbook explanations are rubbish.
I must say that I was surprised that any experienced HEP experimenter spent much time on this. I have seen many statistical fluctuations in my career. You may want to research the 7 sigma observation of the D*_s at completely the wrong mass by DASP at DESY, or the 4.8 sigma observation of the Higgs at 8.5 GeV by Crystal Ball... also at DESY... etc etc etc. If you have 2000 grad students each looking at 100 plots per day... go and figure out how many 3 sigma effects will be seen each year.
ReplyDeleteIt's fairly obvious why theorists have to speculate about statistical fluctuations....it's either that or mumble about 10^16 copies of the SM.
Guzzy, the compatibility between ATLAS and CMS is what led many theorists to invest so much effort in the peak. ATLAS has now removed 2 events from its old diphoton data. Without these 2 events the old ATLAS peak shifts, making the compatibility with the old CMS data less spectacular.
I was doing my masters in Utrecht in the presence of Veltman and 't Hooft. Veltman was always mentioning the Desert: no other particles than those of the SM. This was even before the W and Z were discovered. So I am pleased to have known this long before it now got established.
ReplyDeleteNext comes the question: do we like this? Do we like the absence of SuSy? I can say: Yes, I like this. I like Nature, much more than our fiddling with possible theories. There is some fun in doing it, but it remains fiddling unless it gets supported by Nature.
I think the field is in big trouble, as the feared outcome of a SM Higgs and a desert appears to be materializing. I remember discussions with Marcela and Howie H. (and other theorists) about the Higgs being all there is "all the way up the chimney" and noticing that it was anathema to them. We are victims of our own success.
I left HEP (some might say I was pushed out) in 2005. I am now in charge of Quantitative Analytics for the US operations of a top-10 bank by assets. There is a dearth of people who really understand how to organize and process data in the financial world, and there are opportunities out there for those willing to make the shift.
This new career is a joke compared to what I used to do; however, HEP was never going to pay me $300,000 a year for working 20 hrs a week...
BTW, I co-authored a pretty famous paper on Higgs searches that most here would have known about. It was quite a result at the time and was borne out by data.
Flakmeister
Not in the field, so I don't think I can throw any bombs. A positive result would have been so exciting that I can fully understand jumping the gun to give a theoretical analysis. But I would raise one question for discussion:
With 500 papers on the topic, should the theoretical analysis have converged to predict a negative result by now?
As I recall, the Cold Fusion flap also raised a flurry of theoretical and experimental interest. A few theoretical models were presented, but by the time the final experiments checked in, the theory community was fairly confident that there was nothing to it. There was a strong contrast with the high Tc superconductivity results of the same era, where theorists were quite happy to develop new models of Cooper pairing in perovskites.
Obviously, if the experimental results are ironclad, you have to figure out theories that are consistent with experiment. But a theorist shouldn't be a rubber stamp, either. Part of the role of a theorist is to push back against dubious results and fanciful theories. I don't know enough about the field to judge how much of that went on.
Hi Flakmeister, I am curious as to who you are. Also, what do you mean by "This new career is a joke compared to what I used to do"? Can you explain? This might be very useful for many particle physicists who are looking to make the switch from physics to Wall Street/data analysis.
@Meyrin Skeptic: Do you have some more details about those non-discoveries? I found old DESY archives ( http://www-library.desy.de/report77.html ) but didn't find the described things in them.
@Anonymous:
> With 500 papers on the topic, should the theoretical analysis have converged to predict a negative result by now?
It is possible that our universe has such a particle. You cannot rule out something that is possible with theoretical papers.
Meyrin Skeptic said: "It's fairly obvious why theorists have to speculate about statistical fluctuations....it's either that or mumble about 10^16 copies of the SM."
When you put it that way.... I take it back, the 500 papers were good solid work after all. One objection though: he never mumbles. Babbles, yes, but not mumbles.
And now sterile neutrinos go after the 750 GeV bump?!
Sterile neutrinos ruled out by IceCube
@Anon at Aug-8 23:10
I would just as soon remain anonymous. :-)
By joke, I refer to "what qualifies as a quality analysis" and what it takes to satisfy management. The 60-hour work week is now a distant memory except for maybe 2 or 3 weeks a year. The challenge now is explaining basic statistical techniques and results to people without any quantitative background. The really advanced actuarial techniques for OpRisk, such as LDA, are now frowned upon by regulators (a reflection of the dearth of skill to review the analysis internally and externally).
The main difference is that the politics in a large financial institution is brutal because merit plays a small role in who decides what. There is no shortage of characters who are clueless bullies trying to climb the ladder or maintain their little fiefdom, sometimes at your expense.
I think the golden age of shifting to the financial sector from HEP is over, at least for the theorists. That being said, people with good quantitative skills are always in demand. Don't underestimate the value of understanding the basics of data presentation in the real world.
If anyone is thinking of making the change, start by immersing yourself in learning fixed income and the associated concepts. Any HEP-Ex Ph.D. should be able to learn the basics in 2 weeks. The data is basically simple time-series-derived n-tuples (relational database entries). One simple test of whether you know what you are doing is to compute your mortgage payment (interest and principal) from first principles in Excel. Also learn how to bullshit your way through SQL, R, SAS and Excel for the interview. You can figure it out on the fly as needed.
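For what it's worth, that mortgage exercise boils down to the standard annuity formula; below is a minimal sketch in Python rather than Excel, with made-up numbers, just to show what a from-first-principles answer looks like.

```python
# A minimal sketch of the mortgage exercise: the fixed monthly payment of
# an amortizing loan from first principles (annuity formula).  The loan
# figures are made up for illustration.
def monthly_payment(principal, annual_rate, years):
    r = annual_rate / 12              # monthly interest rate
    n = years * 12                    # number of payments
    return principal * r / (1 - (1 + r) ** -n)

print(monthly_payment(300_000, 0.04, 30))   # ~1432 per month
```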
-----
Finally, since the topic is diphotons, not many people are aware that there was at one time a ~4 sigma signal at 30 GeV in one LEP experiment when the LEP I, LEP 1.5 and the 161, 172 GeV runs were combined. Only the slug of data from the 183 GeV run put it to rest. Some here may recall that L3 "discovered" 2nd-order FSR masquerading as a resonance, which Ting announced with great fanfare. After that, diphotons were avoided like the plague and received little subsequent attention until much later in LEP II.
@RBS:
Although I'm not a professional in particle physics, I suppose that the IceCube experiment did not rule out a 4th neutrino contributing to neutrino mixing over all mass ranges. However, other experiments such as Planck put strong constraints on the effective number of radiation-like neutrino types in the early universe (this is most certainly wrongly formulated by me), which means that there is quite some evidence in favor of no 4th neutrino flavour.
BUT:
A 4th neutrino flavour isn't even necessary to create sterile neutrinos in the Standard Model:
The weak interaction does not apply to right-handed neutrinos and left-handed antineutrinos, leaving those particles without any Standard Model interaction. So although these particles do not interact in any way via SM forces, the SM predicts them; they should interact gravitationally.
Could somebody more knowledgeable on the matter please explain why evidence against a 4th neutrino flavour is always taken as evidence against neutrinos that are sterile due to their chirality?
@Tobias,
I don't believe IceCube has much to do with the fourth flavour. It just detects microwave photons that would be emitted by decay of very light particles that wouldn't interact with standard matter in any other way (on the assumption that they're abundant in galaxies). So the absence of a signal in the given range would be just that - no sterile particles of corresponding mass.
Thanks Flakmeister.
Btw,
> The main difference is that the politics in a large financial institution is brutal
> because merit plays a small role in who decides what. There is no shortage of
> characters who are clueless bullies trying to climb the ladder or maintain their
> little fiefdom, sometimes at your expense.
Are you sure that you were in experimental HEP before? :)
@RBS:
Why are sterile particles assumed to be unstable and to decay? The regular neutrinos, e.g., don't ever decay as far as we know.
Secondly, if they decay, why should they decay to anything that belongs to the Standard Model if no interaction is shared with those decay products?
Hi @Anonymous, it's explained here: Ice Cube. I'm sure there are members on this board more qualified than myself to comment further, but it's off the main topic.
@Anonymous 10 August 9:05:
The sterile neutrino definition "a kind of neutrino that solely interacts with matter through gravity" used in some articles addressed to laypeople (here cited from Schmitz: Hunting the sterile neutrino) is not the definition used in the technical literature. The Introduction to the Light Sterile Neutrinos White Paper https://arxiv.org/abs/1204.5379 says: "A sterile neutrino is a neutral lepton with no ordinary weak interactions except those induced by mixing. [...] It may participate in Yukawa interactions involving the Higgs boson or in interactions involving new physics." So sterile neutrinos can interact non-gravitationally, but not by direct fermion-gauge boson interaction. That's the defining difference from ordinary neutrinos.
@tulpoeid
Yeah, there is serious politics in HEP-Ex, but at the end of the day a few egos get bruised and someone whines a bit that their version of an analysis was not the headliner or they did not get a certain piece of a hardware project. It's hardball but no one gets carried out on a stretcher.
In the corporate world, it's downright gladiatorial. In HEP, a Ph.D. is the entrance fee, which for better or worse does imply a certain level of scholarship and academic acumen; in the corporate world, MBAs are a dime a dozen and thuggery more than accomplishment is your ticket to the C-suite.
People get shit-canned for the mere reason that a new hire two levels up has been brought in and feels they need to make "changes". In a meeting, if you don't know who is to be scapegoated for some failure, it is likely that you are the one in the crosshairs.
Another difference is that compensation is not always commensurate with skill or responsibility. Overpaid mediocre HEP'sters don't exist. There is no shortage of clowns in banks whose primary ability is to deflect blame and be a sycophant.
Flakmeister
10 August 2016 at 17:34
ReplyDelete"There are no shortage of clowns in Banks whose primary ability is to deflect blame and be a sycophant."
That's the reason I would never put money in bank stocks.
People earn a lot of money but don't create any value.
@ Flakmeister
Definitely, things are not *that* bad or that random in the HEP world.
But:
> Yeah, there is serious politics in HEP-Ex, but at the end of the day a few egos get
> bruised and someone whines a bit that their version of an analysis was not the
> headliner or they did not get a certain piece of a hardware project. It's hardball
> but no one gets carried out on a stretcher.
No.
They do.
When painfully built careers are ruined and life plans changed to satisfy individuals' or institutional teams' whims, ego games and financial plans, it stops being entertaining. "Politics" in HEP does not mean who gets to get a hardware project. It means never being hired again because you proved the experiment's physics coordinator wrong on a minor point (not a hypothetical example, of course). And a very rich and diverse list of other examples which there is no point in expanding on here. The bottom line is that HEP involves good budgets, visible rewards, hierarchical structures proportional to the size of the previous two, a few people who crave to prove they are not inferior (not that there is a lack of sociopaths) and, importantly, a limited number of job openings. Aren't all these traits common to many jobs, you will say; yes they are, I'm just arguing that a background in HEPex makes one suitable for the financial sector in more than one way :)
Despite leaving particle theory some thirty five years ago, and pursuing a career in IT mostly in physics departments, I have always retained a keen interest in its progress. The discovery of new physics beyond the Standard Model, along with humans landing on Mars, is in my bucket 'wish-list' of things I would like to see happen before I go. Not being actually in the field, while having friends and family who are (including children), and hence not having invested a lifetime of work in a particular theory but still being close to the area does give one some perspective. Not much has happened in those 35 years apart from the confirmation of the Higgs and the detection of gravitational waves. And the continued confirmation of the Standard Model that I learnt as an undergraduate. And yes - I understand that there are areas we don't understand - not least the incompatibility of the two major theories - QM and GR.
From the 'outside' I would say that it is not a healthy sign when some 500 theory papers are written in a few months on the mere sniff of a new particle. It smacks of desperation. Even more so, the apparent ability of the BSM theories to fit any new data whatever the numbers, confirms that they are not predictive at all and so fail the falsifiability test. As a colleague once said of a phenomenological talk - 'with that many arbitrary parameters you could fit an elephant'. I suspect that good though arXiv is, like much of the internet, it also contributes to a degree of feverishness whereas the delays of traditional publishing had a calming effect. Are all these papers really contributing anything, or are they just 'noise' making it virtually impossible for any young mathematical physicists to orient themselves as they join the field? Looking at the younger generation, the finance sector and small start-ups are full of disillusioned string theorists. And thanks to 'big science' it is no longer possible to sit in an insurance office or its modern version, the trading room, and think about travelling on a light beam. In fact, it is virtually impossible to do it sitting in a university office ...
The situation rather reminds me of the position at the end of the 1800s when we thought we knew everything apart from a few niggling issues like the incompatibility of Maxwell's Equations and Classical Mechanics ...
To all the pessimists and naysayers on the future of high energy physics:
Scientific progress is unpredictable and must not be deterred by temporary failures. The LHC experiments need to go on in full force and new technologies for pushing the energy frontier must be pursued, including alternative collider designs. There is simply no turning back at this point.
Bear in mind T. A. Edison and his groundbreaking inventions. He never stopped believing that the electric bulb was possible, despite many consecutive missteps.
@tulpoeid
Real politics is not about junior people. If pissing off a PhysCo means you didn't get a tenure-track position interview, you were a marginal candidate to begin with. I know plenty of people who pissed off lots of people and who now have tenure, in no small part because they had real merit.
There is a huge difference between being "chucked to the curb" at 30 after a post-doc and being 50 and getting dumped in a re-organization. The former is likely to quickly find a job that will likely double their income; the latter is likely toast and will have to take a 30% pay cut at a new institution.
I do agree that the corporate-like structures that HEP experiments have morphed into are similar to the financial world. One big difference is that an LHC-type experiment is really a republic consisting of institutions whereas a bank is an autocratic bureaucracy. For example, you don't work for ATLAS, you work on ATLAS.
Bank stocks were an incredible investment at one time as they can be insanely profitable. Going forward there are serious headwinds from collapsing interest rates. Banks live off of the spread, and those spreads have all but collapsed.
Flak
PS Someone should check out PRL and PRD immediately after the November Revolution way back when. Theorists are like moths drawn to a new flame. They are only more desperate now.
How to get a Data Science job as an HEP PhD
Anon @ 6 August 2016,
Sorry about the late reply. You're right that an HEP PhD alone is not enough to get you a data-science job, but you just need a few weeks/months of additional preparation. Roughly 50% of data scientists have PhDs - it's something the industry highly prefers for some reason, although it's not a sufficient condition. This is what you need to study and put on your resume:
Machine Learning - Read a couple of textbooks. Start with "Learning from Data" by Abu-Mostafa, a very concise and easy read. Next read Elements of Statistical Learning by Hastie for a deeper look into many algorithms. Concurrently, do a couple of projects to demonstrate your ML skills - Kaggle has straightforward problems to work on. You'll need to read some blog posts/tutorials to get a decent ranking. Anywhere within top 10-20% will be impressive and give you a lot to talk about during interviews.
Basic computer science - Learn Python if you don't already know it. Read up on the common algorithms and data structures - sorting, searching, trees, linked lists, etc. Practice some coding problems on HackerRank and read a book like "Cracking the Coding Interview" (a minimal warm-up of this kind is sketched right after this list).
Stats/probability - Read a couple of intro stats/probability books and work out some of the problems
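As promised above, a minimal sketch of the kind of interview warm-up problem meant here, in plain Python with no libraries; the function name and the test values are just illustrative:

```python
# Minimal sketch of a classic interview warm-up: binary search on a sorted list.
# Returns the index of `target`, or -1 if it is not present.

def binary_search(sorted_values, target):
    lo, hi = 0, len(sorted_values) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_values[mid] == target:
            return mid
        elif sorted_values[mid] < target:
            lo = mid + 1          # target can only be in the upper half
        else:
            hi = mid - 1          # target can only be in the lower half
    return -1

if __name__ == "__main__":
    data = [1, 3, 4, 7, 9, 11, 15]
    assert binary_search(data, 7) == 3
    assert binary_search(data, 2) == -1
    print("ok")
```

Being able to state the O(log n) halving argument while writing something like this is the level of fluency interviewers typically look for.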
The first several interviews will be a learning experience, but if you put some effort into the above, you'll get a DS job after a few months that will pay 100-130K at first, and if you change companies every year or so you can very quickly get to 200K. And the work can be a lot more interesting and meaningful than finance. The first company may be lame, but after that you will be in high demand and you can pick a company/product/domain that you think is meaningful and interesting. Feel free to ask more questions
-Disputationist
I think that a useful statement was made by Maxim Polyakov; here it is:
BSM physics must exist to account for Dark Matter, the flavor hierarchy problem, baryogenesis, the neutrino sector, the naturalness puzzle and maybe other open challenges.
I keep it in mind in giving a definition of what BSM is. However,
1) I believe that all of these issues are not self-explanatory, all of them would require a bit of discussion, all of them could be relevant or irrelevant for LHC. All of them are theoretical problems, except the 4th one.
2) But above all, I hope we can agree on a point: the will of theorists to address these issues (or, if you accept this definition, to study BSM) does not imply that any published paper that concerns these issues is a *theory*. Most commonly it will be a proposal, a guess, or a crap paper. Note that even the SU(3)xSU(2)xU(1) gauge theory (the one of Glashow, Weinberg, Salam, according to the Nobel committee) has the modest label "standard model".
Dear Anonymous,
ReplyDelete"Not much has happened in those 35 years apart from the confirmation of the Higgs and the detection of gravitational waves."
Perhaps you are not following the news close enough. On the experimental/observational
side we had nonzero neutrino masses, a nonzero cosmological constant/universe is accelerating, nailing down the DM content in a very consistent cosmological picture.
On the theory side we had AdS/CFT which is a huge step forward in our understanding.
"From the 'outside' I would say that it is not a healthy sign when some 500 theory papers are written in a few months on the mere sniff of a new particle. It smacks of desperation."
I'd rather say it smacks of fierce competition for the big prize of explaining a possible BSM signal we've been seeking for decades.
"Even more so, the apparent ability of the BSM theories to fit any new data whatever the numbers, confirms that they are not predictive at all and so fail the falsifiability test."
This doesn't make any sense. It was easy to fit the signal because we knew so little about it. Most of the good explanations that were advanced came with a full list of companion signals that should appear with more luminosity, so that one could discriminate different options with more data. Had the signal been real, this is the way healthy science proceeds, and theorists (well, many of them) did the right thing.
"And thanks to 'big science' it is no longer possible to sit in an insurance office or its modern version, the trading room, and think about travelling on a light beam. In fact, it is virtually impossible to do it sitting in a university office ..."
Why not? That rant sounds fashionable but it is completely unsubstantiated and you offer no reason for it, apart from some vague longing for the good old times.
Thinking hard is what the most creative people in the field do every day, and they do not whine about "big science" but rather would like to have data at the highest possible energies to guide their thinking.
"The situation rather reminds me of the position at the end of the 1800s when we thought we knew everything apart from a few niggling issues like the incompatibility of Maxwell's Equations and Classical Mechanics ..."
What? We keep repeating that our theory is not good enough, so if anything this time is the complete opposite of what you say.
Maybe we should consider (for the first time I might add) WBSM, i.e., WAY Beyond the Standard Model.
I'd say BSM physics is now slowly turning into BDSM physics :P
Let's consider the following vanilla model (?) for a Way Beyond the Standard Model based on solid theoretical grounds. The ultralight neutrino masses in the neutrino oscillation experiments provide us indirect evidence for the ultraheavy right-handed neutrinos with Majorana masses through the see-saw mechanism. The lepton number violation, ∆L = 2, associated with the ultraheavy Majorana mass can be responsible for generating a lepton asymmetry in the very early history of the universe. Then the quantum anomaly associated with the SM could convert a part of this lepton asymmetry into a baryon asymmetry at a slightly later stage, when its age was a few picoseconds. This is a leading model for explaining the baryon asymmetry of the universe (I borrow from arxiv.org/abs/0707.3414).
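For orientation, the back-of-the-envelope version of that statement is the type-I seesaw relation; the numbers below are purely illustrative (a Dirac mass of electroweak size and a light neutrino mass near the atmospheric scale), not taken from the cited paper:

```latex
m_\nu \;\simeq\; \frac{m_D^2}{M_R}
\qquad\Longrightarrow\qquad
M_R \;\simeq\; \frac{m_D^2}{m_\nu}
\;\sim\; \frac{(100~\mathrm{GeV})^2}{0.05~\mathrm{eV}}
\;\approx\; 2\times 10^{14}~\mathrm{GeV}.
```

This is why the right-handed states come out ultraheavy, and why, as the next paragraph laments, the scenario is so hard to probe directly.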
Now the experimental nightmare (or Long Night ;-) is that we have a long way to go in order to prove (or disprove) this model, since it can hardly be probed directly in the foreseeable future.
Nevertheless on the theoretical side I would be interested to know the take of Jester (or any educated commentator) on the recent claim* that it might be possible to solve the fine-tuning problem of the Higgs mass by an accidental cancellation of Higgs mass corrections coming from a singlet scalar responsible for the seesaw mechanism and the right-handed neutrinos (*see conclusion of arxiv.org/abs/1608.00087).
Hi Cedric, I think that fine-tuning arguments, till now, did not lead us anywhere. Just to put it straight, let me ask whether this is physics or metaphysics. I do not see anything measurable! Rather it seems to me that it is a game without rules (but with a lot of players, subject to social hierarchies).
Solving the fine-tuning by an accidental cancellation? Nice joke.
Cedric and Anon. @ 19:03,
Naturalness in effective field theory involves many subtle aspects and cannot be easily discarded.
Refer, for example, to the so-called "scalar boson proliferation problem":
http://arxiv.org/abs/1603.06131v1
Likewise, naturalness goes hand in hand with the Decoupling Theorem and served as guiding principle for past predictions of new physics, see:
http://philsci-archive.pitt.edu/11529/1/Naturalness,_the_Autonomy_of_Scales,_and_the_125_GeV_Higgs.pdf
Fine-tuning arguments are plain common sense. They informed the theoretical prediction of the charm quark and in retrospect they neatly fit with the existence of anti-matter or the kaon mass difference. This is explained in any good review on the subject, which you should've read carefully if you have such strong feelings on the topic. But, why bother, right?
@23:36
The second of your sources looks interesting, but it is apparently by a PhD student in philosophy.
Is this all we can google from the disciplines of history of science and philosophy about the naturalness problem?
I mean, is there any noteworthy and controversial discussion about it?
@09:13
> Fine-tuning arguments are plain common sense.
It was also pretty much common sense that something other than just the Higgs would be found at the LHC.
Even if the fine-tuning argument were guaranteed to work, how can it be helpful in the current BSM case if the predictions based on it are so arbitrary?
Could you also please cite the 'good reviews' you mention, or you don't bother?
"something else would be found at LHC". Perhaps I missed something but, has the LHC been closed? It hasn't. The message you're sending to the hundreds of people doing painfully careful experimental analyses is that we (you at any rate) already know there's nothing to find?
For your education: arXiv:0801.2562
I find the review about Naturalness Criterion and Physics at the LHC (arxiv.org/abs/0801.2562) by Giudice a nice readable historical account of the usefulness of this criterion i) for high energy physics heuristics and ii) for ideology to support specific models as well.
The most pedagogical and illuminating discussions about naturalness for Standard Model physics and beyond that I have read were written by James D. Wells in chapter 4 of his small book Effective theories in physics (www.springer.com/us/book/9783642348914), in The Utility of Naturalness, and how its Application to Quantum Electrodynamics envisages the Standard Model and Higgs Boson (arxiv.org/abs/1305.3434) and in Higgs naturalness and the scalar boson proliferation instability problem (arxiv.org/abs/1603.06131). Nevertheless I regret that, among the considered solutions to the naturalness problem of the SM, he does not discuss the one that seems to me more interesting than SUSY nowadays, after the preliminary results from LHC run 2 and the final results from LUX, but that's another story.
@08:19
I don't at all think that the LHC is over. The experimentalists there do an excellent job. Their contribution advances our understanding significantly, by ruling out many of the mutually exclusive Standard Model extensions. We start to see more clearly now.
ALICE and other experiments that are often neglected also continue to provide relevant insights.
The reference you give is a monograph by a (respectable) hep physicist who extensively employed the principle to predict BSM physics at LHC. I agree to add it to the list of opinions, though my list of plain Naturalness->SUSY at LHC papers is much much longer.
Anon @ 14:52
There is another helpful reference by Giudice, written after the first LHC run, see:
http://arxiv.org/pdf/1307.7879v2.pdf
I am confused by the comment of "Anonymous 16 August 2016 at 09:13".
I do not see a clear connection of the statement "Fine-tuning arguments are plain common sense" with the subsequent statements. From what I studied, antimatter is a property of relativistic wave equations; the one implied by the Dirac equation was seen 5 years later. The prediction of 2 neutral kaons is due to Pais and Gell-Mann (1955) and was proved two years later. The 4th quark was mentioned in a paper by Maki, Nakagawa and Sakata (1962) because of quark-lepton symmetry, it was mentioned again by Bjorken and Glashow (1964), and its mass was calculated by Glashow, Iliopoulos and Maiani (1970); then it was discovered in 1974.
In all these cases, a new particle was predicted, with known mass (Dirac, Pais&GM, GIM). Did we learn something similar from "fine-tuning arguments"? Maybe we know the mass of a single new particle for sure in advance (which one)? Or did we know the Higgs mass? Or did these arguments help us to say that the 750 GeV thing was not a particle? Or did we predict the size of the cosmological constant? It seems to me that the answer is a heap of "no"s.
@16 August 2016 at 09:13
Fine-tuning is not a principle of QM/QFT nor of GR, so it is not clear to me that it is "common sense". A possible guiding principle? Maybe... the LHC will tell.
If one talks common sense, then the anthropic principle may be more appealing: a universe where the Higgs mass is of order Mp is not suitable for life, ergo nobody discussing fine-tuning; a universe with MH << Mp is suitable for life, and lots of hep-ph people doing models using "common sense" to explain it for 30+ years without results.
Not saying fine-tuning arguments shouldn't be taken into consideration; it could be that, like the speed of light being fixed, no fine-tuning is a property of our universe, but it may well not be. I just don't understand why anyone trying to think that it may not be a good guiding principle should be dismissed as not thinking/having common sense... why do you think that?
@05:35 and @16:28
I completely agree with you.
Except that I don't think a statement like "Fine-tuning arguments are plain common sense" can still come as a surprise.
It is precisely the problem that some of us are debating. The community has talked itself into believing that a solution to the fine-tuning problem + perhaps a possible dark matter candidate is sufficient motivation for a BSM theory, even if it introduces tons of extra particles (which we haven't seen a trace of) and has a loose end (that needs another explanation to fix).
The argument has been repeated over and over again. It can't come as a surprise now that some think that it can't be questioned.
We have spent too much energy on model building. It was so cheap. Even if any of the models had been (or will be) discovered at LHC it can hardly be celebrated as a success of theory, because a prediction is not a prediction if you make a hundred different ones.
Quite the contrary with the SM Higgs: the theory community agreed, experimentalists found it, excellent job.
Is the Higgs naturalness problem not the tale of the electroweak scalar frog that wished to be as big as the cosmological constant bull ghost hunting the China shop of renormalizable quantum field theories at the daunting Planck scale?
This last scale is not well defined indeed, since we do not know (if/how it makes sense to discuss) the running of the gravitational constant at the anticipated unification scale of the Yang-Mills gauge interactions of the Standard Model. Or is it?
I wish some astrophysics data in the future will give us better clues to test (quantum) gravity models at ultra-high energy scales inaccessible to man-made colliders. LHC run 2 could also provide us with some help to test seesaw leptogenesis, so let's wait and see at what scale it could most probably occur, provided general relativity is safe enough against "quantum fluctuation" effects.
For the time being I find it interesting to underline that some minimal nonsupersymmetric SO(10) models "can (i) fit well all the low energy data, (ii) successfully account for unification of the gauge couplings, and (iii) allow for a sufficiently long lifetime of the proton ... [and] once the model parameters are fixed in terms of measured low energy observables, the requirement of successful leptogenesis can fix the only one remaining high energy parameter" (arxiv.org/abs/1412.4776).
Note that this quoted article does not address the dark matter issue, but one has to keep in mind that, as several people believe (and I share this view), dark matter could be just a convenient parametrisation of our ignorance concerning the dynamics of gravitation beyond the solar system, and elementary dark matter particles might not exist, if I may divert a famous quote of Jean Iliopoulos referring to the Higgs scheme in 1979... I may be wrong, like the latter statement proved to be in 2012. Let's see next decade the outcomes from the Euclid mission, the Darwin experiment and the like.
Meanwhile spectral noncommutative geometers and more physicists may have better understood the fin(it)e (operator algebraic) structure of spacetime that the Higgs boson might have uncovered according to their claim. Then they would have polished a testable mimetic gravity scenario and have something to say about the flavour structure of the Standard Model that could be falsified as well...
Trying to absorb some operator algebra formalism (not to mention KO theory ;-) to model the geometry of spacetime in a more subtle way than the standard classical commutative differential way is probably not the best medical prescription for a hangover. But may be one could expect from some high energy physicists some discussion about the spectral action principle devised by Connes and Chamseddine as a possibly interesting tool (1004.0464) for model building beyond our beloved (and up to the TeV scale efficient) renormalizable local quantum field theory practice...
In the light of:
i) the "special singlet scalar model where the SM instability is avoided by a tree level effect with small couplings" that is canonically derived from the spectral action with SM fermions + right handed neutrinos as input (see chap 6 of 1303.7244) and is essential to postdict the correct masses of the Higgs boson and the top quark (provided a unification scale in the range 10^13−10^17 GeV see 1208.1030);
ii) the existence of particular Pati-Salam models with gauge coupling grand unification and scalar spectrum derivable as an output provided spacetime is modelled as a specific noncommutative manifold (1507.08161);
iii) the former specific noncommutative geometry is a natural "solution" of an operator theoretic equation describing volume quantised 4D manifolds (1409.2471);
iv) the mimetic dark matter model of Chamseddine and Mukhanov can be derived from a quantisation condition on ordinary 3d space in the noncommutative geometric framework
(1606.01189);
it appears to me legitimate to wonder if there is not a piece of physical insight to distill from these theoretical developments mostly based on and compatible with current phenomenology ... waiting for harvesting time that will come sooner or later (o_~)
"even if it introduces tons of extra particles". I assume you have in mind supersymmetry when you say this, right? It is amazing to still hear this at this level of discussion. In supersymmetry you introduce (or rather recognize) a beautiful additional symmetry with very deep roots and this forces upon you a doubling of your espectrum. You never "introduce tons of extra particles" and if you view it this way it means you haven't even started to understand the idea, sorry. It might turn out that supersymmetry is irrelevant at the Fermi scale, fine, but at least understand what you're discussing. If you don't complain about antiparticles introducing tons of extra particles you have no right to complain about supersymmetry introducing them.
ReplyDelete"We have spent too much energy on model building. It was so cheap. Even if any of the models had been (or will be) discovered at LHC it can hardly be celebrated as a success of theory, because a prediction is not a prediction if you make hundred different ones."
What a load of crap.
@09:30
I don't know anyone who thinks that the target of supersymmetry is to introduce more particles. The paragraph you wrote is a tautology, since 'supersymmetry' says it all. Whether it is aesthetic is subjective. I think that over time we got too used to accepting extra particles. Nobody, I think, would argue to dismiss supersymmetry as a theoretical concept.
The case of anti-particles is similar but different too. They sort of had to be accepted as a consequence of the extension of the quantum theory to the relativistic regime. For both of those theories there was already large experimental evidence. As far as I know, their introduction was precisely not guided by the sort of aesthetic argument you give.
@09:31
"What a load of crap."
Well that's an argument that speaks of a qualified opinion ...
@Jester How did that get through the filter?
Dear homonymous Anonymous of 09:30 (I am the one of 16:28) I agree that supersymmetry is a nice and deep symmetry, but I am not sure how we should qualify the field theoretical models that are based on this symmetry.
First of all, the mass of the supersymmetric particles is not predicted, as you yourself recognize when you say "It might turn out that supersymmetry is irrelevant at the Fermi scale". By contrast,
the mass of the antiparticles is neatly predicted; the difference between these two situations cannot be starker.
Another issue that I find very annoying is the carelessness about the hierarchy problem of the cosmological constant, which also affects supersymmetric models. Even admitting that there is a "problem" to be addressed, it is difficult to admit that the "problem" with the Higgs mass scale can be definitively solved in a model where the analogous "problem" with the cosmological constant mass scale remains unsolved.
But, ok, let us sidestep all these questions. Please tell me one single good prediction of supersymmetric models that is useful for experimentalists. Or do I misunderstand your position, and you simply want to suggest that we should just proceed till we find something?
"We have spent too much energy on model building"
> agree.
"Even if any of the models had been (or will be) discovered at LHC it can hardly be celebrated as a success of theory, because a prediction is not a prediction if you make hundred different ones."
> Not agreeing with this! Why finding the model that describes new physical phenomena shouldn't be seen as a success?? Imagine finding the model that describes how transistors work. Would you care if there had been 10.000 guesses before finding the right one?
In the case of new physics, you would be discovering new laws of nature; if that is not worth celebrating in physics then I don't know what is worth celebrating then...
"Not agreeing with this! Why finding the model that describes new physical phenomena shouldn't be seen as a success?? Imagine finding the model that describes how transistors work. Would you care if there had been 10.000 guesses before finding the right one?"
You slightly over-interpret me there. Of course we need phenomenology. And we absolutely should celebrate what we got as a result of previous model building (we hardly did; a lot of people call the SM the 'most boring' theory).
I am only questioning whether this type and amount of phenomenology is appropriate, when it is apparent that the principles used to construct models leave so many degrees of freedom that we can't pin anything down. What are the odds of discovering a BSM theory, if any, without new experimental input, if we proceed this way?
Sure, the community needs to decide what experiment to build next, but we may need to explain better, for the next collider, why we think that this one really is going to discover more particles.
I said earlier that more speculation may be necessary to find that theory if the new energy scale leaves a big gap, but is it necessary to explore each and every model in all its unobservable detail, even if the experimental evidence is marginal?
In my experience, when phenomenologists start some new work they browse the internet and look for what has not been done, to fill that gap in the arXiv. Shouldn't we sit down, take time, and ask what should be done to progress: is, e.g., naturalness the proper guiding principle, do we understand the foundations well enough ...?
@11:21 "Shouldn't we sit down, take time, and ask what should be done to progress"
Socioeconomics of HEP and circlejerking within subfields make it impossible. Publish or perish + quantity over quality doesn't incentivize sitting down and taking time either.
hep-ph as a whole is in the forefront of most enlightened communities in science's history. /s
@ highsciguy
ReplyDelete"Shouldn't we sit down, take time, and ask what should be done to progress, is e.g. naturallness the proper guiding principle, do we understand the foundation well enough ...?"
Progress in fundamental science is driven by unexpected events and happens through trial and error. BSM theories are built, confronted with experiments and either found acceptable for future developments or refuted. There is no other way around. Hep-th committees are usually uncertain in deciding which priorities are worth pursuing, with so many open questions and "gray" areas in our current understanding of high-energy phenomenology.
For Anonymous 18:55:
I agree that the principles, the foundations, need to be assessed more seriously, but I am not optimistic. I have tried so many times to discuss the naturalness/fine-tuning/hierarchy stuff; very rarely do I see useful reactions. Some people do not listen, or they shout; others want to stay in the group; there are people who have opinions, but keep them to themselves.
E.g., here above, I proposed to have a more detailed discussion of the comparison between the fine-tuning principle and the theory of antimatter. No reaction, silence; the important thing is that somebody somewhere wrote that the "fine-tuning" principle has the same rank as the theory of antimatter.
Honest question to model builders in general.
Do you ever feel bad about the fact that, when all is said and done, 99.99% of you will have worked all your lives on useless and meaningless models that will have contributed zero to the advancement of science? Indeed you all could have gone to play soccer, as AlessandroS suggested, and nothing would have changed (oh well, we learned that nature isn't like this nor like that... yeah yeah, the feel-good-about-your-life argument).
Are you just doing physics as if you were chess players? Not very meaningful, not very useful, but still quite fun?
It is a genuine question. If you don't agree with the assumptions, I'd also like to know your take; I'm just trying to be a bit provocative. Answers appreciated!
I am convinced some noncommutative geometers would be happy to hug the BSM community Jester (o_~) but could you explain (in an update of your 11 February 2007 post) why the princess is so reluctant to kiss the frog?
Is it because of its abstract operator algebraic formalism? Well, if you want to guess how a Higgs boson at 125 GeV could be a hint of "gravity" at the electroweak scale from a "discrete" dimension of spacetime, you need to have some good geometric hindsight and vision I suppose. But I am sure physicists will tailor their own analogies to build a proper intuition.
Now as far as model builders are concerned, one can trust their mathematical skills to understand the prescriptions in 1004.0464 and 1507.08161, check them against up-to-date phenomenology and explore seesaw intermediate scales and gauge unification territories with more than the extended survival (hypothesis) guide to guess the scalar spectrum... provided the spectral action principle and the axioms of noncommutative geometry can do the job. This would be my nagging question to Jester and others.
Now it is time for GW150914 correlated with ELF waves in the ionosphere ...
http://inspirehep.net/record/1467196?ln=pl
"In addition of my being skeptic against the expectations as supersymmetry will one day be an elementary ingredient of particle theories I would find even more unlikely that the mass values would be just small enough to allow detection of such a symmetry in the next range of experiments. That really sounds a bit too much like wishfool thinking. I do know the arguments I just think they are far too optimistic.
However let me end with a positive note. You see supersymmetry refers to the spin of particles, a property involving rotations in space. If proven true - contrary to my expectations - supersymmetry would be the first major modification of our views of space and time since Einstein's theory of general relativity in 1915..." from 't Hooft's recorded message at the "event adjudicating the bet on SUSY first made in 2000" (to quote a Not Even Wrong post on Monday August 22, 2016) that took place yesterday in Copenhagen (see video www.youtube.com/watch?v=YfLNVeHX_wA, quote starts at 18:25).
I find it interesting that he underlines through SUSY the need for a better geometric vision. I guess he has in mind his research program on the quantum description of black holes and his recent results in "removing" the Firewall problem with "highly non-trivial topological space-time features at the Planck scale" (1605.05119).
Time for another bet with a different perspective on the proper new geometric tools built from the multi-TeV data already understood? Well, a more interesting question is what could be the crucial experiment to untie the abstract knots of space-time-matter models, in my opinion.
Water is a chemical substance that is well understood. It's used ubiquitously and successfully every day in a massive number of applications and projects, from your washroom to grand hydroelectric dams. When the temperature falls below the freezing point, water freezes into snowflakes whose shape and other parameters are yet to be fully cataloged. The final and complete theory of snowflakes does not yet exist.
Every student of modern physics learns by the second (or even first) year two rock-solid facts: 1) there is a maximum speed in physical space and it is finite, and 2) entangled particles exercise mysterious and instant action at any distance. Most are quite fine with this, except when trying to put the two side by side: if the space in a field theory is a physical substance, then there are situations where something, at least information, can move in it much faster than the ultimate speed limit. How? Can we use the same or a similar mechanism in everyday practice, if not to move physical objects then for instant communications? And if space is only our shorthand for the combined effects of some yet unknown forces and interactions, then we already know that those do not need to comply with the speed limit. Why? Can we use it?
Yet we write 500+ papers about a non-existent resonance and complain about the end of particle physics. Are we cataloging snowflakes?
Well, a large percentage of working people out there do things that are harmless for the economy, the planet and the lives (and intelligence) of others and themselves, so model building is not a bad thing considered globally.
Now, there is always the danger that a smart ass like you comes along when the final outcome is known (or so you think) to throw garbage on the honest work of other people, but model builders can live with it because they are used to this kind of talk from other particle physicists who consider themselves better.
However, looking for something when you don't know what it is requires this kind of work, and experimentalists need this kind of work. And I'm sorry to say, but whatever work you do, and no matter how smart you think you are, you will never ever do anything 1% more relevant to the advancement of particle physics than what experimentalists do.
There is your answer, and in enjoying it take into account that I'm no model builder myself. It's just that I never needed to downgrade the work of others to feel that my work is valuable and I really hate to see this done.
Well said!
Dear Anonymous of 10.23,
I'd like to offer a few opinions concerning the important points you raise.
1) I am less certain than you are about what the needs of experimentalists working in particle physics are. In the past, experimentalists expected the papers of theorists to concern something with a chance of being seen. Nowadays, the community admits that a lot of mess is made, to increase the volume of the business. The conferences are full of people piling speculations upon speculations, calling themselves "theorists", like Feynman, Einstein or Dirac. Scientists are judged positively if they publish a lot, independently of whether their work has been supported by some fact or not. I am not sure this is progress, also because, now as then, new experiments in particle physics are built considering the proposals of theorists.
2) You speak of "harmless work", in connection with the activity of model-building. The question that comes to my mind is whether the activity of theoretical physicists is really harmless or not. The fuss around the 750 GeV thing had a big impact on popular science journals, newspapers and TV as well. Similarly for the story of superluminal neutrinos, which surely everybody knows, and which was preceded by a lot of theoretical activity on Lorentz invariance violation and all that. Also the black-hole-at-LHC connection was triggered by PRL 87 (2001) 161602. In my view, what theorists do definitely has an impact on the young people interested in science, on the educational system and also on the future of particle physics.
3) Any scientist wants to be recognised for his/her findings. This is normal and fair, but I deem that this implies that they should also take responsibility for what they do. Incidentally, most scientists are paid by citizens' taxes.
4) Perhaps we should read once again Feynman, http://calteches.library.caltech.edu/51/2/CargoCult.htm
Or perhaps, this theorist should be blamed, since he was against string theory.
"The point is that the LHC experiments perform gazillion different measurements, thus they are bound to observe seemingly unlikely outcomes in a small fraction of them." No. This interpretation is wrong. Distributions, taken from either frequentist or from a Bayesian perspective, require two things: large numbers of trials to establish the distributions and identical experiments. The lack of the latter (identical experiments) obviates your intimation. The lack of the former indicates the real problem with 750 GeV. For ambulance chasing theorists -- well, they got their just desserts.
ReplyDeleteLet us not wallow in the valley of despair, I say to you today my friends.
And so even though we face the difficulties of today and tomorrow, I still have a dream. It is a dream deeply rooted in the grand unification dream.
I have a dream that one day the physics nation will rise and live out the true meaning of its creed: we hold these truths to be self-evident, that all gauge coupling constant values of the fundamental interactions were created equal.
I have a dream that one day, once dark matter and dark energy have been revealed as the black gold of the sky, the sons of spectral geometry and the sons of M(atrix)-theory will be able to work together through the brotherhood network.
I have a dream that one day even the Planck scale, a scale sweltering with the heat of raging speculations, will be transformed into an oasis of tamed abstraction and testable phenomenology.
I have a dream that our four current fundamental forces will one day live in a spacetime where they will not be judged by the classical physical dimensions of their coupling constants but by their real quantum geometric nature.
harmless > harmful