So who's leading this race? What kind of question is that, you may shout, of course it's Strumia! And you would be wrong, independently of the metric. For this contest, I will consider two different metrics: the King Beyond the Wall that counts the number of papers on the topic, and the Iron Throne that counts how many times these papers have been cited.
In the first category, the contest is much fiercer than one might expect: it takes 8 papers to be the leader, and 7 papers may not even be enough to get on the podium! Among the 3 authors with 7 papers the final classification is decided by
Citations, tja... The social dynamics of our community encourage referencing all previous work on the topic, rather than just the relevant ones, which in this particular case triggered a period of inflation. One day soon citation numbers will mean as much as authorship does in experimental particle physics. But for now the size of the h-factor is still an important measure of virility for theorists. If the citation count rather than the number of papers is the main criterion, the Iron Throne is taken by a Targaryen contender (trumpets):
This explains why the resonance is usually denoted by the letter S.
Update 09.08.2016. Now that the 750 GeV excess is officially dead, one can give the final classification. The race for the iron throne was tight till the end, but there could only be one winner:
As you can see, in this race long-term strategy and persistence proved more important than pulling off a few early victories. In the other category there have also been changes in the final stretch: the winner added 3 papers in the period between the unofficial and official announcements of the demise of the 750 GeV resonance. The final standings are:
Congratulations to all the winners. To all the rest, I wish you more luck and persistence in the next edition, provided it takes place.
h-factor limited to diphoton excess papers only? That would be 6 for Nomura and 5 for Kamenik and Strumia.
You may have overlooked some contestants. Tianjun Li has 8 diphoton papers with 476 citations, for instance.
Indeed... thanks, I will fix it asap.
Fixed. There may well be more omissions so the game is still on :)
Oh puke. Well this was amusing. At least someone is calling it like it is.
Hi Adam,
I think you missed Roberto (Franceschini) for the total number of citations.
Ciao,
Marco
Fun post :).
For the 2nd category I believe it should be Sanz, Franceschini (as noted by Marco), Torre.
Hi Jester -- when do you think the 2016 run results will be presented?
It doesn't look good IMO that none of the physicists listed come from leading institutions such as Harvard, Cambridge UK etc. Ah well: when the bump fades, as will soon be rumoured by Jester, at least there is the possibility of other bumps in the data to look forward to.
Ambulance chasing is alive and well, yet true science is never a popularity contest.
Despite common misconceptions, in the long run, the number of papers and citations is merely a statistical indicator and not a guarantee for success.
Thanks Marco and ML, it's fixed.
I didn't think the 3-paper contenders could be competitive, so I didn't even calculate their citations.... A few more iterations and I'll get it right :)
Hi Jason Stanidge
are you rumouring that the bump is fading away?
Hi Jester, you missed Wei Chao (7 diphoton papers with 467 citations).
Yes - as from a rumour a couple of days ago - the bump IS fading away (both ATLAS and CMS).
InMyHumbleOpinion - now - the above citations of our heroes should count as minus, because to pollute the environment with a field theory exercise on unpublished evidence of anything is a sign of scientific immaturity, in addition to being a despicable citation-boosting practice orthogonal to science.
Anon, I see only 5 of Chao's papers with 439 citations: 1512.05738, 1512.06297, 1512.08484, 1601.00633, 1601.04678.
What am I missing?
Sorry, my bad. The counting was wrong.
@Anonymous "....because to pollute the environment with a field theory exercise on unpublished evidence of anything is a sign of scientific immaturity,"
In your "humble" opinion, what should theorists have done in the past six months?!
A girl has been told ATLAS does not see an excess in the 2016 data.
Jester, maybe you should give a consolation prize (honourable mention) to people like Djouadi (6 diphoton papers, 473 citations) who just missed making it to the winners' list in either of your two categories.
"...what should theorists have done in the past six months?!"
Waited?
@Anonymous: Spare me... please. Wait for what?!
Should theorists also wait for "discovery" of Composite Higgs, Supersymmetry, Dark Matter etc. in order to write papers about these topics?!
An experimental result came out... people jumped on it immediately. Oh no... the horror! The agony!
The true champion of 750 GeV should be the one who made the most insights about possible explanations.
Some of the people on your podiums should be proud of their work and applauded without irony.
However, I would agree that the 750 GeV bonanza is partly a sociological phenomenon, and that for every insight there were many 'me-too!' papers.
Citation counts are dominated by those who wrote 3 or 2 papers before Xmas.
Jester you should worry more about having written 2 papers, one irrelevant and the other with a ghost!
dear Jester, sorry for the 7 $\digamma$ papers. I proposed playing soccer for 7 months, but all other fellow crackpots@cern were too busy working on diphotons, so I got caught. Hopefully everybody will now believe in the negative rumours that spread so easily, leaving 16 days for soccer
>> The true champion of 750 GeV should be the one who made the most insights about possible explanations.
So obviously the "dip-photon as a portal to warped gravity" should be the winner ...
Well, if the alternative @cern is soccer :(
If it is an actual particle, the winner, if any, will be the one with the right model.
If it is not an actual particle, the winner is everyone not spending much time on it.
May I ask a slightly different question? What about papers written before the hints for the 750 GeV particle were announced? Who would be the most cited if you removed all the ambulance chasing papers?
@anon No, I'm not rumouring that the bump is fading, because I don't know.
However, 2/fb from 2016 could have been presented in time at the recent LHCP in Sweden to confirm that the signal had got stronger. Nothing was added, implying either that it had too little effect to be worth adding, or that it even faded the bump. I therefore doubt anything will be officially said about the bump from CERN until ICHEP in August, because it needs to keep the publicity machine rolling; to keep the public and media interested.
In fact, I'm so confident that the bump has faded, that I will make a $50 Paypal donation to Lubos Motl's blog if the bump is real. This is the least I can do having recently been made famous by him ;)
Let's hope he is an allocco (a fool) and not a gufo (a jinx)
I wonder if the negative rumors are being spread by some people to dampen the number of instant papers on (and close to) the day of the announcement
I'm hearing rumours that both collaborations have agreed to say nothing ahead of the summer conferences. So the lack of news shouldn't be interpreted as the bump fading.
The good news is that the LHC is delivering luminosity at a good rate, despite the issues with the SPS beam dump. If nothing serious goes wrong then we can hope for ~3/fb per week, so they could have over 15/fb by mid-July. This will definitely be enough to reach a conclusion one way or the other.
If there is a new particle, it will be announced at CERN first and it will take about a week to set up the press conference, so if one is announced in July, it would be a strong sign that a new particle has been discovered. If you hear nothing before ICHEP, then the bump is history.
Best comment:
"A girl has been told ATLAS does not see an excess in the 2016 data."
Shame for the very distinguished theorists (who didn't need citations to get a permanent position) who are on the podium for writing ambulance-chasing papers about statistical fluctuations.
@mfb No. The winners are in any case those who wrote the papers and got citations. Because nobody seriously distinguishes what they got the citations for when they apply next time.
I wonder a bit that nobody has spoken about another player in the game: the journals and their referees. If the refereeing process worked and all these crappy, useless or wrong papers didn't get published in a well-known journal, it would be a kind of own goal to put such things on arxiv. At least I think that a sizable amount of unpublished work doesn't look good even if it got many citations.
Unfortunately, this excess has shown that the system is also not working well. A very good example is for instance 1512.04931, which was on arxiv on the first day and which was one of the first papers published in a journal (PRD). It seems that nobody wondered how the author could explain the excess with couplings of 0.1 and additional fermions above 1 TeV, while all others needed huge couplings and light fermions. Magic? No, the formula in eq. (10) is just wrong by a factor 2000 or so. Sure, it might happen that one can miss that. However, plugging in the values in eq. (10) given in the paper shows that eq. (11) is also wrong by a factor of 10. Are these really the standards of a journal like PRD today?
@Jason Stanidge: Did you see _any_ results with 2016 data there? LHCP was simply too early to have analyses done.
@Anon June 20, 10:59: 3/fb per week doesn't work, the machine cannot provide sufficient luminosity and turnaround time for that (unless the SPS beam dump gets fixed, then maybe). 2.5 with ideal conditions, 1.5 to 2 is more realistic.
@Anon June 20, 16:06: I don't think most of those who wrote the papers profit much from it. Most of the papers are not even actual publications.
How about a game of thrones for provocative blog posts?
My money is on the White Walkers of error bars crossing the Alps and wiping out the lot of them.
Hi,
Nice post !! Lol
Btw, how many of the papers on 750 GeV finally got published ??
The bump did not fade, it simply was never there at CMS, beyond theorists' imagination... If you talked to any experimentalist, they were very cautious that this looked like a statistical fluctuation. Now what shall we do with all those papers, count those citations as negative? Come on, people got excited by 2 SIGMAS!!!
Anon, 20 June 2016 at 01:27: I don't know a simple Inspire spell to retrieve this piece of information. So I'll leave it to another blogger...
ReplyDeleteIt would be interesting to compare the ratio no. citations/no. downloads
ReplyDeletefor the sample of 750GeV papers, vs. a sample of 750GeV-unrelated
papers submitted during the same period. Common sense would
suggest this ratio should be << 1. This could enlighten some
specific mechanism of the social phenomenon.
Jason Stanidge, I am happy that I've made you happy by my comment about you. And you made me happy by your pledge of the $50 donation in the case of the discovery. I am taking a screenshot and looking forward to the 750 GeV cernette discovery enhanced by your donation. ;-)
There are a lot of strong opinions up there, so I guess I should clarify mine. I don't think that writing multiple 750 GeV papers is shameful in itself. For me it was the most exciting topic I've ever worked on, and if I had 7 original ideas I would surely write 8 papers too. Of course, the majority of the 400 papers fill an important gap, but I don't think that the fraction is higher than for any other topic. Many papers, including some written by the winners, made interesting contributions that will live longer than this particular bump. So this post is self-irony through the tears, rather than slut shaming as interpreted by some.
On the other hand, I completely agree that there's a problem with our citation culture. It is annoying that an empty or erroneous paper posted in December gets an order of magnitude more citations than solid work submitted a month later. Journals, which in this case had a chance to justify their existence, once again turned out to be completely irrelevant. Unfortunately, I have no good idea what can be done about it. Maybe it's just another example of uncontrollable social dynamics driving a useful metric into irrelevance, just like it happened with the authorship system in HEP-EX.
Finally, there's the separate issue of public perception. By writing 400 papers we wagered some of our credibility, and now there will be an inevitable backlash. I'm afraid there's nothing we can do other than just to bite the bullet.
153 of the 383 papers (and 70 of the first 100) posted on or after December 15, 2015 and citing ATLAS-CONF-2015-081 got published so far
Come on, Jester, when papers get inappropriately cited, it's clearly the fault of the citer - the author of the "followup", right? How could you blame anyone else - who can't affect the fact of citation - for that? People should check each other (papers by others) whether the lists of references are appropriate and if they're not, the inappropriate citers should be criticized or even not cited for that reason. It's as simple as that.
Unless people are misbehaving for some reasons that ultimately look irrational to me, wrong papers naturally have a lower chance to be cited, much like papers that just copy others, severely derivative or unnatural or unnecessary etc. papers.
I don't really think that the situation is so bad. At the end, when physicists believe that "someone is great", they don't really rely on the brute citation counts, anyway.
The number of citations Nc received by the n-th paper is Nc ≈ 0.2 N (1-n/N)^3, where N ≈ 400 is the total number of papers. Deviations above this phase-space factor allow one to identify a few good-quality papers. Of course all 400 papers are irrelevant if there is no diphoton. More generally, all papers about BSM physics are irrelevant if there is no BSM. This is what makes research interesting.
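In case it helps, here is a minimal Python sketch of that formula, taking the commenter's coefficient 0.2, exponent 3 and N ≈ 400 at face value (the numbers and the helper function name are illustrative, not anything official):

    # Phase-space formula quoted above; all constants are the commenter's.
    N = 400

    def expected_citations(n, total=N):
        """Citations the formula assigns to the n-th paper, n = 1..total."""
        return 0.2 * total * (1.0 - n / total) ** 3

    for n in (1, 10, 50, 100, 200, 400):
        print(n, round(expected_citations(n)))
    # roughly: 1 -> 79, 10 -> 74, 50 -> 54, 100 -> 34, 200 -> 10, 400 -> 0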
ReplyDeleteHi Jester and all,
Having written 3 papers on diphoton myself, I can tell you that the ride was quite exciting indeed. In my case, I learnt quite a lot of physics I hadn't thought about earlier, and that to me is the biggest take-home story, if this resonance indeed fades away. All of my papers (in collaboration) went through a rigorous refereeing process, and that made an impact on the quality of the papers as well. The questions of scientific ethics, credibility and responsibility remain of course, but given the current state of our field, where we are starving for something experimentally new to celebrate, and PhD students and postdocs feel the pressure of having to publish continuously, this could not have been avoided. I would not be worried about public perception at the moment, because, well, public memory is short-lived. In any case I agree that there will be a short-term backlash that we may have to swallow.
Jester,
I agree with your assessment. The problem is not only the corrupt citation culture but the rush by the media to over-hype any tentative anomaly in the data. The thirst for sensationalism often tends to steer the public perception in the wrong direction.
Jester, "live longer than this particular bump"? Is there some ground to the rumors then? ;)
>> By writing 400 papers we wagered some of our credibility, and now there will be an inevitable backlash.
This reads as if you believe the rumors that the bump is disappearing ...
How is this as an answer to the typical email asking for a citation:
"Dear xxxx
thank you very much for your email. As you've read, we clearly state that the citation list is not exhaustive. However, we do understand your concern for your paper to be cited. We would be happy to cite your paper if you pay us USD10 (for a beer).
We would really appreciate it if you did not take this as a joke.
Looking forward to your answer,
with best and kind regards,
The Authors."
The diphoton statistical fluctuation of 2 sigmas disappeared. A surprise? No.
Whoever writes that the statistical significance of the di-photon excess is 2-3 sigma is an idiot.
It was very significant in 2015, at least as much as the Higgs in 2012. Altogether the data would give close to 5 sigma and because of this no combination was made. Then everything is possible; at 5 sigma planes fall...
Most likely, if the excess goes away it is a combination of bad luck and human behaviour; the two experiments are not that independent.
Several aspects of this situation concerned me about our field.
1. We are, on the whole, not so good at statistics. The reaction to the experimental results really showed just how eager we are to get excited. In reality, adding all of the caveats, this was approximately 5% likely to be a statistical fluctuation, just taking into account all of the bumps and jitters in the diphoton spectrum alone. Then account for the other such spectra elsewhere, and the chances of finding a 5% fluctuation SOMEWHERE in the LHC data are pretty darned high. This really should moderate excitement, but instead we bought this hook, line, and sinker.
2. The ensuing media fire was actively fueled by our community. This is professionally irresponsible and self-damaging. Not only does it degrade public trust in our field and science in general, but it can also negatively impact model builders in and of themselves. If you have a comparative review of two grants, and one has a balanced "portfolio" of predictions, interpretations, calculations, new ideas, etc, and the other has a huge proportion of "ambulance chasing", then the smart money should go to the former as a more responsible and mature scientist. Furthermore, the "crying wolf" will cause exactly the same reaction as that nursery tale... if and when we DO find something, it will be harder and harder to convince the public (and funding agencies) that we actually have something to pay attention to, and fund. This damages more than our reputation, it damages our funding.
3. The fact that we hear things like "well, what else would we do for X months?" is pretty disconcerting. This is a sure indication of a "bubble" in the market. Even if this WERE real (and it is highly unlikely to be real), at most a few people correctly "post-dicted" (not "pre-dicted") the results, and the rest are, according to their own admission, not contributing to science in any other way. This is very disconcerting! If "BSM" turns into "SM + (insert your favorite model here)", what exactly would be the advantage of retaining people who didn't get it right? This kind of statement is pretty dangerous to make. There is a long list of things people can do in our field to occupy themselves without waiting around for 2 sigma fluctuations from the experiments.
In the end I hope this makes a shift of the zeitgeist of the field. A little more caution, a little less ambulance chasing, and a bit more attention to quality.
>>> Who writes that the statistical significance of the di-photon excess is 2-3 sigma is an idiot.
They are absolutely not an idiot. The significances quoted ONLY contain penalties for other wiggles in the same plot. Even that reduces the individual significances to around 1-2 sigma each. Combine the two experiments, and you get 2-3 sigma.
THEN you consider how many other distributions have bumps and wiggles, and the probability of getting a few 2-3 sigma excesses (across experiments) is really quite high. Even in December I was saying this had a 50-50 chance of being real, at best.
There's a reason we have the 5 sigma standard for discovery. Some may say it is arbitrary, but it gets around the human biases we have when looking at these results. It turns out our brains are fantastically good at deriving patterns (even when none exist), and fantastically bad at estimating probabilities.
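For readers who want the arithmetic spelled out, here is a rough Python sketch of the penalty being described: combine the two quoted local significances in a naive Stouffer sum, then charge a trials factor for the number of independent places an excess could have appeared. The inputs (3.6 and 2.6 sigma local, and the trials counts) and the helper function are illustrative assumptions, not official experiment numbers.

    from scipy.stats import norm

    def global_significance(z_local, n_trials):
        """Penalize a local z-score for n_trials independent places an excess could appear."""
        p_local = norm.sf(z_local)                    # one-sided local p-value
        p_global = 1.0 - (1.0 - p_local) ** n_trials  # chance of at least one such fluctuation
        return norm.isf(p_global)                     # back to a z-score

    print(global_significance(3.6, 500))              # one experiment alone: ~1.4 sigma
    z_combined = (3.6 + 2.6) / 2 ** 0.5               # naive Stouffer combination, ~4.4 sigma local
    print(global_significance(z_combined, 1000))      # both experiments: ~2.5 sigma

With these made-up trials counts the answer lands in the quoted "2-3 sigma" ballpark; the point is only that the penalty is large, not that these particular numbers are right.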
Since this seems to be turning into some kind of theory bashing, may I ask some questions about the attitude of the experimental collaborations:
- How many internal notes were published about this excess?
- How many people got involved in the analyses of that particular channel?
- How do the numbers look compared to other 'small excesses' like the dibosons at 2 TeV?
At least I have heard that ATLAS has put a lot of man-power into that as well. Of course, nobody outside the collaboration can see or prove that. I don't want to defend theorists who just uploaded some crappy papers to arxiv to collect citations. However, there have been many people who really tried to get some insight into what could be going on. And I think this is as justified as the efforts on the experimental side. And still, I think the biggest failures in these games are the journals, which were not able to separate the two kinds of 'contributions'.
The message at 19:55 should be published in a newspaper. I agree with every word. Personally I will count as negative all citations to those theory papers that are simply ambulance chasing papers.
> Altogether the data would give close to 5 sigma and because of this no combination was made.
Where is the evidence for the "because of this"? There was no strong motivation to make a combination, with the 2016 run with much larger datasets coming up soon.
> In reality, adding all of the caveats, this was approximately 5% likely to be a statistical fluctuation, just taking into account all of the bumps and jitters in the diphoton spectrum alone. Then account for the other such spectra elsewhere, and the chances of finding a 5% fluctuation SOMEWHERE in the LHC data are pretty darned high.
I perfectly agree if you just consider one experiment. Then you get the ~2 sigma global significance for ATLAS, or 1.x sigma for CMS, with the precise numbers depending on what exactly you look at. The December talks showed ~20 analyses, so you would expect one such fluctuation per experiment. We got one per experiment, great.
But what is the chance that those two excesses happen in the same analysis, at compatible mass values? In the permille range. If this is a statistical fluctuation, it is the weirdest one for quite some time.
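A back-of-envelope version of that coincidence estimate, with illustrative inputs (the numbers below are guesses, not experiment figures):

    # Given one ~2-sigma-global fluctuation per experiment, estimate the chance
    # that the two land in the same analysis at a compatible mass.
    n_analyses = 20        # searches shown per experiment in the December talks
    mass_window = 50.0     # GeV window counted as "compatible mass values"
    mass_range = 1000.0    # GeV range scanned by the high-mass diphoton search

    p_coincidence = (1.0 / n_analyses) * (mass_window / mass_range)
    print(p_coincidence)   # 0.0025, i.e. in the per-mille range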
>>> Since this seems to become some kind of theory bashing,
There's no theory bashing. There's crappy science bashing. ;)
>>> How many internal notes were published about this excess?
For CMS, there are two internal notes currently: one for the 2016 data, one for the 8 TeV+13 TeV combination.
>>> How many people got involved in the analyses of that particular channel?
If you're asking about how many people performed this particular fit in this particular channel, it's of order 20. But they're the tip of a very, very large pyramid. The underlying hard work (tracker, ECAL, HCAL, lumi monitoring, muon chambers, triggers, calibrations, alignment, tracking, overall reconstruction, photon reconstruction and ID, data certification, data quality monitoring, Monte Carlo generation, simulation, tuning, etc, etc, etc) numbers several hundred, conservatively. This isn't a four-vector generator, there is a massive amount of underlying work that experimentalists are constantly busy with.
If you're asking about how many experimentalists internally ambulance chase excesses, it's negligible. People start these analyses months (years) ahead of time to get everything ready for data. You can't just jump in and say "plot the invariant mass of photon pairs" and get an analysis done.
There are, however, a dozen other places we knew to look at, and already WERE looking at. Dijets, dibosons, ttbar, Z+gamma, HH, VH, vector-like quarks, di-tau, dilepton, etc. In practice, the only analysis that was performed more-or-less because of the diphoton excess was the Z+gamma analysis. The rest were already being done.
>>> How do the numbers look compared to other 'small excesses' like the dibosons at 2 TeV?
What numbers? The number of people? The diboson excess would show up in many, many other places so there were more people already working there (maybe 40). Of course, the diboson (and dijets, and...) analysis teams were already in place.
>>>> I don't want to defend theorists which just uploaded some crappy papers to arxiv to collect citations. However, there have been many people who really tried to get some insights what could go on. And I think this is as justified as the efforts on experimental site.
No one is disputing that there are a few decent papers that didn't just virtually vomit on the arxiv hours after (or before!) the announcement. There is absolutely zero value in adding a simple gauge group and "post-dicting" (not "pre-dicting") the excess in a crappy way.
However, there is exactly zero disincentive to do that, and all to gain. The first papers were the most often cited, and the citation lists were simply cut-and-paste without a single thought of whether they are correct or just garbage. So, by our "citation driven" science metric, crappy scientists get rewarded for doing crappy science as quickly as is humanly possible. It's not science. It's playing a game.
And even THAT is basically forgivable. However, many went even further and played this to the press as a discovery. The media fire that ensued will damage us, mark my words. This is the standard tale of crying wolf. However, it doesn't just hurt the wayward mischievous shepherd theorists... this hurts the entire field. We cannot stand for that sort of nonsense, and should collectively (and loudly) stand against this.
Frankly, people need to get better at statistics to develop a gauge for how exciting a given excess is. This was an appalling measure of our technical ability.
"Not only does it degrade public trust in our field and science in general..."
Forgive me -- I'm neither a sociologist nor a physicist -- but it isn't obvious to me that this is true. In fact, the public seems to be very interested in things where some kind of "action" happens, even when most of that action doesn't lead to a goal. Take soccer, for instance!
It seems perfectly plausible to me that widely publicizing guesses about things that turn out to be false could more strongly motivate public interest (leading to more support of science) than it motivates them to be skeptical. After all, as you point out, people are pretty bad at statistics, and skepticism doesn't come easily. Nonsense shows about the "unknown" consistently get better ratings than factual documentaries.
I'm not saying for sure it would have a positive effect, just that I have no idea, and I don't see your evidence.
Hello all,
As an outsider to the community (I've been out of the field for 20 years), may I offer another perspective?
First of all, stop flagellating yourselves. Some of you seem to think that this is a morality play. It isn't. There was a tantalizing result and people got excited about it. Hell, I got excited about it. I assume that people go into HEP phenomenology because they are excited about understanding the theoretical implications of experimental results. Thus they posted papers to the ArXiv. Good for them; their ideas got put out there to the community who can decide what is right, wrong or ridiculous. Maybe when there is a signal that doesn't go away (fingers, toes and eyes crossed), some of these papers will increase our understanding.
I also get the sense from some of the commenters that no theoretical papers should be published until results are confirmed. Well, good luck with that. First, fortune favors the bold. Second, if you're an hep-ph person who is driven to understand results, why keep a good idea to yourself? And let's be honest: the experimental result may fade into the background (Ha! Points for that metaphor) but the reward/risk ratio is so high (Nobel Prize, anyone?), what about human nature would deny the prestige of one's peers by being right?
Finally, yes, I've been out of the field for 20 years. But my impression of the field is that there's been little real progress. What theoretical work in particle physics was last rewarded with a Nobel Prize? How long has it been since there was a truly surprising experimental result in particle physics? To me, the problems are straightforward:
1. Without new experimental discoveries, the field is dying.
2. This makes me sad. I'm not surprised that people who are in the phenomenological field get more desperate with every passing year.
3. If you've chosen phenomenology, and you're young (grad student, postdoc) you must be thrilled to have something to contribute, regardless of the strength of the signal.
To conclude, given the lack of experimental guidance, I don't think the problem is with the HEP-PH community at all. The problem is that there is no experimental guidance. How the HEP-PH community deals with that ongoing existential problem, well, I can't say.
As a member of the public can I say that I don't think the "ambulance-chasing" is an issue at all. Instead the issue is that thousands of physicists are engaged in a standard-model big-science groupthink damn-statistics exercise that's so useless it would fanfare some ephemeral pattern in the entrails as a newly-discovered particle. Does this really help our understanding? Will it light our cities or take us to the stars or make the world a better place? The answer is no. And meanwhile the fabulous standard model doesn't tell us what happens in pair production, or what an electron is, or how a magnet works, or how gravity works, or about the strong curvature regime. Even though all this low-hanging fruit is already there in other fields of physics.
ReplyDeleteAnonymous, your brain is fantastically bad at estimating probabilities. Once one experiment tells that the main anomaly is in gamma gamma at 750 GeV, the local significance of the other experiment (3.few sigma) is a lower bound to the combined global statistical significance. The excess was significant. If it will go away, the only correct moral lesson that you can learn is "shit happens". If this makes you happy, you are morally equivalent to lawyers specialised in ambulance chasing
@19:55
yeah - you see: theory@cern, which one may hope to be the worldwide high spot of high energy phenomenology, has a pool of technically very talented researchers, but not equally so on the "scientific" side. Someone should say this. People going around taking pictures of anomalies at talks, which then gives rise to ERC grants on Dark Matter, or a new wave of youngsters modifying gravity in all possible ways, which is at best orthogonal to any known phenomenological problem - or investigations that recycle over and over the basic mantra of naturalness (minimal next to minimal flavour!?), which builds on the basic assumption of a desert up to energy scales too high for any experiment. And the reply is always - well, what the hell should we do, if there is no new physics. Play soccer?
Of course, I believe it's not completely their fault - they were selected, not only educated - and I suppose all this to be the heritage of the past school, the 80's, which in a sense left behind a cultural desert in HEP. Why that happened has to be studied historically - but when you have the fathers of a field taking a turn for a dream of a final theory, supporting string theory, etc., boosting the dream of supersymmetry - the dream of a WIMP miracle - and so on - everything becomes clear. As an example, why did Weinberg have to do it? He was already famous, he worked on real physics before - why?
For all of you who are concerned about the "public backlash, " do you actually have any evidence to support the claim that we will all suffer because of the 750-bonanza?!
Look at the past instances: the forward-backward asymmetry, any DM "signal", the W+jets excess. Each time a new, speculative result came out, someone went to the press and gave a "Eureka!" statement. And yet, we all lived to see tomorrow.
As far as citations go: citations are a stupid, overrated estimate of a quality of anything. This has been true for a while, so nothing new is happening here.
@StevieB, I agree 100%.
>>>> Once one experiment tells that the main anomaly is in gamma gamma at 750 GeV, the local significance of the other experiment (3.few sigma) is a lower bound to the combined global statistical significance.
This isn't true, though. A correct statistical analysis would calculate the joint probability that both see it in one place, and that gets a look-elsewhere effect. On the back of the envelope, this is around 2-3 sigma. You then would have to ask yourself how often such a fluctuation would occur after N measurements (we have ~500 papers published on each experiment, so N>1000). The trials factor alone tells you that there is a high probability for this to occur. Furthermore, this also assumes that we're 100% correct in estimating the probabilities. And that the energy scales are exactly the same. And that the widths of the excesses are the same. And that the excesses are at exactly the right place within resolution.
The right answer is to wait for each experiment to surpass 5 sigma to claim a discovery, exactly like we have always done in the past. We need not wait very long, the experiments will release the results when they are understood.
As to the lack of damage here: This is a dangerous game. Superluminal neutrinos, for instance, damaged the field. Such debacles degrade the public trust. In the echo chamber of scientific enthusiasts, this is all understood, but if you're staring down the barrel of budget cuts and need to convince very fiscally conservative yet drastically scientifically illiterate members of elected bodies, it's a long, hard road to show people that basic science is worthwhile. Each and every misstep reinforces their viewpoint that not only should SCIENTISTS not be trusted, but SCIENCE ITSELF should not be trusted. This is a real, palpable danger. We rely on the public to fund us. If the public thinks we're incompetent, then we will see long-term damage.
Right here on this very thread, dear John Duffield is of the opinion that if science is not immediately applicable to engineering applications, it is useless. And I would venture that Mr. Duffield is NOT scientifically illiterate at all. When someone else who is vehemently anti-science to begin with can cherry-pick spectacular failures to highlight how bad and clueless we scientists are, it has real, observable, and disastrous side effects.
Mr. Duffield, as to your criticisms of the field in general, I would remind you that basic science has historically always shown a very small cost/benefit ratio (i.e. small cost, huge benefit). However, the practical applications of scientific understanding often do not show up for 50+ years. For instance:
- Electric fields: leads to electricity... 50 years after discovery
- Electromagnetic waves: leads to radar, etc... 60 years after discovery
- Quantum mechanics: leads to transistors... 50 years after discovery
- General relativity: allows precise GPS timing... 70 years after discovery
Your (grand?)children may be coming up with fantastic applications for this in the future, but in any case, that's not why we do it. Understanding the universe around us is the primary goal. It just so happens that historically, such understanding leads to phenomenal changes in our lives. You shouldn't be asking "How can this make life better" but instead "What is going on in the first place?"
Anon @ 11:58,
"For all of you who are concerned about the "public backlash," do you actually have any evidence to support the claim that we will all suffer because of the 750-bonanza?!"
It's obvious that if you cry wolf too many times, you risk your credibility. Public perception may start to play a role in decisions regarding grants allocation or financial support.
Maybe what would be most intellectually useful is less time chasing the latest bump and more of an almost pedagogical series of "What if?" papers and workshops. Suppose that a group of phenomenologists focused their energy on "What if an upcoming run sees significant excesses in these channels but not those channels, and at an energy above/below some threshold?"
It might tell us something about how constrained BSM physics really would/wouldn't be by the types of models that could/couldn't explain things that current machines might see. It would be intellectually valuable, but it wouldn't require anyone to stake their credibility on whatever bumps are currently the subject of rumors.
To me it was a mistake to present the flukes at the CERN December Jamboree. Like the superluminal guys, though not at the same level, they seemed to act under the pressure and the urgency of an "imminent" discovery that ought to be pre-announced. Unfortunately, the "annunciation" turned out wrong. Again.
Ervin, nobody claimed that the diphoton exists. It's God who lost credibility by playing dice in such a dirty way. Anyhow nature is quantum, more data is coming, we will see what nature looks like, and then we decide whether to throw a party or a funeral. Anon, playing soccer is more useful than complaining about sociology or against the universe.
ReplyDeleteJester,
Love what you've been doing with the blog. I had a question for you. Is there any chance you could do a write-up on what other signals the people over at CERN are intrigued by? I'm hoping there are hints of other things going on, even if the 750 GeV signal turns out to be a statistical fluke.
> Will it light our cities or take us to the stars or make the world a better place?
The fundamental research of 1850-1950 is literally lighting our cities, with electricity, semiconductors and so on. Our world today would not exist without fundamental research done decades ago.
> And meanwhile the fabulous standard model doesn't tell us what happens in pair production, or what an electron is, or how a magnet works
It allows us to predict their properties with excellent precision. "But what it really is" is philosophy; physics cannot answer those questions. Philosophy cannot answer them either, but it tries.
Anon @ 15:45
"It's God who lost credibility by playing dice in such a dirty way"
You have a good sense of humor...
@Ervin Goldfain:
You cannot possibly compare the 750 excess with the OPERA superluminal neutrino result. The OPERA result was due to a faulty experiment. It indeed was very embarrassing that they didn't check everything before rushing to publish the result. In the case of the 750, the experiment is (as far as we can tell) performing well, and it is looking like statistics (over which we have little control) is to blame. It's nobody's fault that it turned out that way.
@Anon 18:27,
I did not make this comparison, no idea what you are referring to.
Just a layman's comment: I think the physics community shouldn't be so self-critical. First of all: how great that this bump appeared - even if it goes away. No one could rule it out - it was a possibility even if no one had thought of such a bump earlier. It shows how important experiments are - the unexpected just might happen. It also shows that the LHC really is a step into unknown territory -- and when have we last gone to such a place? Second: how important careful evaluation is - how difficult it is to make meaningful statistical statements. And finally: I hope a lot of kids get interested in science by being able to follow the drama involved in scientific research. Anyway: thanks to Jester for his blog!
ReplyDelete@Anon 23 June 2016 at 15:25,
Regarding: "To me it was a mistake to present the flukes at the CERN December Jamboree."
Hold on, are you serious? I don't know what you are remembering about the presentations, but if I recall correctly, the presentations of both experiments were very low key, to the point of avoiding even the word "hint", let alone talk about any potential discoveries. You want experimental physicists to desist from presenting their data?
That's a little sad, the bump going away...
How credible are the rumors from CERN?
Edwin Steiner
you can judge from the ATLAS slides
https://indico.cern.ch/event/442432/contributions/1946921/attachments/1205572/1759985/CERN-Seminar.pdf
the Higgs observation in the diphoton channel is reported at 1.5 sigma (not clear if local or global), the bump at 750 GeV at 3.6 sigma local, 2 sigma global. Even if the speaker reported this with very low "volume" of voice, this report per se is striking: for sure a revolutionary "annunciation"; it is very inaccurate and naive to say this is an "informal and innocent" public discussion of data. Indisputably, this was an extraordinary claim. However, as was said many times, extraordinary claims require extraordinary evidence, and the amount of data accumulated in 2015 was not enough to provide such a degree of evidence: therefore this result should not have been discussed, period. The simple fact that it was disclosed was an implicit endorsement of its credibility and solidity.
Strumia and his many friends are not crappy guys. They were fooled by the devious way in which the data have been presented: if I am told that the Higgs is observed at 1.5 sigma and the new ghost at 3.6 sigma, what am I supposed to conclude, that it is time to go and play soccer?
So the game of thrones winners are not the "poor and naive" theoreticians who spent their time doing calculations instead of playing soccer, but the CERN management, who by the way is well paid for doing (also) a decent PR job.
i think the problem we all have with the current situation of our field can neatly be summed up like this:
1) too many people are into it for the fame
2) too many people have very serious (i.e. existential) pressures to produce something
this necessarily results in a spew of papers with the citation count meaning next to nothing.
so what is then a good metric of a person's achievement? i think the answer is as simple as it is obvious. instead of looking at the citation count of someone you should actually read their papers and see how good they are.
of course, this would take time. time which is rather spent in churning out one's next paper.
Dear anonymous at 06:26, experimentalists should just report data (as they did, and as they will hopefully do again soon) rather than making political decisions and caring about PR. People who look at data are expert and responsible enough to decide for themselves. If you add a PR committee that decides what experimentalists must tell, a tweet by Jester would become more credible than a soviet-style official announcement
ReplyDelete@Anonymous 24 June 2016 at 06:26:
I had a look at those slides and they confirm my recollection of a very sober presentation of experimental results. To me there is not the slightest trace of any misrepresentation or raising of false hopes in these slides.
I suspect that you might misunderstand the meaning of the reported data. Let's assume that there turns out to be nothing interesting at 750 GeV. This does not mean that there was anything wrong with the claim of an excess of 3.6 sigma local in the December presentations, or with the fact that this excess was bigger than the one in the Higgs channel you mention. The experimenters did not make these numbers up, they were calculated from the data taken by the detectors. Unless there was some systematic mistake in measuring, modeling, fitting, etc. (which I think you are also not claiming) these numbers are just facts -- they happened.
Nature is quantum mechanical and therefore there is an irreducible randomness in the results of measurements even if you do not make any mistakes. There is some non-zero probability that unusual events that look very interesting just happen a few times more often than expected on average just by chance, even if no unknown physics is going on. All the people to whom those slides were addressed know that.
Many people (including me) got very excited when seeing the December results, but I'm quite sure that nobody took these numbers to indicate that the 750 GeV resonance would be more certain than the Higgs boson, or something crazy like that.
BTW, I do not know if those theorists are poor, but I hardly think they are naive, especially when it comes to their very own field of specialization.
@Anonymous 24 June 2016 at 06:26:
> Indisputably, this was an extraordinary claim. However, as was said many times, extraordinary claims require extraordinary evidence, and the amount of data accumulated in 2015 was not enough to provide such a degree of evidence
The experiments reported what they saw. It is equivalent to saying "we rolled 5 dice, all showed 6, the probability for that event with balanced dice is X". There was absolutely no claim beyond that. You suggest to (a) not show an existing study, and to make it worse, (b) make the decision what to show depend on the results - which introduces a bias in the shown results.
> if I am told that the Higgs is observed at 1.5 sigma and the new ghost at 3.6 sigma, what am I supposed to conclude, that it is time to go and play soccer?
Yes exactly that. The 2015 dataset was small but at a high energy, the weak Higgs signal is not surprising, and also the chance to see something new is there - but it could also be a statistical fluctuation. We know the Higgs is not one because we observed it in the much larger datasets of 2011 and 2012 already.
@chris 24 June 2016 at 09:41
>> 1) too many people are into it for the fame
Are you sure they are not into it for other reasons: maybe because they like the field, and/or they think they are capable of making contributions to it ?
BSM bump going away, Brexit... quite a summer we're lining up this year, eh?
Bump-exit, Brexit... Now all we need are some citation-worthy papers that attempt to explain the connection between the two. ☺
Anonymous: I'm not "of the opinion that if science is not immediately applicable to engineering applications, it is useless". I just don't like to see the public and politicians losing faith in fundamental physics because all they've had for decades is the "discovery" of particles which last for no time at all. That's what the problem is, not a bit of ambulance chasing.
mfb: physics can answer those questions. We do physics to understand the world. Or at least we used to. Before people like you gave up on that.
So, which "what if" papers were the most interesting? I didn't read them sorry.
I expect that we are all going through our own 'seven stages' of grieving, since it now appears that our hope for an imminent new discovery had been misplaced. I don't think any blame should be allocated but we know that 'Blaming' is one of the 7 stages, so perhaps not surprising to see it playing out in the comments section above.
It was a refreshing discussion but I'll reserve my grieving till experts rather than rumors have their say. After all, it's only a few more short summer weeks so hold off the grieving routine just yet ;)
ReplyDeleteNick M.-
We could look at the number of 750GeV papers published by each UK university, and the citations to each paper, and correlate those with the votes for/against Brexit.
> mfb : physics can answer those questions. We do physics to understand the world. Or at least we used to. Before people like you gave up on that.
What would such an answer even look like? We do physics to get models that allow us to make predictions. You can call those models "understanding the world", or you can choose not to - doesn't matter, that's all you can get. We have incredibly accurate models of pair production, electron properties, magnetism and so on.
> because all they've had for decades is the "discovery" of particles which last for no time at all
Meanwhile, they were responsible for the development of the world wide web, made modern PET scans possible, contributed to superconductor research for MRI scans, developed heavy ion cancer therapy, advanced grid computing and so on.
Worst comment up to the middle of this page (because I gave up at some point):
"I would not be worried about public perception at the moment, because, well, public memory is short-lived."
Closely followed of course by "what should we be doing all this time?". (Looking for a meaningful job, perhaps?)
@03:01: the usual stages of grief are denial, anger, bargaining, depression, and acceptance.
ReplyDeleteHi, sorry to interrupt;
a silly question maybe, and "from the public with short-lived memory", not from a physicist making his living from long-lived theories: What is wrong with the SM if this 750 GeV stuff just adds to it without anything else? I read the Higgs paper, and from the Goldstone theorem there should be "at least" one boson for each broken field. Why not 2? Why would 2 bosons imply a new zoo or anything else?
Thanks,
J.
Rumours return, boomerang-wise ;)
https://www.sciencenews.org/article/hints-new-particle-rumored-fade-data-analysis-continues
Yeah, very interesting discussion about the sociology of our field and the 750 GeV rumors. Quite off-topic, though. So everybody missed an important development: due to a new paper published last week https://arxiv.org/abs/1606.07082, Kamenik rises to the 2nd position!
The rumors made a full round, so more rumors are based on old rumors? Okay, clearly time to ignore them.
@Benno: 750 GeV in association with a top quark? I think that should have been noted by the checks for additional objects in the events and maybe even by the missing Et spectrum.
Any good rumors? Maybe some new bumps?
ReplyDeleteI note that the abstract of M. Veltman at the Lindau Nobel Laureates Meeting announced :
"The latest experimental results from CERN indicate that there is possibly a non-anticipated particle with a very, very large mass (about 800 times as heavy as the proton). While it is sofar a quite uncertain result, it is nonetheless interesting to speculate on the existence of such a particle. In this lecture some speculations on this possibility will be entertained."
In the end, his lecture on Monday (http://www.mediatheque.lindau-nobel.org/videos/36128/lecture-finding-higgs/meeting-2016) did not mention any 750 GeV scalar but was a nice historical overview of the theoretical Higgs construction that makes the quantum theory of Yang-Mills fields renormalizable, in order to calculate observable results and compare them with experiment.
One more piece of information from the Lindau Meeting :
Carlo Rubbia asked CERN officials for the latest statistics about the 750 GeV signal at a panel discussion (http://www.mediatheque.lindau-nobel.org/videos/36130/panel-discussion-glimpses-beyond-standard-model/meeting-2016, question raised at 30:00) and got the following answer from Fabiola Gianotti, director general: "I will not call it a signal, it was a hint, an inconclusive hint... I hope we'll know more soon, time scale a few weeks"
The risk of getting it wrong from a quick analysis of incomplete data is too high for any responsible professional to serve as the source of such rumours, and there is no scientific merit to be gained from them either. So is an unreliable source of unreliable rumors worth going into grief mode over, or should we just wait a few more weeks to get the real deal? If no announcement is made before the end of this month (that's just over three weeks), then "the truth is out there" :)
Has the BBC stumbled on a different rumour? This is today's story and they are still excited :) ... or are they just way behind the tide?
ReplyDeleteHas the LHC discovered a new particle?
Bravo to "Resonaances" and its "Game of thrones..." post that makes the leap into the references of an arxiv article coauthored by Mikhail Shaposhnikov that proposes the "data driven" diphoton standard model background may not be so smooth around 750 GeV due to the interplay between the granularity of the electromagnetic calorimeter and the physics of some electromagnetically decaying boosted neutral mesons (https://arxiv.org/abs/1606.09592)!
ReplyDeleteThis is no rumour though: Tommaso: 750 Gev results 2015
In either outcome we'll learn something very interesting. Would a statistical fluctuation simultaneously in two experiments and in the same place be more likely than an intelligent proton screwing up physicists' attempts at cracking the puzzles of Nature? Three body problem :)
@Cédric Bardot: An interesting approach, but I don't see how that would work, and the author doesn't seem to know the experiments.
- The width shown in Figure 1 is way too large (note the linear scale).
- 6 photons would lead to a nearly 100% conversion rate with odd behavior (track energy does not match calorimeter energy). This should be really obvious for ATLAS and CMS. I don't see any mention of conversion in the whole paper.
- The experiments have methods to estimate and (if necessary) subtract exactly this kind of background.
Hi Jester, the time left until ICHEP has halved since you wrote this post, so I'm thinking we can now pretty much abandon hope for a 750GeV signal since surely someone would have leaked any contrary news by now. Is that unduly pessimistic do you think?
Given the sensitivity and complexity of the situation, I would think that any responsible professional with direct knowledge of the latest developments WOULD NOT leak any information prematurely, because the risk of getting it wrong at this stage just wouldn't justify whatever gain, real or perceived. It's OK to announce the "uncertain hint" once, but doing it all over again would make a joke of the team in the eyes of the public. And given the success of this incredibly complex experiment so far, I tend to think that its team is made mostly of responsible professionals. So we're really down to chewing on rumours of unknown origin and quality - or just patiently waiting the two or three weeks that remain.
Yes I think you are right RBS, that announcing 'hints' multiple times would be ridiculous. A more recent rumour is that the bump has relocated ('kangaroo like'!) to a higher mass, but at even lower significance than had been claimed for the earlier 750 bump. If this is the case, I can see why nobody has been game to say anything... being perceived as the boy who cried wolf would be front of mind.
@Anonymous, really? This sounds right out of "Three Body Problem" (the link in the post above) I wonder....
ReplyDeleteNo announcement yet, but the ICHEP parallel session schedule is available. Friday 5th, 8:00 to 9:20:
Search for high mass Higgs bosons using the ATLAS detector (15' + 5')
Searches for high-mass neutral Higgs bosons using the CMS detector (15' + 5')
Search for a high mass diphoton resonance using the ATLAS detector (15' + 5')
Searches for BSM physics in diphoton final state at CMS (15' + 5')
+ 6 theory talks afterwards
http://indico.cern.ch/event/432527/sessions/95227/#20160805
I guess that session will be crowded.
Thursday 4th:
Search for supersymmetry with diphotons in pp collisions at 13 TeV with CMS (15' + 5')
Searches for SUSY in photons and tau channels with the ATLAS detector (15' + 5')
http://indico.cern.ch/event/432527/sessions/95215/#20160804
It seems pretty clear 750GeV is dead and buried and equally clear that 'sigma' doesn't mean exactly what we all thought it meant when it comes to HEP experiments!
On the other hand, there is such an avalanche of data being collected that the collaboration might still pull something else out of the hat, but it is hard to see how it could be anything but a massive letdown compared with a confirmed digamma signal.
Anyway, we have managed to pass the time for a few weeks with idle speculation, ICHEP doesn't seem so far to wait now.
ICHEP abstracts appeared online in timetable. Long live supersymmetry ?
ReplyDeleteI'm not sure how is it clear? OK, show 2015 graph with tantalizing hint; show 2016 with nothing there. Maybe you can spend 15+5 min on that, double that for Atlas and CMS. But 6 (six) theory talks, about nothing?
Of course they could have put it at #1 in big letters. But that would mean raising the stakes even higher than the last time, with three weeks to go. To me, nothing is clear yet. I guess it will all be clear on the 5th at the end of sessions 3 and 4.
Today in hep-ph there is a paper containing the immortal sentence,
ReplyDelete"The anomaly has led to an excessive number of publications, so we feel that adding one more, and hopefully useful publication can be justified somehow."
What fun, eh?
Hi Jester,
Will your SUSY bet with Lubos get decided by the upcoming LHC results? 15 fb^-1 of data from CMS plus 15 fb^-1 from ATLAS amounts to 30 fb^-1 from the LHC...
@RBS 17 July
I think it is clear, since said anomaly, which would have completely upended physics, must have been safely discovered by now if it were real, yet I perceive no 'buzz' of any kind.
Also, the rumour from reliable sources that it is dead has been well known for a full month and yet nobody (even anonymously) claiming inside knowledge has sought to challenge that. Not even a troll has bothered to do so, as far as I can tell. As for the theory papers at ICHEP 2016, I suppose anyone has the right to submit a theory paper for consideration, but how sheepish they might feel when presenting it is another matter altogether.
Also, there has been no attempt by CERN to prepare the ground for any earth-shattering news, even though they are sitting on mountains of data and analysing the gamma gamma data has probably been their highest priority.
The ever-reliable Dorigo seems to maintain his 'hunch' that there is no new physics within reach of the LHC, ever, even though he would be happier than anyone if that turned out not to be the case. This thing is done, I'm afraid to say.
@Anon why guess when we can compare it, e.g., with the Higgs discovery. There was the first report that raised the excitement, then the discovery announcement at the seminar at CERN. But I don't recall any significant buzz in between, at least not weeks, as opposed to days, in advance. The results have to be collected, the data processed, reviewed and approved by both teams. This takes time, and I simply don't see any obvious shortcuts to the final answer.
RBS, your description of the scientific procedure is impressive. Extraordinary claims require extraordinary evidence. Perhaps this is the quiet time that people need to finish everything? In a week or two, I suppose we should know the answer, no?
How could there be trustworthy rumors if the data was blinded? Sorry, but this rumor did not even pass my "does it make any sense" filter at stage 1.
There is a new theory paper discussing a possible underestimate of the jet+photon background in the relevant mass range, which could influence the background fit. I haven't read it fully yet, but the authors seem to know what they are talking about: http://arxiv.org/abs/1607.06440
If their estimates are accurate, the significances could be overestimated a bit (0.3-1 sigma) - the excess is still interesting, but it makes a statistical fluctuation more likely.
@sciing: In analyses like this (where you don't want to wait for the dataset of the full year) unblinding is typically done once the analysis method is fixed and relevant data has been taken - this could have been done quite early.
@Anon it's more credit than I rightfully deserve but for the final answer we'd have to go all the way to the Planck scale and then...
Officially it is/was blinded until a few days before ICHEP, I guess the famous 15th of July.
http://www.quantumdiaries.org/2016/06/17/enough-data-to-explore-the-unknown/
BTW: the rumor was posted on the 21st of June, so it must be based on the limited data taken up to mid-June, less than 3/fb. Second filter failed.
@mfb I can't judge their expertise in calculating backgrounds, but they misunderstand the statistical issues and the experimental analysis. Their method III of estimating backgrounds describes extrapolating a functional form from sidebands into a signal region. We're told that this was the approach used by ATLAS and CMS. It wasn't. ATLAS and CMS fitted functional forms to the whole diphoton spectrum (with no signal regions, sidebands or extrapolation), once without a signal, and once with a signal.
Their statistical analysis is extremely crude. They don't compute a log-likelihood-ratio hypothesis test or anything like it; they find the p-value as the probability of observing n >= n_observed in the bin with the biggest excess, assuming background only and Poisson statistics, as a goodness-of-fit test. Even with the limited publicly available data, they could have done much more.
Unsurprisingly, they can't reproduce anything like the official numbers for the significances. But we're then told that although their absolute significances aren't reliable, the delta significances are. This is strange - given the relationship (normal CDF) between sigma and p-value, it's extremely hard to believe the significances could be affected by a systematic shift in such a way that the delta significances remain correct. I suppose sign(delta significance) could be correct.
The physics might be great, but the statistics are shaky (they also say weird things about the LEE). Do you agree?
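To make that concrete, here is a minimal sketch (my own illustration, not taken from the paper; it assumes numpy/scipy and uses invented event counts) of the single-bin Poisson tail estimate described above next to the standard asymptotic likelihood-ratio significance for a counting experiment, with the p-value converted to sigma through the normal CDF:

```python
import numpy as np
from scipy.stats import poisson, norm

# Invented numbers for illustration: b expected background events,
# n observed events in the most discrepant bin.
b, n = 5.0, 12

# Crude estimate described above: p = P(N >= n | background only), one bin.
p_tail = poisson.sf(n - 1, b)      # survival function gives P(N >= n)
z_tail = norm.isf(p_tail)          # one-sided p-value -> significance in sigma

# Standard asymptotic likelihood-ratio significance for a single counting
# experiment with known background (what a less crude analysis would quote).
z_llr = np.sqrt(2.0 * (n * np.log(n / b) - (n - b)))

print(f"Poisson tail:     p = {p_tail:.2e}, Z = {z_tail:.2f} sigma")
print(f"Likelihood ratio: Z = {z_llr:.2f} sigma")
```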
@mfb: regarding the photon background: we have 8 TeV results that confirmed the Higgs in great consistency with the SM. If there were a systematic error in estimating the background, wouldn't it mean that in the 8 TeV run we would also have seen fewer background events, and so a higher significance (by the same 1-2 sigma?) than in the channels not affected by the error? Yet nothing like this was reported, afaik. So how can it be that we only see it now, if the same approach was used to model the background in both experiments?
As recently as LHCP in June, CERN was advising that any discovery-level announcement 'has to be' made at an on-site seminar, i.e. at CERN. However, there isn't time to organize one of those before ICHEP. Maybe they have changed their policy in the past month and will surprise us, but sadly that seems less likely than the other option, which is that nothing has been discovered. Still, there is always the possibility that they have one or more signals that are 'tantalizing' but below the discovery threshold. That seems unlikely in the particular case of 750 GeV though, given the scale-up factor in 13 TeV data collection since March, when the last public discussion took place.
I think by now everyone *knows* 750 is dead... overheard several times from various members of the collaboration(s), RIP. Long live ambulance chasing! :)
I think Anonymous (23 July 2016 at 07:50) is missing the main point.
> ATLAS and CMS fitted functional forms to the whole diphoton spectrum (with no signal regions, sidebands or extrapolation), once without a signal, and once with a signal.
The issue seems to be that the parameters of the functional form are determined where most of the data is (at low diphoton masses). Can ATLAS and CMS really claim to capture potential mis-modeling of fakes or higher-order effects at large diphoton masses (= where the signal is)? The paper shows that there are issues.
Or look at the binned purity of the diphoton sample. How can ATLAS and CMS claim to have such small errors at high diphoton masses? Did you see how much the errors (and the central values, in the case of CMS) changed going from conf note to preprint?
> Even with the limited publicly available data, they could have done much more.
Of course! But the paper seems to be making a physics point. It would be really interesting to discuss this rather than to nitpick on the statistics crap.
@Anonymous 23 July 2016 at 07:50: Interesting points about the statistical analysis; I hadn't checked that before. Well... we'll know more in 1.5 weeks.
@RBS: The Higgs is at a much lower mass and comes with a huge number of events. The experiments just fit a polynomial to it - something that doesn't work at 750 GeV (maybe with >100/fb, but not this year). Also, you could not detect an unaccounted-for 1 sigma deviation with just two data points (ATLAS, CMS).
@RBS: Even without any inside knowledge, it is quite clear that fitting a background at 125 GeV is a different problem than at 750 GeV. The first can be done by simply fitting below and above the hump. Doing that for the latter could be an issue, because it is more of an extrapolation: there is not much data above the signal region.
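A toy illustration of that point (entirely my own; not the experiments' actual functional form, binning or dataset, and all numbers are made up): generate a steeply falling spectrum, fit one smooth shape to the whole range once without and once with a bump near 750 GeV, and print the counts above the bump to see how little data constrains the high-mass end.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

# Toy "diphoton-like" spectrum: 40 GeV bins from 200 to 1600 GeV,
# falling like an invented power law (not the real parametrization).
edges = np.linspace(200.0, 1600.0, 36)
m = 0.5 * (edges[:-1] + edges[1:])
counts = rng.poisson(2e4 * (m / 200.0) ** -6.0).astype(float)

def bkg(m, A, a):                       # smooth background shape
    return A * (m / 200.0) ** (-a)

def bkg_plus_sig(m, A, a, s):           # same shape plus a bump at 750 GeV
    return bkg(m, A, a) + s * np.exp(-0.5 * ((m - 750.0) / 40.0) ** 2)

w = np.sqrt(np.maximum(counts, 1.0))    # rough per-bin errors
p_b, _  = curve_fit(bkg, m, counts, p0=[2e4, 6.0], sigma=w)
p_sb, _ = curve_fit(bkg_plus_sig, m, counts, p0=[2e4, 6.0, 5.0], sigma=w)

print("counts per bin above 800 GeV:", counts[m > 800.0])
print("background-only fit:   A=%.0f  a=%.2f" % tuple(p_b))
print("background+signal fit: A=%.0f  a=%.2f  s=%.1f" % tuple(p_sb))
```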
@anonymous: As far as I understand, there could be fake signals from jets misidentified as photons; this has nothing to do with how sophisticated your statistical analysis is, has it? If the purity of the diphoton sample decreases at higher energies, your background would be higher than what is extrapolated from lower energies. There seems to be not enough data about the purity to exclude this possibility.
Whether this lowers the significances by 0.5, 1 or 2 sigma, who cares? It lowers the significances, that is the message for me!
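Roughly quantified (again with invented counts, assuming scipy): if the expected background in the excess window has to be revised upward, for instance because the photon purity at high mass is lower than assumed, the same observed count becomes noticeably less significant.

```python
from scipy.stats import poisson, norm

n_obs = 14                     # invented observed count in the excess window
for b in (6.0, 7.5, 9.0):      # nominal vs. purity-corrected background guesses
    p = poisson.sf(n_obs - 1, b)          # P(N >= n_obs | background b)
    print(f"b = {b:.1f}: p = {p:.2e}, Z = {norm.isf(p):.2f} sigma")
```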
http://indico.cern.ch/event/432527/sessions/95227/#20160805
There are a few theory talks about the diphoton excess at ICHEP scheduled after the first ATLAS and CMS talks, including one from Jack Gunion. Are we supposed to believe that these people will talk about something that's been pronounced dead just 10 minutes before their talks? :)
I guess there is something, but not enough to claim a discovery. We will likely have to wait a while longer.
Looks like 750 GeV is dead. https://cds.cern.ch/record/2205245
I really had high hopes for this, ah well.
Don't give up, people.
Sooo...who do you think will lead the Olympic medal count?
From the November 1974 revolution with love ;-)
ReplyDelete"During the next one year, more than seven hundred papers were written related to th{e J/Psi} discovery which was a record in physics (if not in entire science) at that time. Subsequently, this record was broken after the discovery of high Tc super-conductivity" (arxiv.org/abs/hep-ph/9910468)
"Our whole problem is to make the mistakes as fast as possible- my part- and recognize them -your part" (John Archibald Wheeler in "A septet of Sibyls: aids in the search for truth")
Thank you, Jester, for helping the BSM community to do both with such a nice spirit (o_~)