Friday, 23 September 2011

The Phantom of OPERA

Those working in science are accustomed to receiving emails starting with "dear sir/madam, please look at the attached file where I'm proving einstein theory wrong". This time it's a tad more serious, because the message comes from a genuine scientific collaboration... As everyone knows by now, the OPERA collaboration announced that muon neutrinos produced at CERN arrive at a detector 730 kilometers away in Gran Sasso about 60 nanoseconds earlier than expected if they traveled at the speed of light (incidentally, trains traveling the same route always arrive late). The paper is available on arXiv, and the video from the CERN seminar is here.

OPERA is an experiment that has had some bad luck in the past. Its original goal was to study neutrino oscillations by detecting the appearance of tau neutrinos in a beam of muon neutrinos. However, due to construction delays its results arrived too late to have any impact on measuring the neutrino masses and mixings; other experiments have in the meantime achieved a much better sensitivity to these parameters. Moreover, the "atmospheric" neutrino mass difference, which enters the probability of a muon neutrino oscillating into a tau one, turned out to be at the lower end of the window allowed when OPERA was being planned. As a consequence, a fairly small number of oscillation events is predicted to occur on the way to Italy, leading to the expectation of about 1-2 tau events recorded during the experiment's lifetime (they were lucky to already get 1). However, they will not walk off the stage quietly. What was meant to be a little side analysis returned the result that neutrinos travel faster than light, confounding the physics community and wreaking havoc in the mainstream media.

I'm not very original in thinking that the result is almost certainly wrong. The main experimental reason, already discussed on blogs, is the observation of neutrinos from the supernova SN1987A. Back in 1987, three different experiments detected a burst of neutrinos, all arriving within 15 seconds of each other and 2-3 hours before the visible light (in agreement with models of supernova explosions). On the other hand, if neutrinos traveled as fast as OPERA claims, they should have arrived years earlier. Note that the objection that OPERA deals with muon neutrinos while supernovae produce electron ones is not valid: electron neutrinos have plenty of time to oscillate into other flavors on the way from the Large Magellanic Cloud. One way to reconcile OPERA with SN1987A would be to invoke a strong energy dependence of the neutrino speed (steeper than Energy^2), since the detected supernova neutrinos are in the 5-40 MeV range, while the energy of the CERN-to-Gran-Sasso beam is 20 GeV on average. However, OPERA does not observe any significant energy dependence of the neutrino speed, so that explanation is unlikely as well.
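To see how severe the tension is, here is a back-of-the-envelope sketch (my own round numbers for the supernova delay and distance, not precise values from either measurement):

```python
# A rough comparison of the OPERA anomaly with the SN1987A bound
# (a sketch; the 3-hour delay and the ~168,000 light-year distance
# are round numbers).

C = 3.0e8                        # speed of light, m/s
YEAR = 3.156e7                   # seconds per year

# OPERA: neutrinos 60 ns early over the ~730 km CERN-Gran Sasso baseline
baseline = 730e3                 # m
light_time = baseline / C        # ~2.4 ms at the speed of light
opera_dv = 60e-9 / light_time    # implied (v - c)/c

# SN1987A: neutrinos arrived within ~3 hours of the light after roughly
# 168,000 years of travel, which bounds |v - c|/c by the time ratio
sn_travel_time = 168_000 * YEAR  # s
sn_dv_bound = 3 * 3600 / sn_travel_time

print(f"OPERA implies  (v-c)/c ~ {opera_dv:.1e}")    # ~2.5e-5
print(f"SN1987A bound: |v-c|/c < {sn_dv_bound:.1e}") # ~2e-9
```

Any energy dependence of the neutrino speed would thus have to suppress the effect by roughly four orders of magnitude between the ~20 GeV beam and the tens-of-MeV supernova neutrinos.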

From the point of view of theory, the chances of the OPERA result being true are no better, as there is no sensible model of tachyonic neutrinos. At the same time, we have been observing neutrinos in numerous experiments and in various different settings, for example in beta decay, from terrestrial nuclear reactors, from the Sun, and in colliders as missing energy. Each time they seem to behave like ordinary fermions obeying all the rules of local Lorentz-invariant quantum field theory.

We should weigh this evidence against the analysis of OPERA, which does not appear rock solid. Recall that OPERA was conceived to observe tau neutrino appearance, not to measure the neutrino speed, and indeed there are certain aspects of the experimental set-up that call for caution. The most worrying is the fact that OPERA has no way to know the precise production time of a neutrino it detects, as it could be produced at any point during the 10-microsecond-long proton pulse that creates the neutrinos at CERN. To get around this problem they need a statistical approach. Namely, they measure the time delay of the neutrino arrival in Gran Sasso with respect to the start of the proton pulse at CERN. Then they fit the time distribution to templates based on the measured shape of the proton pulse, assuming various hypotheses about the neutrino travel time. In this manner they find that the best fit corresponds to a travel time 60 nanoseconds shorter than what one would expect if the neutrinos traveled at the speed of light. However, one could easily imagine that the systematic errors of this procedure have been underestimated, for example if the shape of the rise and fall-off of the proton pulse has been inaccurately measured. OPERA does a very good job arguing that the distance from CERN to Gran Sasso can be determined to 20 cm precision, and that synchronizing the clocks in the two labs is possible to 1 nanosecond precision, but the systematic uncertainties on the shape of the proton pulse are not carefully addressed (and, during the seminar at CERN, the questions concerning this issue were the ones that confounded the speaker the most).
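To illustrate the idea behind the template fit, here is a toy version (entirely my own sketch: the trapezoidal pulse shape and the grid scan are invented stand-ins for the measured waveform and the real maximum-likelihood machinery):

```python
# Toy version of OPERA's statistical extraction of the travel time:
# individual production times are unknown, but their distribution follows
# the proton-pulse shape, so a common shift of the arrival-time
# distribution relative to the pulse template can be fitted.
import math
import random

random.seed(1)

PULSE = 10_500.0   # ns, approximate CNGS extraction length
TRUE_SHIFT = 60.0  # ns, the offset we try to recover

def pulse_pdf(t):
    """Toy pulse shape: flat top with 500 ns linear rise and fall."""
    edge = 500.0
    if t < 0 or t > PULSE:
        return 0.0
    if t < edge:
        return t / edge
    if t > PULSE - edge:
        return (PULSE - t) / edge
    return 1.0

def sample_pulse():
    """Draw a production time from the pulse shape by rejection sampling."""
    while True:
        t = random.uniform(0, PULSE)
        if random.random() < pulse_pdf(t):
            return t

# Simulated arrival times: production time plus a common shift
arrivals = [sample_pulse() + TRUE_SHIFT for _ in range(20_000)]

def log_likelihood(shift):
    # Small floor avoids log(0) outside the template's support
    return sum(math.log(pulse_pdf(t - shift) + 1e-12) for t in arrivals)

# Scan candidate shifts and keep the maximum-likelihood one
best = max(range(0, 121, 5), key=log_likelihood)
print(f"fitted shift: {best} ns")  # lands near the true 60 ns
```

Note that even in this toy all the timing information comes from the rising and falling edges of the pulse, which is exactly why an inaccurately measured edge shape could bias the result.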

So what's next? Fortunately, OPERA appears to be open to discussion and scrutiny, so the issue of systematic uncertainties should be resolved in the near future. Simultaneously, the MINOS collaboration should be able to repeat the measurement with similar if not better precision, and I'm sure they're already sharpening their screwdrivers. On a longer timescale, OPERA could try to optimize the experimental setup for the velocity measurement. For example, they might install a near detector at the CERN site (where there should be no effect if the current observation is due to neutrinos traveling faster than light, or a similar effect if there is an unaccounted-for systematic error in the production time). Or they could use shorter proton pulses, so that the neutrino production time can be determined without statistical gymnastics (it appears feasible: the LHC currently works with bunches about 5 ns long). I bet, my private confidence level being 6 sigma, that future checks will demonstrate that neutrinos are not superluminal... in the end, the character from the original book turned out to be 100% human. But, of course, the ultimate verdict belongs not to our preconceptions but to experiment.


Kea said...

The OPERA energy dependence is all in the GeV range, so the supernovae results may well be irrelevant here. From their paper, it looks like the two point null energy dependence test is used as a consistency test for systematic errors.

Kea said...

Moreover, a (large scale) energy dependence for neutrino properties would not be unexpected.

Cyberax said...

Well, since we're moving into faster-than-light territory, there might as well be a dependence on the distance between observer and emitter.

Maybe as some kind of causality preservation law.

Anonymous said...

I give this latest media event one week - tops.

Then the researchers will say: "we warned you".

The media will move on to the next false-positive fad.

Those pumping up the hype will rationalize their bias, or go silent, having learned nothing.

Pseudo-reality does indeed prevail.

Ralph said...

In the arXiv preprint they talk about the time structure of the proton pulse, but as far as I can see not about the spatial shape.

If the start of the pulse is slightly better focused than the end of the pulse, that would bias the result.

My bet is that something (e.g., the pulsed kicker magnet extracting from the SPS, or internal interactions in the proton beam) is causing the end of the pulse to get frayed....

wolfgang said...

I think your blog post is one of the better summaries of the OPERA result so far.
But I begin to wonder why several previous experiments (including MINOS) had this bias towards m^2 < 0, even if it was not really statistically significant.

wolfgang said...

By the way, I am surprised nobody has mentioned the Scharnhorst effect yet.
Actually neutrinos should travel slightly faster than photons according to standard QED.
But I think it would be too small to explain the OPERA result.

Heather Logan said...

Wolfgang, I checked out your link and the answer is no: ordinary quantum electrodynamics certainly does NOT predict that photons should travel slightly slower than neutrinos due to vacuum polarization effects. Electromagnetic gauge invariance ensures that the photon remains massless after radiative corrections (Ward identity), and the special-relativity-based structure of QED ensures that massless particles travel at the speed of light. Neutrinos, having a nonzero mass, can be stopped, and then boosted using a standard Lorentz boost to whatever energy you like---being massive particles, special relativity dictates that they will then be travelling slower than the speed of light.

If the OPERA result is correct, ordinary special-relativity-based field theory would need tweaking.

wolfgang said...

>> gauge invariance ensures that the photon remains massless

I did not say the photon acquires a mass.
But if the Scharnhorst effect is correct (and the several papers about it have never been shown to be incorrect as far as I know) then the 'true' or 'bare' speed of light is slightly higher than the speed of light in normal vacuum.

wolfgang said...

sorry for posting twice, but I forgot to mention this "...then the 'true' or 'bare' speed of light must be slightly higher than the speed of light in normal vacuum otherwise one would get acausality using Casimir plates".

Anonymous said...

The quoted 988 ns travel time cannot be right: just enough for about 300 m, it is certainly too little for the CERN-GS distance. Perhaps a typo, or some kind of offset against a reference time?

Jester said...

Thx, silly me, the 988 ns they quote is indeed the delay with respect to some reference value (the result of a less precise blind analysis from 2006, if I understand correctly). Corrected.

Brian Dorney said...

You mentioned the fact that they could use shorter proton pulses. I agree with this sentiment.

However, you stated that the LHC currently operates at 5 ns bunch crossings. This is incorrect; we currently operate with 50 ns bunch crossings.

Additionally, the neutrino beamline used in the OPERA experiment was created from a proton beam injected from the SPS not the LHC.

CERN's accelerator complex is not currently equipped to take a proton beam from the LHC Ring for use by the OPERA experiment.

Tony Smith said...

Ralph said "... If the start of the pulse is slightly better focused than the end of the pulse, that would bias the result.
My bet is that something (e.g., the pulsed kicker magnet extracting from the SPS, or internal interactions in the proton beam) is causing the end of the pulse to get frayed. ...".

From screenshots from the OPERA video that I put on the web, it seems that the proton PDF red-line translation that gives the 60.7 ns delta t is primarily needed for the trailing edge (end of pulse) rather than the leading edge, which seems to confirm what Ralph said.

I would like to see 1-sigma and 2-sigma Brazil band uncertainty bars for the red line.


Anonymous said...

You said "there is no sensible model of tachyonic neutrinos;" however, a proposal of tachyonic neutrinos was made in 1985 by Chodos et al. Moreover, oscillation signals of the type seen in MiniBooNE and all other compelling evidence of neutrino oscillations are consistent with models based on Lorentz invariance violation, which also allow tachyonic neutrinos.

E Shumard said...

What are the vertical error bars on the number of observed neutrino events? Isn't this just counting? And why are the error bars bigger for a larger number of events? Presumably this is not the number of observed events but the estimated number of events accounting for the dead time of the detector after each event or the chance that multiple events are occurring so close in time that they are not cleanly separated? Could these effects introduce a bias? It looks like the average event rate in the pulse is on the order of one per ns.

The neutrino events are binned in 150 ns chunks and the anomaly is 60 ns, about 1/2 of a bin. That can't be it, right? Is there a fitting bias on the pulse edges when a bin should be weighted away from the center but isn't?

E Shumard said...

The error bars are just +/- sqrt(number of events), i.e., just indicating the expected sampling error.

Cedric said...

@Brian: you should distinguish between the bunch spacing in the LHC (now 50 ns, as you said) and the bunch length, which is different.

To clarify about the LHC and SPS: the LHC beam structure is created in the PS, so the LHC beam, accelerated from 26 GeV/c to 450 GeV/c in the SPS, could in principle be extracted at 400 GeV/c from the SPS to the OPERA target.

But these beams are really different in terms of intensity: the CNGS beam is the highest-intensity beam produced at CERN, while the LHC beam has the highest brightness (the ratio between intensity and size).

Jester said...

Right, the LHC bunch spacing is 50 ns (going down to 25 ns in the future), but the bunch itself is much shorter, 5 ns if I'm not mistaken. I'm aware there is no button one can press to extract 5 ns pulses from the SPS and send them to Gran Sasso. I just meant that creating short pulses is feasible and desirable in the context of measuring the neutrino speed. The reason OPERA is using 10 μs pulses is that their priority is (was?) intensity, so as to maximize the chances of seeing one more stupid tau neutrino appearance.

Andrew Oh-Willeke said...

The speed of the neutrino should be on the order of (1-10^-19)c at these energies and, according to my done-in-my-head-while-sitting-in-the-bath calculation, on the order of (1-10^-6)c for the 1987 supernova (although we know much less about what should be produced at what stage of that event with any great accuracy). To the level of precision of the OPERA experiment, those values are indistinguishable from c, even though they are slightly below it. There should be no energy dependence of the neutrino travel time at that level of precision in the OPERA data if the neutrino isn't a tachyon.
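The relativistic expectation quoted above follows from the standard ultrarelativistic expansion 1 - v/c ≈ m²c⁴/(2E²); a quick sketch (the 2 eV mass is an assumed upper bound, not a measurement, and the exact orders of magnitude depend on it):

```python
# How far below c a massive neutrino should travel (a sketch; the
# neutrino mass is unknown, so m_ev = 2 eV is an assumed upper bound).
def speed_deficit(m_ev, e_ev):
    """1 - v/c for an ultrarelativistic particle: m^2 c^4 / (2 E^2)."""
    return 0.5 * (m_ev / e_ev) ** 2

opera = speed_deficit(m_ev=2.0, e_ev=17e9)  # ~17 GeV CNGS beam energy
sn87a = speed_deficit(m_ev=2.0, e_ev=10e6)  # ~10 MeV supernova neutrinos

print(f"OPERA beam: 1 - v/c ~ {opera:.0e}")  # ~7e-21
print(f"SN1987A:    1 - v/c ~ {sn87a:.0e}")  # ~2e-14
```

Whatever mass one assumes below the eV-scale bounds, the deficit is many orders of magnitude smaller than OPERA's quoted (v-c)/c of about 2.5×10^-5, so no energy dependence should be visible at their precision.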

Kea said...

I have decided that the supernova results are irrelevant, because we do not know the initial delta t (c. 3 hours) IN OUR FRAME.

Ulla said...

The photon interacts more than a neutrino does. What happened to the photons from SN 1987A? This must be an anisotropic filtering, so the absolute speed is that of neutrinos, not photons. Note, time is energetic.

If we adjust according to this, what happens to the expanding universe? Would the practical effect be the same as a varying c?

Light/radiation is a SIGNAL, not the source. said...

Superluminal neutrinos and many other anomalies not taken seriously by the mainstream hitherto can be understood easily in the TGD framework, where sub-manifold geometry replaces abstract manifold geometry, so that one must distinguish between the maximal signal velocity along a given space-time sheet and in the imbedding space M^4xCP_2.

The latter gives absolute upper bound for the former. There is no need to break causality. The effect could be studied also for other relativistic particles, say electrons.

Even electric circuits could be used; see the old strange results discussed in this posting and its predecessor. The model can be modified so that it applies in braney M-theory, and probably the representatives of the M-theory hegemony will "discover" the explanation within a few days.

Anonymous said...

Several people have mentioned that geodesic distance and linear distance are not the same thing.

If the OPERA people used the longer geodesic distance as the relevant distance, and the neutrinos took the shorter linear distance because they are quite a bit more clever than the particle physicists, then the anomaly is conveniently vaporized.

Has OPERA defined the distance it used as the reference distance?


Arun said...

Wolfgang, as per the link you provided, the correction to the speed of light induced by Casimir plates goes like a^-4 where a is their separation. So in the vacuum, where a -> infinity, the effect is irrelevant.

chimpanzee said...

Hi Adam, I found your CERN contact page, but when I click on "email"..nothing happens. Can you contact me by clicking on my ID? Need to ask you Physics questions, see below.

I'm in touch with 3 OPERA co-authors about some potential (non-obvious) systematic errors, along with an FPGA specialist (Caltech PhD). The TT (Target Tracker)/FPGA uses a PLL-controlled clock (coarse & fine), "driven" by an external 20 MHz master clock. The FPGA "trigger delay" has been qualified with "quantization noise". However, I don't see QN mentioned elsewhere in the signal chain (red flag?). It's a well-known phenomenon in my field (Computer Graphics, Digital Signal Processing, etc.) that cumulative effects of roundoff/truncation (contributing to QN) can create SPECTACULAR errors, i.e., hidden surfaces becoming VISIBLE!!

My expertise is in Statistical Models (emphasis on rigorous mathematical foundation). Matt Strassler has brought up the "quadrature" method of sigma combine (where signals are combined), as a potential problem. It's based on the famous sum of independent Gaussian RVs (random variables), which may not apply (!)

Another problem is software/firmware used in micro-controllers (FPGA). They can be quirky/imperfect, & foul things up.

All the above can add to the uncertainty ("sigma"), which can foul up the measurement of Delta-T (v_c - v_neutrino), which their MLE (Maximum Likelihood Estimator) is trying to determine. Tentatively, a 60 nsec, 6 sigma result.

E Shumard said...

From this paper:

OPERA saw 319 events for 8 × 10^17 p.o.t. (protons on target), and the design number of p.o.t. per pulse is 2.4 × 10^13. This gives about 10^-2 events per pulse. This was running the accelerator at 55% and 70% of design intensity, so the number of events per pulse should be higher when running at design intensity.

10^-2 events per pulse means that about 1% of triggers will have a second neutrino event. If the detector is blind during the second event (readout of electronics, sensor recovery, etc.) then it will be missed. This produces a slight selection bias for earlier events. If the dead time is greater than the pulse width, the average recorded event time will be biased earlier by something like 1/2 × events per pulse × pulse width = 50 ns for 10^-2 events per pulse.

Brian Dorney said...


My mistake, I misread the context of the OP's statement!

Thank you for pointing this out and correcting my false impression.

Andrew Oh-Willeke said...

There is some literature on Lorentz breaking by neutrinos prior to the OPERA results, but that paper, written in the context of the larger issue of neutrino condensates, didn't quantify the predicted effect to the point of a numerical phenomenological prediction.

Anonymous said...

Kea is wrong as usual.

Chris Austin said...

@E Shumard, that's an extremely interesting point, that I hope the experimenters will look into.

To put it another way, you're pointing out that for about a fraction 0.01 of the triggers, there will be a second neutrino event from the same pulse, after the one the detector triggered on, so on average, in the second half of the pulse. If the detector fails to trigger on that second neutrino event, the average arrival time will be biased earlier by about 0.01 times half the 10 microsecond pulse length, which is 50 ns.

Chris Austin said...

@E Shumard, From page 8 of arXiv:1109.4897, the analysis used 16111 events corresponding to about 10^{20} protons on target. So for the design intensity of 2.4 x 10^{13} p.o.t. per extraction, they had about 0.004 events per extraction. That would bias the arrival time earlier by about 20 ns, if they failed to detect second events in the (0.004)^2 of extractions where they occur.

Chris Austin said...

@E Shumard, I think the bias would be a quarter of the pulse length times the number of events per pulse, because the average position of the second event is 3/4 of the way through the pulse instead of 1/2 the way through. So for 0.004 events per extraction it would be 10 ns.

For 2-event pulses where the first event is a fraction x of the way through the pulse, the average position of the second event is (1/2)(1 + x). Integrating this w.r.t. x from 0 to 1 gives 3/4.

Anonymous said...

Finally one right theory paper comes out 1109.6562

Chris Austin said...

@E Shumard, Sorry, your factor of 1/2 was correct, because for 2-event pulses, the average position of the first event is 1/4 of the way through the pulse instead of 1/2 the way through. So if they only observe the first event, the bias is 1/4 from the event they observe, plus another 1/4 from the unobserved second event.

Chris Austin said...

@E Shumard, Sorry again, I think the bias would be a third of the pulse length times the number of events per pulse. Thus 13 ns for 0.004 events per extraction.

For a 2-event pulse, the average position of the first event, in units of the pulse length, is \int_0^1 dy \int_0^1 dx min(x,y) = 2 \int_0^1 dy \int_0^y dx x = 1/3, and the average position of the second event is \int_0^1 dy \int_0^1 dx max(x,y) = 2 \int_0^1 dy \int_0^y dx y = 2/3.

So if they only observe the first event, the bias is 1/6 from the event they observe, plus another 1/6 from the unobserved second event.
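The order statistics in these integrals are easy to check numerically (a quick Monte Carlo sketch, not part of the thread):

```python
# Monte Carlo check: for a pulse containing exactly two events at
# independent uniform random times, the earlier one sits on average 1/3
# of the way through the pulse and the later one 2/3 of the way through.
import random

random.seed(0)

N = 200_000
sum_first = sum_second = 0.0
for _ in range(N):
    x, y = random.random(), random.random()
    sum_first += min(x, y)
    sum_second += max(x, y)

mean_first = sum_first / N
mean_second = sum_second / N
print(f"mean position of first event:  {mean_first:.3f}")   # ~0.333
print(f"mean position of second event: {mean_second:.3f}")  # ~0.667
```

This agrees with the 1/3 and 2/3 expectations computed from the integrals, supporting the one-third-of-a-pulse-length estimate of the bias.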

@Jester, Suddenly it's night time on your blog! The new dark background makes it difficult to read.

Chris Austin said...

Sorry Jester, now I understand we're in black for the Tevatron's funeral.

MichelT said...

I agree with anonymous that
1109.6562 says it all.

Anonymous said...

Stupid question about 1109.6562 (Cohen-Glashow): what's the problem if the superluminal neutrino beam is shifted in energy below ~12 GeV?
Is this energy below some OPERA sensitivity (an energy threshold of 12 GeV)? Or what?