Monday, December 4, 2017

Cosmic rays are as mischievous as the Monkey King / 宇宙射线像孙悟空一样恶作剧

A tribute to current Chinese science and technology, and a wink to the Chinese literary classic Journey to the West

Thanks to the Chinese satellite Wukong (aka Monkey King), also known as ...
The DArk Matter Particle Explorer (DAMPE), a high-energy cosmic-ray and γ-ray detector in space, has recently reported a new measurement of the total electron plus positron flux between 25 GeV and 4.6 TeV. A spectral softening at ∼0.9 TeV and a tentative peak at ∼1.4 TeV have been reported. We study the physical implications of the DAMPE data in this work... Both astrophysical models and exotic DM annihilation/decay scenarios are examined. Our findings are summarized as follows.
The spectral softening at ∼0.9 TeV suggests a cutoff (or break) of the background electron spectrum, which is expected to be due either to the discreteness of cosmic-ray (CR) source distributions in both space and time, or to the maximum energies of electron acceleration at the sources. Compared with pre-DAMPE data, the DAMPE data enable a much improved determination of the cutoff energy of the background electron spectrum, which is about 3 TeV assuming an exponential form.
Both the annihilation and decay scenarios of the simplified DM models invoked to account for the sub-TeV electron/positron excesses are severely constrained by the CMB and/or γ-ray observations. Additional tuning of such models, e.g. through velocity-dependent annihilation, is required to reconcile them with those constraints.
The tentative peak at ∼1.4 TeV suggested by DAMPE implies that the sources should be close enough to the Earth (≲0.3 kpc) and inject nearly monochromatic electrons into the Galaxy. We find that the cold and ultra-relativistic e⁺e⁻ wind from pulsars is a possible source of such a structure. Our analysis further shows that the pulsar should be middle-aged, relatively slowly rotating, mildly magnetized, and isolated in a density cavity (e.g., the Local Bubble).
• An alternative explanation of the peak is DM annihilation in a nearby clump or a locally density-enhanced region. The distance of the clump or the size of the overdensity region needs to be ≲0.3 kpc. The required parameters of the DM clump or overdensity are relatively extreme compared with those of numerical simulations, if the annihilation cross section is assumed to be 3×10⁻²⁶ cm³ s⁻¹. Specifically, a DM clump as massive as 10⁷−10⁸ M☉, or a local density enhancement of 17−35 times the canonical local density, is required to fit the data if the annihilation product is an e⁺e⁻ pair. A moderate enhancement of the annihilation cross section would help relax the tension between the model requirements and N-body simulations of CDM structure formation. Both the DM clump model and the local density enhancement model are found to be consistent with the Fermi-LAT γ-ray observations.
The expected anisotropies from either the pulsar model or the DM clump model are consistent with the recent measurements by Fermi-LAT. Future observations, by e.g. CTA, will be able to detect such anisotropies and test the different models.
DAMPE will keep operating for a few more years. More precise measurements of the total e⁺+e⁻ spectrum, extending to higher energies, will become available in the near future. Whether there are more structures in the high-energy window, which could critically distinguish the pulsar model from the DM one, is particularly interesting. With more and more precise measurements, we expect to significantly improve our understanding of the origin of CR electrons.
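A quick way to see what the quoted ~3 TeV exponential cutoff does to the spectrum is to evaluate the suppression factor exp(−E/E_cut) at the energies mentioned above. This is only a toy sketch: the spectral index γ and normalization below are my own illustrative choices, not fitted values from the paper.

```python
import math

def electron_flux(E_TeV, gamma=3.0, E_cut_TeV=3.0, norm=1.0):
    """Toy background e- spectrum: power law with exponential cutoff.
    gamma and norm are illustrative assumptions; E_cut ~ 3 TeV is the
    value quoted in the text."""
    return norm * E_TeV ** (-gamma) * math.exp(-E_TeV / E_cut_TeV)

# Suppression relative to a pure power law at energies quoted in the text:
for E in (0.9, 1.4, 4.6):  # TeV
    suppression = math.exp(-E / 3.0)
    print(f"E = {E} TeV: flux reduced to {suppression:.2f} of the pure power law")
```

With E_cut = 3 TeV the exponential factor is still mild at 0.9 TeV (~0.74) and bites hard by 4.6 TeV (~0.22), which is why the 0.9 TeV softening is read as a break rather than as the cutoff itself.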

The total e⁺+e⁻ fluxes for a model with two nearby pulsars

Fluxes of the total e⁺+e⁻, from the sum of the continuous background and the DM annihilation from a nearby clump. This panel is for DM annihilation into all lepton flavors with universal couplings. Three distances of the clump, as labelled in the plot, are considered.


(Submitted on 29 Nov 2017)

update 12/06/2017:

If the spectral feature comes from dark matter, what can we learn about the latter from the former?

We performed a model-independent analysis of particle dark matter explanations of the peak in the DAMPE electron spectrum and whether they can simultaneously satisfy constraints from other DM searches. We assumed that the signal originated from DM annihilation in a nearby subhalo with an enhanced density of DM. To account for the inevitable energy loss, we assumed a DM mass of about 1.5 TeV, which is slightly greater than the location of the observed peak. Rather than working in a specific UV-complete model, we investigated all renormalizable interactions between SM leptons, DM of spin 0 and 1/2, and mediators of spin 0 and 1... 
We found that 10 of 20 possible combinations of operators are helicity- or velocity-suppressed and cannot explain the DAMPE signal. Of the remaining combinations, PandaX strongly constrains the unsuppressed scattering cross sections in three models, and LEP strongly constrains the mass of the mediator in the other seven. The remaining candidates are (1) a spin-0 mediator coupled to scalar DM, (2) a spin-0 pseudoscalar mediator coupled to fermionic DM, and (3) a spin-1 vector mediator coupled to Dirac DM. LEP constraints on four-fermion operators force the mediator mass to be heavy, ~2 TeV, in all of these scenarios.
(Submitted on 30 Nov 2017 (v1), last revised 5 Dec 2017 (this version, v2))


O cosmic rays! From where art thou?

Nearby sources may contribute to cosmic-ray electron (CRE) structures at high energies. Recently, the first DAMPE results on the CRE flux hinted at a narrow excess at energy ~1.4 TeV. We show that in general a spectral structure with a narrow width appears in two scenarios: I) "Spectrum broadening" for continuous sources with a delta-function-like injection spectrum. In this scenario, a finite width can develop after propagation through the Galaxy, which can reveal the distance of the source. Well-motivated sources include mini-spikes and subhalos formed by dark matter (DM) particles χ which annihilate directly into e+e- pairs. II) "Phase-space shrinking" for burst-like sources with a power-law-like injection spectrum. The spectrum after propagation can shrink at a cooling-related cutoff energy and form a sharp spectral peak. The peak can be made more prominent by the energy-dependent diffusion. In this scenario, the width of the excess constrains both the power index and the distance of the source. Possible such sources are pulsar wind nebulae (PWNe) and supernova remnants (SNRs). We analyze the DAMPE excess and find that the continuous DM sources should be fairly close, within ~0.3 kpc, and that the annihilation cross sections are close to the thermal value. For the burst-like source, the narrow width of the excess suggests that the injection spectrum must be hard, with a power index significantly less than two; the distance is within ~(3-4) kpc, and the age of the source is ~0.16 Myr. In both scenarios, large anisotropies in the CRE flux are predicted. We identify possible candidates for mini-spike (PWN) sources in the current Fermi-LAT 3FGL (ATNF) catalog. The diffuse gamma rays from these sources can be well below the Galactic diffuse gamma-ray backgrounds and less constrained by the Fermi-LAT data if they are located in low Galactic latitude regions...
The current experiments have entered the multi-TeV region where the CRE spectrum is unlikely to be smooth. We have proposed generic scenarios of the origins of the CRE structures and analysed the nature of sources responsible for the possible DAMPE excess. The predictions of these scenarios are highly testable in the near future with more accurate data.
(Submitted on 30 Nov 2017)
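The ~0.3 kpc distance and ~0.1–0.2 Myr timescales that keep appearing in these abstracts can be recovered with a back-of-the-envelope cooling/diffusion estimate. The loss coefficient b and the diffusion parameters D0, δ below are generic textbook-level values that I am assuming here; they are not the papers' fitted numbers.

```python
import math

# Order-of-magnitude propagation estimate for ~1.4 TeV electrons.
b = 1e-16          # synchrotron + inverse-Compton loss coefficient, GeV^-1 s^-1 (assumed)
E = 1400.0         # electron energy in GeV (the DAMPE peak)
t_cool = 1.0 / (b * E)            # cooling time t = 1/(b*E), in s
D0, delta = 1e28, 0.33            # diffusion D(E) = D0*(E/GeV)^delta, cm^2/s (assumed)
D = D0 * E ** delta
lam_cm = math.sqrt(4 * D * t_cool)  # typical diffusion length over a cooling time
kpc, Myr = 3.086e21, 3.156e13
print(f"cooling time ~ {t_cool / Myr:.2f} Myr")
print(f"diffusion length ~ {lam_cm / kpc:.2f} kpc")
```

With these inputs one gets a cooling time of roughly 0.2 Myr and a diffusion length of roughly half a kpc at 1.4 TeV, i.e. the right order of magnitude for the distance and age constraints quoted above.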

Monday, October 16, 2017

Which neutron star merger with gold-plated nuclear waste mushroom at 130 million light years?

[A new exciting, another boring usual] astrophysical event?


The discovery, announced Monday at a news conference and in scientific reports written by some 3,500 researchers, solves a long-standing mystery about the origin of these heavy elements — which are found in everything from wedding rings to cellphones to nuclear weapons. 
It's also a dramatic demonstration of how astrophysics is being transformed by humanity's newfound ability to detect gravitational waves, ripples in the fabric of space-time that are created when massive objects spin around each other and finally collide. 
"It's so beautiful. It's so beautiful it makes me want to cry. It's the fulfillment of dozens, hundreds, thousands of people's efforts, but it's also the fulfillment of an idea suddenly becoming real," says Peter Saulson of Syracuse University, who has spent more than three decades working on the detection of gravitational waves...
What all the images showed was a brand-new point of light that started out blueish and then faded to red. This didn't completely match what theorists thought colliding neutron stars should look like — but it was all close enough that Daniel Kasen, a theoretical astrophysicist at the University of California, Berkeley, found the whole experience a little weird. 
"Even though this was an event that had never been seen before in human history, what it looked like was deeply familiar because it resembled very closely the predictions we had been making," Kasen says. "Before these observations, what happened when two neutron stars merged was basically just a figment of theorists' imaginations and their computer simulations." 
He spent late nights watching the data come in and says the colliding stars spewed out a big cloud of debris. 
"That debris is strange stuff. It's gold and platinum, but it's mixed in with what you'd call just regular radioactive waste, and there's this big radioactive waste cloud that just starts mushrooming out from the merger site," Kasen says. "It starts out small, about the size of a small city, but it's moving so fast — a few tenths of the speed of light — that after a day it's a cloud the size of the solar system." 
According to his estimates, this neutron star collision produced around 200 Earth masses of pure gold, and maybe 500 Earth masses of platinum. "It's a ridiculously huge amount on human scales," Kasen says...
October 16, 2017, 10:01 AM ET
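Kasen's "size of the solar system after a day" remark is easy to check with the numbers given in the quote; I take "a few tenths of the speed of light" to mean 0.2c, which is my reading rather than a stated value.

```python
# Quick check of the "cloud the size of the solar system" estimate.
c = 2.998e8            # speed of light, m/s
v = 0.2 * c            # assumed ejecta speed, "a few tenths of c"
day = 86400.0          # seconds in a day
AU = 1.496e11          # astronomical unit, m
radius_au = v * day / AU
print(f"after one day the debris cloud radius is ~{radius_au:.0f} AU")
# ~35 AU, comparable to Neptune's ~30 AU orbit: indeed "the size of the solar system"
```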


LIGO, with the world’s first two gravitational observatories, detected the waves from two merging neutron stars, 130 million light years from Earth, on August 17th... VIRGO, with the third detector, allows scientists to triangulate and determine roughly where mergers have occurred. They saw only a very weak signal, but that was extremely important, because it told the scientists that the merger must have occurred in a small region of the sky where VIRGO has a relative blind spot...
The merger was detected for more than a full minute… to be compared with black holes whose mergers can be detected for less than a second. It’s not exactly clear yet what happened at the end, however! Did the merged neutron stars form a black hole or a neutron star? The jury is out.
If there’s anything disappointing about this news, it’s this: almost everything that was observed by all these different experiments was predicted in advance. Sometimes it’s more important and useful when some of your predictions fail completely, because then you realize how much you have to learn. Apparently our understanding of gravity, of neutron stars, of their mergers, and of all sorts of sources of electromagnetic radiation that are produced in those mergers, is even better than we might have thought. But fortunately there are a few new puzzles. The X-rays were late; the gamma rays were dim…



Thursday, August 31, 2017

Ondes gravitationnelles et résonances d'orages africains / Gravitational waves and African thunderstorm resonances: gravitational-wave signals and correlated noise from unexpectedly strong Schumann resonance transients

Raphaël Enthoven, Jacques Perry-Salkow

(The validation of) A great discovery requires a genuinely independent analysis of data


To date, the LIGO collaboration has detected three gravitational wave (GW) events appearing in both its Hanford and Livingston detectors. In this article we reexamine the LIGO data with regard to correlations between the two detectors. With special focus on GW150914, we report correlations in the detector noise which, at the time of the event, happen to be maximized for the same time lag as that found for the event itself. Specifically, we analyze correlations in the calibration lines in the vicinity of 35 Hz as well as the residual noise in the data after subtraction of the best-fit theoretical templates. The residual noise for the other two events, GW151226 and GW170104, exhibits similar behavior. A clear distinction between signal and noise therefore remains to be established in order to determine the contribution of gravitational waves to the detected signals.

(Submitted on 13 Jun 2017 (v1), last revised 9 Aug 2017 (this version, v2))


A debate about how to sift the astrophysical wheat from the terrestrial chaff


Recent claims in a preprint by Creswell et al. of puzzling correlations in LIGO data have broadened interest in understanding the publicly available LIGO data around the times of the detected gravitational-wave events. We see that the features presented in Creswell et al. arose from misunderstandings of public data products. The LIGO Scientific Collaboration and Virgo Collaboration (LVC) have full confidence in our published results, and we are preparing a paper in which we will provide more details about LIGO detector noise properties and the data analysis techniques used by the LVC to detect gravitational-wave signals and infer their waveforms.

News from LIGO Scientific Collaboration
undated (between 7 July and 1 August 2017)
In our view, if we are to conclude reliably that this signal is due to a genuine astrophysical event, apart from chance-correlations, there should be no correlation between the "residual" time records from LIGO's two detectors in Hanford and Livingston. The residual records are defined as the difference between the cleaned records and the best GW template found by LIGO. Residual records should thus be dominated by noise, and they should show no correlations between Hanford and Livingston. Our investigation revealed that these residuals are, in fact, strongly correlated. Moreover, the time delay for these correlations coincides with the 6.9 ms time delay found for the putative GW signal itself...
During a two-week period at the beginning of August, we had a number of "unofficial" seminars and informal discussions with colleagues participating in the LIGO collaboration... Given the media hype surrounding our recent publication, these meetings began with some measure of scepticism on both sides. The atmosphere improved dramatically as our meetings progressed. 
The focus of these meetings was on the detailed presentation and lively critical discussion of the data analysis methods adopted by the two groups. While there was unofficial agreement on a number of important topics - such as the desirability of better public access to LIGO data and codes - we emphasize that no consensus view emerged on fundamental issues related to data analysis and interpretation.
In view of unsubstantiated claims of errors in our calculations, we appreciated the opportunity to go through our respective codes together - line by line when necessary - until agreement was reached. This check did not lead to revisions in the results of calculations reported in versions 1 and 2 of arXiv:1706.04191 or in the version of our paper published in JCAP. It did result in changes to the codes used by our visitors.
There are a number of in-principle issues on which we disagree with LIGO's approach. Given the importance of LIGO's claims, we believe that it is essential to establish the correlation between Hanford and Livingston signals and to determine the shape of these signals without employing templates. Before such comparisons can be made, the quality of data cleaning (which necessarily includes the removal of non-Gaussian and non-stationary instrumental "foreground" effects) must be demonstrated by showing that the residuals consist only of uncorrelated Gaussian noise. We believe that suitable cleaning is a mandatory prerequisite for any meaningful comparisons with specific astrophysical models of GW events. This is why we are concerned, for example, about the pronounced "phase lock" in the LIGO data.
James Creswell, Sebastian von Hausegger, Andrew D. Jackson, Hao Liu, Pavel Naselsky
August 21, 2017


Disentangling the man-made detectors from the Earth-shaped one


As the LIGO detectors are extremely sensitive instruments, they are prone to many sources of noise that need to be identified and removed from the data. An impressive amount of effort was undertaken by the LIGO collaboration to ensure that the GW150914 signal was really the first detection of gravitational waves, with all transient noise backgrounds under good control [4, 5, 6].

It was claimed, however, in a recent publication [7] that the residual noise of the GW150914 event in LIGO’s two widely separated detectors exhibits correlations that are maximized for the same 7 ms time lag as that found for the gravitational-wave signal itself. Thus, questions about the integrity and reliability of the gravitational-wave detection were raised and informally discussed [8, 9]. At present it is not quite clear whether there is something unexplained in LIGO noise that may be of genuine interest. It was argued that, even assuming the claims of [7] about correlated noise are true, this would not affect the 5-sigma confidence associated with GW150914 [8]. Nevertheless, in this case it will be interesting to find out the origin of this correlated noise.
Correlated magnetic fields from Schumann resonances constitute a well known potential source of correlated noise in gravitational-wave detectors [11, 12, 13]... Schumann resonances are global electromagnetic resonances in the Earth-ionosphere cavity [14, 15]. Electromagnetic waves in the extremely low frequency (ELF) range (3 Hz to 3 kHz) are mostly confined in this spherical cavity, and their propagation is characterized by very low attenuation, which in the 5 Hz to 60 Hz frequency range is of the order of 0.5−1 dB/Mm. Schumann resonances are eigenfrequencies of the Earth-ionosphere cavity. They are constantly excited by lightning discharges around the globe. While individual lightning signals below 100 Hz are very weak, thanks to the very low attenuation the related ELF electromagnetic waves can propagate a number of times around the globe, constructively interfere for wavelengths comparable with the Earth’s circumference, and create standing waves in the cavity.

Note that there exists some day-night variation of the resonance frequencies, and some catastrophic events, like a nuclear explosion, simultaneously lower all the resonance frequencies by about 0.5 Hz due to a lowering of the effective ionosphere height [16]. Interestingly, a frequency decrease of comparable magnitude in the first Schumann resonance, caused by an extremely intense cosmic gamma-ray flare, was reported in [17]. Usually eight distinct Schumann resonances are reliably detected in the frequency range from 7 Hz to 52 Hz. However, five more were detected thanks to particularly intense lightning discharges, thus extending the frequency range up to 90 Hz [18].
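As an aside, the eigenfrequencies of an ideal, lossless Earth-ionosphere cavity are simple to compute; the real, lossy cavity resonates noticeably lower (about 7.83, 14.3 and 20.8 Hz for the first three modes). A minimal sketch:

```python
import math

# Eigenfrequencies of an ideal lossless spherical Earth-ionosphere cavity:
# f_n = c / (2*pi*a) * sqrt(n*(n+1)), with a the Earth radius.
c = 2.998e8        # speed of light, m/s
a = 6.371e6        # Earth radius, m
for n in range(1, 4):
    f = c / (2 * math.pi * a) * math.sqrt(n * (n + 1))
    print(f"mode {n}: ideal-cavity f = {f:.1f} Hz")
# The ideal values (~10.6, 18.3, 25.9 Hz) overshoot the observed ones
# because the real cavity is lossy (low quality factor).
```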

... For short-duration gravitational-wave transients, like the three gravitational-wave signals observed by LIGO, Schumann resonances are not considered significant noise sources, because the magnetic field amplitudes induced by even strong remote lightning strikes are usually of the order of a picotesla, too small to produce strong signals in the LIGO gravitational-wave channel [4].

Interestingly enough, the Schumann resonances make the Earth a natural gravitational-wave detector, albeit not a very sensitive one [20]. As the Earth is positively charged with respect to the ionosphere, a static electric field, the so-called fair-weather field, is present in the Earth-ionosphere cavity. In the presence of this background electric field, an infalling gravitational wave of suitable frequency resonantly excites the Schumann eigenmodes, most effectively the second Schumann resonance [20]. Unfortunately, it is not practical to turn the Earth into a gravitational-wave detector. Because of the weakness of the fair-weather field (about 100 V/m) and the low quality factor (from 2 to 6) of the Earth-ionosphere resonant cavity, the sensitivity of such a detector would be many orders of magnitude below that of modern gravitational-wave detectors.

However, a recent study of short-duration magnetic-field transients that were coincident in low-noise magnetometers in Poland and Colorado revealed about 2.3 coincident events per day in which the pulse amplitudes exceeded 200 pT, strong enough to induce a gravitational-wave-like signal in the LIGO gravitational-wave channel of the same amplitude as in the GW150914 event [21]...

The main sources of Schumann ELF waves are negative cloud-to-ground lightning discharges with a typical charge moment change of about 6 C·km. On Earth, storm cells, mostly in the tropics, generate about 50 such discharges per second.

The so-called Q-bursts are stronger, positive cloud-to-ground atmospheric discharges with charge moment changes of the order of 1000 C·km. ELF pulses excited by Q-bursts propagate around the world. At very large distances only the low-frequency components of an ELF pulse remain clearly visible, because the higher-frequency components suffer more attenuation than the lower-frequency ones...

In [22], Earth’s lightning hotspots are revealed in detail using 16 years of space-based Lightning Imaging Sensor observations. Information about the locations of these lightning hotspots allows us to calculate the time lags between arrivals of ELF transients from these locations at the LIGO-Livingston (latitude 30.563°, longitude −90.774°) and LIGO-Hanford (latitude 46.455°, longitude −119.408°) gravitational-wave detectors...

We have taken Earth’s lightning hotspots from [22] with lightning flash-rate densities above about 100 fl km⁻² yr⁻¹ and calculated the expected time lags between ELF transient arrivals from these locations at the LIGO detectors... Note that the observed group velocity for short ELF field transients depends on the upper frequency limit of the receiver [21]. For the magnetometers used in [21] this frequency limit was 300 Hz, corresponding to the quoted group velocity of about 0.88c. For the LIGO detectors the coupling of magnetic field to differential arm motion decreases by an order of magnitude at 30 Hz compared to 10 Hz [4]. Thus for the LIGO detectors, as ELF transient receivers, the more appropriate upper frequency limit is about 30 Hz, not 300 Hz. According to (2), low frequencies propagate with smaller velocities, 0.75c−0.8c. Therefore the inferred time lags in Table 1 might be underestimated by about 15%...

If strong lightning strikes and Q-bursts indeed contribute to the correlated noise of the LIGO detectors, then the distribution of lightning hotspots around the globe can lead to some regularities in this correlated noise. Namely, extremely-low-frequency transients due to lightning in Africa will be characterized by 5−7 ms time lags between the LIGO-Hanford and LIGO-Livingston detectors. Asian lightning leads to time lags of about the same magnitude but opposite sign. Lightning in North and South America should lead to positive time lags of about 11−13 ms, greater than the light propagation time between the LIGO-Hanford and LIGO-Livingston detectors.

(Submitted on 27 Jul 2017)
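The 5−7 ms African time lag quoted above can be reproduced from the detector coordinates given in the text. The Central-African source point below is an illustrative location of my own choosing, not one of the catalogued hotspots of [22], and 0.8c is the ELF group velocity suggested there.

```python
import math

def great_circle_km(lat1, lon1, lat2, lon2, R=6371.0):
    """Haversine great-circle distance in km."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    h = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(h))

# Detector coordinates as quoted in the text.
livingston = (30.563, -90.774)
hanford = (46.455, -119.408)
congo = (0.0, 25.0)            # assumed representative Central-African source point

v = 0.8 * 2.998e8              # ELF group velocity ~0.8c, m/s
d_L = great_circle_km(*congo, *livingston) * 1e3   # path to Livingston, m
d_H = great_circle_km(*congo, *hanford) * 1e3      # path to Hanford, m
lag_ms = (d_H - d_L) / v * 1e3
print(f"Hanford-Livingston arrival lag ~ {lag_ms:.1f} ms")
```

For this source point the lag comes out near the middle of the quoted 5−7 ms range; moving the source around the African lightning belt shifts it within that range.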

Wednesday, February 22, 2017

{Bohmian mechanics, is} [subtle, malicious] (?)

Here is my post, consisting as usual of quotes from scientific articles fully available online, underlining (or emphasizing in bold) selected parts in order to sketch a draft response to the question in its title. This time, I was mostly inspired by reading this post at another blog named Elliptic Composability.


Inconclusive Bohmian positions in the macroscopic way ...
Bohmian mechanics differs deeply from standard quantum mechanics. In particular, in Bohmian mechanics particles, here called Bohmian particles, follow continuous trajectories; hence in Bohmian mechanics there is a natural concept of time correlation for particles’ positions. This led M. Correggi and G. Morchio [1], and more recently Kiukas and Werner [2], to conclude that Bohmian mechanics “can’t violate any Bell inequality”, hence is disproved by experiments. However, the Bohmian community maintains its claim that Bohmian mechanics makes the same predictions as standard quantum mechanics (at least as long as only position measurements are considered), arguing that, at the end of the day, all measurements ultimately result in position measurements, e.g. pointer positions.
Here we clarify this debate. First, we recall why two-time position correlation is in tension with Bell inequality violation. Next, we show that this is actually not at odds with standard quantum mechanics, because of some subtleties. For this purpose we do not aim at full generality, but illustrate our point with an explicit and rather simple example based on a two-particle interferometer, partly already experimentally demonstrated and certainly entirely experimentally feasible (with photons, but also feasible, at the cost of additional technical complications, with massive particles). The subtleties are illustrated by explicitly coupling the particles to macroscopic systems, called pointers, that measure the particles’ positions. Finally, we raise questions about Bohmian positions, about macroscopic systems, and about the large difference in appreciation of Bohmian mechanics between the philosophers’ and physicists’ communities...
Part of the attraction of Bohmian mechanics then lies in the following assumption:
• Assumption H: Position measurements merely reveal in which (spatially separated and non-overlapping) mode the Bohmian particle actually is.
A Bohmian particle and its pilot wave arrive on a Beam-Splitter (BS) from the left in mode “in”. The pilot wave emerges in both modes 1 and 2, as does the quantum state in standard quantum theory. However, the Bohmian particle emerges either in mode 1 or in mode 2, depending on its precise initial position. As Bohmian trajectories can’t cross each other, if the initial position is in the lower half of mode “in”, then the Bohmian particle exits the BS in mode 1, else in mode 2.

Two Bohmian particles spread over 4 modes. The quantum state is entangled... hence the two particles are either in modes 1 and 4, or in modes 2 and 3. Alice applies a phase x on mode 1 and Bob a phase y on mode 4. Accordingly, after the two beam-splitters the correlations between the detectors allow Alice and Bob to violate the Bell inequality... Alice’s first “measurement”, with phase x, can be undone, because in Bohmian mechanics there is no collapse of the wavefunction. Hence, after having applied the phase −x after her second beam-splitter, Alice can perform a second “measurement” with phase x′.

... There is no doubt that according to Bohmian mechanics there is a well-defined joint probability distribution for Alice’s particle at two times and Bob’s particle: P(rA, r′A, rB|x, x′, y), where rA denotes Alice’s particle after the first beam-splitter and r′A after the third beam-splitter of {the last figure above}... But here comes the puzzle. According to Assumption H, if rA ∈ “1”, then any position measurement performed by Alice between the first and second beam-splitters would necessarily result in a=1. Similarly, rA ∈ “2” implies a=2. And so on: Alice’s position measurement after the third beam-splitter is determined by r′A, and Bob’s measurement by rB. Hence, it seems that one obtains a joint probability distribution for both of Alice’s measurement results and for Bob’s: P(a, a′, b|x, x′, y).
But such a joint probability distribution implies that Alice doesn’t have to make any choice (she merely makes both choices, one after the other), and in such a situation there can’t be any Bell inequality violation.
... Let’s have a closer look at the probability distribution that lies at the bottom of our puzzle: P(rA, r′A, rB|x, x′, y)... now comes the catch... as the Bohmian particles’ positions are assumed to be “hidden”... they have to be hidden in order to avoid signalling in Bohmian mechanics... it implies that Bohmian particles are postulated to exist “only” to immediately add that they are ultimately not fully accessible... Consequently, defining a joint probability for the measurement outcomes a, a′ and b in the natural way:
P(a, a′, b|x, x′, y) ≡ P(rA ∈ “a”, r′A ∈ “a′”, rB ∈ “b” | x, x′, y)   (10)
can be done mathematically, but can’t have a physical meaning, as P(a, a′, b|x, x′, y) would be signaling.
In summary, it is the identification (10) that confused the authors of [1, 2] and led them to wrongly conclude that Bohmian mechanics can’t predict violations of Bell inequalities in experiments involving only position measurements. Note that the identification (10) follows from the assumption H, hence assumption H is wrong. Every introduction to Bohmian mechanics should emphasize this. Indeed, assumption H is very natural and appealing, but wrong and confusing.

To elaborate on this, let’s add an explicit position measurement after the first beam-splitter on Alice’s side. The fact is that, both according to standard quantum theory and according to Bohmian mechanics, this position measurement perturbs the quantum state (hence the pilot wave) in such a way that the second measurement, labelled x′ in Fig. 4, no longer shares the correlation (9) with the first measurement; see [4, 5]...

From all we have seen so far, one should first of all recognize that Bohmian mechanics is deeply consistent and provides a nice and explicit existence proof of a deterministic nonlocal hidden-variables model. Moreover, the ontology of Bohmian mechanics is pretty straightforward: the set of Bohmian positions is the real stuff. This is especially attractive to philosophers. Understandably so. But what about physicists, mostly interested in research? What new physics has Bohmian mechanics taught us in the last 60 years? Here, I believe it fair to answer: not enough! Understandably disappointing...
This is unfortunate, because Bohmian mechanics could inspire courageous ideas to test quantum physics.


Probably surrealistic Bohm Trajectories in the microscopic world?

... we maintain that Bohmian Mechanics is not needed to have the Schrödinger equation "embedded into a physical theory". Standard quantum theory has already clarified the significance of Schrödinger's wave function as a tool used by theoreticians to arrive at probabilistic predictions. It is quite unnecessary, and indeed dangerous, to attribute any additional "real" meaning to the psi-function. The semantic difference between "inconsistent" and "surrealistic" is not the issue. It is the purpose of our paper to show clearly that the interpretation of the Bohm trajectory - as the real retrodicted history of the atom observed on the screen - is implausible, because this trajectory can be macroscopically at variance with the detected, actual way through the interferometer. And yes, we do have a framework to talk about path detection; it is based upon the local interaction of the atom with the photons inside a resonator, described by standard quantum theory with its short-range interactions only. Perhaps it is true that it is "generally conceded that ... [a measurement] ... requires a ... device which is more or less macroscopic," but our paper disproves this notion, because it clearly shows that one degree of freedom per detector is quite sufficient. That is the progress represented by the quantum-optical which-way detectors. And certainly, it is irrelevant for all practical purposes whether "somebody looks" or not; what matters only is that the which-way information is stored somewhere so that the path through the interferometer can be known, in principle.

Nowhere did we claim that BM makes predictions that differ from those of standard quantum mechanics. The whole point of the experimentum crucis is to demonstrate that one cannot attribute reality to the Böhm trajectories, where reality is meant in the phenomenological sense. One must not forget that physics is an experimental science dealing with phenomena. If the trajectories of BM have no relation to the phenomena, in particular to the detected path of the particle, then their reality remains metaphysical, just like the reality of the ether of Maxwellian electrodynamics. Of course, the "very existence" of the Böhm trajectory is a mathematical statement to which nobody objects. We do not deny the possibility that some imaginary parameters possess a "hidden reality" endowed with the assumed power of exerting "gespenstische Fernwirkungen" (Einstein). But a physical theory should carefully avoid such concepts of no phenomenological consequence.  
B.-G. Englert, M. O. Scully, G. Süssmann, and H. Walther
received October 12, 1993  

vendredi 30 décembre 2016

Hoping and believing are different things for physicists

The monster, the second sister and Cinderella : three Magi announcing the era of direct gravitational wave astrometry

On February 11 the LIGO-Virgo collaboration announced the detection of Gravitational Waves (GW). They were emitted about one billion years ago by a Binary Black Hole (BBH) merger and reached Earth on September 14, 2015. The claim, as it appears in the ‘discovery paper’ [1] and stressed in press releases and seminars, was based on “> 5.1 σ significance.” Ironically, shortly after, on March 7 the American Statistical Association (ASA) came out (independently) with a strong statement warning scientists about interpretation and misuse of p-values [2]...
In June we finally learned [4] that another 'one and a half' gravitational waves from Binary Black Hole mergers were also observed in 2015, where by the 'half' I refer to the October 12 event, which the collaboration strongly believes to be a gravitational wave, although having only 1.7 σ significance and therefore classified just as an LVT (LIGO-Virgo Trigger) instead of a GW. However, another figure of merit has been provided by the collaboration for each event, a number based on probability theory that tells how much we must modify the relative beliefs of two alternative hypotheses in the light of the experimental information. This number, to my knowledge never even mentioned in press releases or seminars to large audiences, is the Bayes factor (BF), whose meaning is easily explained: if you considered a priori two alternative hypotheses equally likely, a BF of 100 changes your odds to 100 to 1; if instead you considered one hypothesis rather unlikely, let us say your odds were 1 to 100, a BF of 10⁴ turns them the other way around, that is 100 to 1. You will be amazed to learn that even the "1.7 sigma" LVT151012 has a BF of the order of ≈ 10¹⁰, considered very strong evidence in favor of the hypothesis "Binary Black Hole merger" against the alternative hypothesis "Noise". (Alan Turing would have called the evidence provided by such a huge 'Bayes factor,' or what I. J. Good would have preferred to call the "Bayes-Turing factor" [5], 100 deciban, well above the 17 deciban threshold considered by the team at Bletchley Park during World War II to be reasonably confident of having cracked the daily Enigma key [7].)...
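The odds-updating rule and Turing's deciban unit described above can be sketched in a few lines (a hypothetical illustration, not code from the paper):

```python
import math

def update_odds(prior_odds, bayes_factor):
    """Posterior odds = prior odds x Bayes factor."""
    return prior_odds * bayes_factor

def decibans(bayes_factor):
    """Turing's unit of weight of evidence: 10 * log10(BF)."""
    return 10 * math.log10(bayes_factor)

# Two hypotheses a priori equally likely (odds 1:1), BF = 100 -> odds 100:1
print(update_odds(1.0, 100))    # 100.0
# Sceptical prior odds of 1:100, BF = 1e4 -> odds turned around to 100:1
print(update_odds(0.01, 1e4))   # ~100.0
# LVT151012: BF ~ 1e10 -> 100 decibans, far above the 17-deciban Bletchley threshold
print(decibans(1e10))           # 100.0
```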
Figure 3: The Monster (GW150914), Cinderella (LVT151012) and the third sister (GW151226), visiting us in 2015 (Fig. 1 of [4] – see text for the reason for the names). The published 'significance' of the three events (Table 1 of [4]) is, in order, "> 5.3 σ", "1.7 σ" and "> 5.3 σ", corresponding to the following p-values: 7.5 × 10⁻⁸, 0.045, 7.5 × 10⁻⁸. The logs of the Bayes factors are instead (Table 4 of [4]) approximately 289, 23 and 60, corresponding to Bayes factors of about 3 × 10¹²⁵, 10¹⁰ and 10²⁶

... even if at first sight it does not look dissimilar from GW151226 (but remember that the waves in Fig. 3 do not show raw data!), the October 12 event, hereafter referred to as Cinderella, is not ranked as a GW but, more modestly, as an LVT, for LIGO-Virgo Trigger. The reason for the downgrade is that 'she' cannot wear a "> 5σ dress" to go with the 'sisters' to the 'sumptuous ball of the Establishment.' In fact, Chance has assigned 'her' only a poor, unpresentable 1.7 σ ranking, usually considered in the Particle Physics community not even worth a mention in a parallel session of a minor conference by an undergraduate student. But, despite the modest 'statistical significance', experts are highly confident, for physics reasons* (and because of their understanding of the background), that this is also a gravitational wave radiated by a BBH merger, much more than the 87% quoted in [4]. [Detecting something that has good reason to exist, because of our understanding of the Physical World (related to a network of other experimental facts and theories connecting them!), is quite different from just observing an unexpected bump, possibly due to background, even if with small probability, as already commented in footnote 15. And remember that whatever we observe in real life, if seen with high enough resolution in the N-dimensional phase space, had very small probability to occur! (Imagine, as a simplified example, the pixel content of any picture you take walking down the road, in which N is equal to five, i.e. two coordinates plus the RGB code of each pixel.)]
Giulio D'Agostini (Submitted on 6 Sep 2016)


Will the first 5-sigma claim from LHC Run2 be a fluke?
In the meanwhile it seems that particle physicists are slow in learning the lesson, and the number of graves in the Cemetery of Physics ... has increased ..., the last funeral having been recently celebrated in Chicago on August 5, with the following obituary for the dear departed: "The intriguing hint of a possible resonance at 750 GeV decaying into photon pairs, which caused considerable interest from the 2015 data, has not reappeared in the much larger 2016 data set and thus appears to be a statistical fluctuation" [57]. And de Rujula's dictum gets corroborated. [If you disbelieve every result presented as having a 3 sigma, or 'equivalently' a 99.7% chance of being correct, you will turn out to be right 99.7% of the time. ('Equivalently' within quote marks is de Rujula's original, because he knows very well that there is no equivalence at all.)] Someone would argue that this incident happened because the sigmas were only about three and not five. But it is not a question of sigmas, but of Physics, as can be understood from those who in 2012 incorrectly turned the 5σ into a 99.99994% "discovery probability" for the Higgs [58], while in 2016 they are sceptical in front of a 6σ claim ("if I have to bet, my money is on the fact that the result will not survive the verifications" [59]): the famous "du sublime au ridicule, il n'y a qu'un pas" ("from the sublime to the ridiculous, there is but one step") seems really appropriate! ... 
Seriously, the question is indeed that, now that predictions of New Physics around what should have been a natural scale have substantially all failed, the only 'sure' scale I can see seems to be Planck's scale. I really hope that LHC will surprise us, but hoping and believing are different things. And, since I have the impression that there are too many nervous people around, both among experimentalists and theorists, and because the number of possible histograms to look at is quite large, after the easy bets of the past years (against the CDF peak and against superluminal neutrinos in 2011; in favor of the Higgs boson in 2011; against the 750 GeV di-photon in 2015; not to mention the one against Supersymmetry, going on since it failed to predict new phenomenology below the Z0 – or the W? – mass at LEP, thus inducing me more than twenty years ago to give away all the SUSY Monte Carlo generators I had developed in order to optimize the performance of the HERA detectors), I can serenely bet, as I have kept saying since July 2012, that the first 5-sigma claim from LHC will be a fluke. (I have instead little to comment on the sociology of the Particle Physics theory community and on the validity of 'objective' criteria to rank scientific value and productivity, the situation being self-evident from the hundreds of references in a review paper which even had on the front page a fake PDG entry for the particle [60], and other amenities you can find on the web, like [61].)
Id.

Bayesian anatomy of the 750 GeV fluke 
The statistical anomalies at about 750 GeV in ATLAS [1, 2] and CMS [3, 4] searches for a diphoton resonance (denoted in this text as F, for digamma) at √s = 13 TeV with about 3/fb caused considerable activity (see e.g., Ref. [5, 6, 7]). The experiments reported local significances, which incorporate a look-elsewhere effect (LEE, see e.g., Ref. [8, 9]) in the production cross section of the F, of 3.9σ and 3.4σ, respectively, and global significances, which incorporate a LEE in the production cross section, mass and width of the F, of 2.1σ and 1.6σ, respectively. There was concern, however, that an overall LEE, accounting for the numerous hypothesis tests of the SM at the LHC, cannot be incorporated, and that the plausibility of the F was difficult to gauge. 
Whilst ultimately the F was disfavoured by searches with about 15/fb [10, 11], we directly calculate the relative plausibility of the SM versus the SM plus F in light of ATLAS data available during the excitement, matching, wherever possible, parameter ranges and parameterisations in the frequentist analyses. The relative plausibility sidesteps technicalities about the LEE and the frequentist formalism required to interpret significances. We calculate the Bayes-factor (see e.g., Ref. [12]) in light of ATLAS data, 
Our main result is that, at its peak, the Bayes-factor was about 7.7 in favour of the F. In other words, in light of the ATLAS 13 TeV 3.2/fb and 8 TeV 20.3/fb diphoton searches, the relative plausibility of the F versus the SM alone increased by about eight. This was "substantial" on the Jeffreys scale [13], lying between "not worth more than a bare mention" and "strong evidence." For completeness, we calculated that this preference was reversed by the ATLAS 13 TeV 15.4/fb search [11], resulting in a Bayes-factor of about 0.7. Nevertheless, the interest in F models in the interim was, to some degree, supported by Bayesian and frequentist analyses. Unfortunately, CMS performed searches in numerous event categories, resulting in a proliferation of background nuisance parameters and making replication difficult without cutting corners or considerable computing power.
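The Jeffreys scale invoked above can be made concrete with a small helper (a hypothetical illustration, with thresholds at half-powers of ten following Jeffreys' convention; not code from the paper):

```python
def jeffreys_label(bf):
    """Qualitative strength of evidence for H1 over H0 on Jeffreys' scale."""
    if bf < 1:
        return "negative (supports H0)"
    if bf < 10 ** 0.5:   # BF < ~3.16
        return "not worth more than a bare mention"
    if bf < 10:
        return "substantial"
    if bf < 10 ** 1.5:   # BF < ~31.6
        return "strong"
    if bf < 100:
        return "very strong"
    return "decisive"

print(jeffreys_label(7.7))  # "substantial" -- the peak BF in favour of the F
print(jeffreys_label(0.7))  # "negative (supports H0)" -- after the 15.4/fb search
```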
Andrew Fowlie  (Submitted on 22 Jul 2016 (v1), last revised 6 Dec 2016 (this version, v2))

Taking the 2016 story of the three black-hole-merger Magi with a grain of salt?
The analysis of GW150914 shows that the initial black hole masses are 36 M⊙ and 29 M⊙ [1], which are heavier than the previously known stellar-mass black holes [2]. In the newly announced black hole merger event, GW151226 [3], the initial black hole masses are about 14 M⊙ and 8 M⊙, which fall into the known mass range of stellar black holes... It seems to make the picture of binary black hole mergers and gravitational wave observation more reliable, because the signals of GW150914 and GW151226 are extracted from noise by the same methods [4, 5].  
However, we notice that the response of a detector to a gravitational wave is a function of frequency. When the time a photon spends moving around in the Fabry-Perot cavities is of the same order as the period of a gravitational wave, the phase difference due to the gravitational wave should be an integral along the path. In fact, this propagation effect on the Michelson detector response was addressed, for example, in [6]. Unfortunately, the propagation effect on the Fabry-Perot detector response has not been considered properly. 
In the manuscript, we try to take into account the propagation effect of the gravitational wave and reexamine the LIGO data. We find that when the average time a photon stays in the Fabry-Perot cavities in the two arms is of the same order as the period of a gravitational wave, the phase difference of a photon between the two arms due to the gravitational wave may be cancelled. In the case of the observation of GW151226, the average time a photon stays in the detector is longer than the period of the gravitational wave at maximum gravitational radiation. When the propagation effect is taken into account, the claimed signal GW151226 almost disappears.
The green line in the top panel is the response of the detector to the best-fit template for GW151226 provided on the LIGO website [9]. When the propagation effect is taken into account, the detector response to the gravitational wave in the template takes the form of the blue line. The bottom panel presents the variation in time of the gravitational-wave frequency.
For LIGO detectors, the lengths of the Fabry-Perot cavities are L ≈ 4 km. On average, a photon makes 140 round trips in the cavities [1]. It will therefore move back and forth in the cavities for about 0.0037 s. In that period, a gravitational wave with frequency 268 Hz propagates a distance of one wavelength. Therefore, the above propagation effect should be taken into account in the analysis of GW151226, because the frequency of the peak gravitational strain is about 450 Hz (> 268 Hz). For low-frequency gravitational waves the propagation effect is small, so the signal for GW150914 is not affected much. 
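The arithmetic behind the quoted numbers can be checked directly (a simple order-of-magnitude sketch using only the figures given above):

```python
c = 2.998e8        # speed of light, m/s
L = 4.0e3          # Fabry-Perot arm length, m (~4 km)
round_trips = 140  # average number of photon round trips quoted above

# Photon storage time in the cavity: 140 round trips of 2L each
storage_time = round_trips * 2 * L / c  # ~0.0037 s

# Frequency of a gravitational wave whose period equals the storage time,
# i.e. that propagates one full wavelength while the photon is stored
f_one_wavelength = 1.0 / storage_time   # ~268 Hz

print(storage_time)
print(f_one_wavelength)
```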
It should be remarked that there is a subtle difference between the effect of a gravitational wave on the light traveling in a detector and the phase variation due to the vibration of the mirrors, which has been used in the calibration of LIGO's detectors [11], though both the vibration of the mirrors and the incidence of a gravitational wave modify the phase of light traveling in the cavities. The vibrations of the mirrors modify the phase of the light when the photons travel near the vibrating mirrors. The phase shift of the light beyond the vibration region will not be affected by the vibrating mirrors. In contrast, a gravitational wave affects the phase of the light at every place in the cavities. As a result, the average phase variation due to the vibrating mirrors does not vanish even when the time of a round trip of a photon in a cavity is the same as the period of the vibration of the end mirrors. But it does vanish in a gravitational-wave background when the time of a round trip of a photon is the same as the period of the gravitational wave. Therefore, the propagation effect of the gravitational wave is not included in the calibration, which is performed with the help of the vibrating mirrors.
Zhe Chang, Chao-Guang Huang, Zhi-Chao Zhao (Submitted on 6 Dec 2016)

mercredi 9 novembre 2016

[Today the world is trumper than yesterday! / Was yesterday's world less deceptive than today's?]

Yesterday forecast for the 2016 american presidential election
from projects.fivethirtyeight.com/2016-election-forecast

Today projection after the vote
Beware: the color labels for Trump and Clinton in the following are the opposite of those in the previous graphic!

from uselectionatlas.org/RESULTS (November 9)

Last comment (November 19)
Will Trump victory make the USA a more obvious plutocracy?
Here are the last (final?) results :

from uselectionatlas.org/RESULTS (November 19)



dimanche 6 novembre 2016

[There, is] plenty of room for new phases at high pressure [!,?]

No comment

Evidence for a new phase of dense hydrogen above 325 gigapascals
Philip Dalladay-Simpson, Ross T. Howie & Eugene Gregoryanz
Nature 529, 63–67 (07 January 2016)
Almost 80 years ago it was predicted that, under sufficient compression, the H–H bond in molecular hydrogen (H2) would break, forming a new, atomic, metallic, solid state of hydrogen. Reaching this predicted state experimentally has been one of the principal goals in high-pressure research for the past 30 years. Here, using in situ high-pressure Raman spectroscopy, we present evidence that at pressures greater than 325 gigapascals at 300 kelvin, H2 and hydrogen deuteride (HD) transform to a new phase—phase V. This new phase of hydrogen is characterized by substantial weakening of the vibrational Raman activity, a change in pressure dependence of the fundamental vibrational frequency and partial loss of the low-frequency excitations. We map out the domain in pressure–temperature space of the suggested phase V in H2 and HD up to 388 gigapascals at 300 kelvin, and up to 465 kelvin at 350 gigapascals; we do not observe phase V in deuterium (D2). However, we show that the transformation to phase IV′ in D2 occurs above 310 gigapascals and 300 kelvin. These values represent the largest known isotopic shift in pressure, and hence the largest possible pressure difference between the H2 and D2 phases, which implies that the appearance of phase V of D2 must occur at a pressure of above 380 gigapascals. These experimental data provide a glimpse of the physical properties of dense hydrogen above 325 gigapascals and constrain the pressure and temperature conditions at which the new phase exists. We speculate that phase V may be the precursor to the non-molecular (atomic and metallic) state of hydrogen that was predicted 80 years ago.


New low temperature phase in dense hydrogen: The phase diagram to 421 GPa
Ranga Dias, Ori Noked, Isaac F. Silvera
(Submitted on 7 Mar 2016 (v1), last revised 26 May 2016 (this version, v2))
In the quest to make metallic hydrogen at low temperatures a rich number of new phases have been found and the highest pressure ones have somewhat flat phase lines, around room temperature. We have studied hydrogen to static pressures of GPa in a diamond anvil cell and down to liquid helium temperatures, using infrared spectroscopy. We report a new phase at a pressure of GPa and T=5 K. Although we observe strong darkening of the sample in the visible, we have no evidence that this phase is metallic hydrogen.


No "Evidence for a new phase of dense hydrogen above 325 GPa"
Ranga P. Dias, Ori Noked, Isaac F. Silvera
(Submitted on 18 May 2016)
In recent years there has been intense experimental activity to observe solid metallic hydrogen. Wigner and Huntington predicted that under extreme pressures insulating molecular hydrogen would dissociate and transition to atomic metallic hydrogen. Recently Dalladay-Simpson, Howie, and Gregoryanz reported a phase transition to an insulating phase in molecular hydrogen at a pressure of 325 GPa and 300 K. Because of its scientific importance we have scrutinized their experimental evidence to determine if their claim is justified. Based on our analysis, we conclude that they have misinterpreted their data: there is no evidence for a phase transition at 325 GPa.




Nature of the Metallization Transition in Solid Hydrogen
Sam Azadi, N. D. Drummond, W. M. C. Foulkes
(Submitted on 2 Aug 2016)
Determining the metalization pressure of solid hydrogen is one of the great challenges of high-pressure physics. Since 1935, when it was predicted that molecular solid hydrogen would become a metallic atomic crystal at 25 GPa [1], compressed hydrogen has been studied intensively. Additional interest arises from the possible existence of room-temperature superconductivity [2], a metallic liquid ground state [3], and the relevance of solid hydrogen to astrophysics [4, 5].  
Early spectroscopic measurements at low temperature suggested the existence of three solid-hydrogen phases [4]. Phase I, which is stable up to 110 GPa, is a molecular solid composed of quantum rotors arranged in a hexagonal close-packed structure. Changes in the low-frequency regions of the Raman and infrared spectra imply the existence of phase II, also known as the broken-symmetry phase, above 110 GPa. The appearance of phase III at 150 GPa is accompanied by a large discontinuity in the Raman spectrum and a strong rise in the spectral weight of molecular vibrons. Phase IV, characterized by the two vibrons in its Raman spectrum, was discovered at 300 K and pressures above 230 GPa [6–8]. Another new phase has been claimed to exist at pressures above 200 GPa and higher temperatures (for example, 480 K at 255 GPa) [9]. This phase is thought to meet phases I and IV at a triple point, near which hydrogen retains its molecular character. The most recent experimental results [10] indicate that H2 and hydrogen deuteride at 300 K and pressures greater than 325 GPa transform to a new phase V, characterized by substantial weakening of the vibrational Raman activity. Other features include a change in the pressure dependence of the fundamental vibrational frequency and the partial loss of the low-frequency excitations.  
Although it is very difficult to reach the hydrostatic pressure of more than 400 GPa at which hydrogen is normally expected to metalize, some experimental results have been interpreted as indicating metalization at room temperature below 300 GPa [6]. However, other experiments show no evidence of the optical conductivity expected of a metal at any temperature up to the highest pressures explored [11]. Experimentally, it remains unclear whether or not the molecular phases III and IV are metallic, although it has been suggested that phase V may be non-molecular (atomic) [10]. Metalization is believed to occur either via the dissociation of hydrogen molecules and a structural transformation to an atomic metallic phase [6, 12], or via band-gap closure within the molecular phase [13, 14]. In this work we investigate the latter possibility using advanced computational electronic structure methods.
Structures of crystalline materials are normally determined by X-ray or neutron diffraction methods. These techniques are very challenging for low-atomic-number elements such as hydrogen [15]. Fortunately optical phonon modes disappear, appear, or experience sudden shifts in frequency when the crystal structure changes. It is therefore possible to identify the transitions between phases using optical methods.


(Submitted on 5 Oct 2016)
We have studied solid hydrogen under pressure at low temperatures. With increasing pressure we observe changes in the sample, going from transparent, to black, to a reflective metal, the latter studied at a pressure of 495 GPa. We have measured the reflectance as a function of wavelength in the visible spectrum, finding values as high as 0.90 from the metallic hydrogen. We have fit the reflectance using a Drude free-electron model to determine the plasma frequency of 30.1 eV at T = 5.5 K, with a corresponding electron carrier density of 6.7x10²³ particles/cm³, consistent with theoretical estimates. The properties are those of a metal. Solid metallic hydrogen has been produced in the laboratory.
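The quoted carrier density follows from the Drude plasma frequency via n = ε₀ mₑ ωₚ² / e²; a quick numerical check using standard constants (a sketch for the reader, not code from the paper) reproduces the order of magnitude:

```python
eps0 = 8.854e-12     # vacuum permittivity, F/m
m_e = 9.109e-31      # electron mass, kg
e = 1.602e-19        # elementary charge, C
hbar_eV = 6.582e-16  # reduced Planck constant, eV*s

# Plasma frequency in rad/s from the quoted 30.1 eV plasma energy
omega_p = 30.1 / hbar_eV

# Drude relation inverted for the carrier density
n_m3 = eps0 * m_e * omega_p**2 / e**2  # m^-3
n_cm3 = n_m3 * 1e-6                    # ~6.6e23 cm^-3, close to the quoted 6.7e23

print(n_cm3)
```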