Saturday 31 January 2015

First, a (too) spectacular claim, then a spectacularly {statistically} insignificant result!

First detection of inflationary gravitational waves probably did not occur in 2014

At the recombination epoch, the inflationary gravitational waves (IGW) contribute to the anisotropy of the CMB in both total intensity and linear polarization. The amplitude of tensors is conventionally parameterized by r, the tensor-to-scalar ratio at a fiducial scale. Theoretical predictions of the value of r cover a very wide range. Conversely, a measurement of r can discriminate between models of inflation. Tensor modes produce a small increment in the temperature anisotropy power spectrum over the standard [cosmological model] ΛCDM scalar perturbations at multipoles l ≲ 60; measuring this increment requires the large sky coverage traditionally achieved by space-based experiments, and an understanding of the other cosmological parameters. The effect of tensor perturbations on B-mode polarization is less ambiguous than on temperature or E-mode polarization over the range l ≲ 150...
Interstellar dust grains produce thermal emission, the brightness of which increases rapidly from the 100–150 GHz frequencies favored for CMB observations, becoming dominant at ≥ 350 GHz even at high galactic latitude. The dust grains align with the Galactic magnetic field to produce emission with a degree of linear polarization [16]. The observed degree of polarization depends on the structure of the Galactic magnetic field along the line of sight, as well as the properties of the dust grains (see for example Refs. [17, 18]). This polarized dust emission results in both E-mode and B-mode polarization, and acts as a potential contaminant to a measurement of r. Galactic dust polarization was detected by Archeops [19] at 353 GHz and by WMAP [2, 20] at 90 GHz.
BICEP2 was a specialized, low angular resolution experiment, which operated from the South Pole from 2010 to 2012, concentrating 150 GHz sensitivity comparable to Planck on a roughly 1 % patch of sky at high Galactic latitude [21]. The BICEP2 Collaboration published a highly significant detection of B-mode polarization in excess of the r=0 lensed-ΛCDM expectation over the range 30 < l<150 in Ref. [22...]. Modest evidence against a thermal Galactic dust component dominating the observed signal was presented based on the cross-spectrum against 100 GHz maps from the previous BICEP1 experiment. The detected B-mode level was higher than that projected by several existing dust models [23, 24] although these did not claim any high degree of reliability.  
The Planck survey released information on the structure of the dust polarization sky at intermediate latitudes [25], and the frequency dependence of the polarized dust emission at frequencies relevant to CMB studies [26]. Other papers argued that the BICEP2 region is significantly contaminated by dust [27, 28]. Finally Planck released information on dust polarization at high latitude [29, hereafter PIP-XXX], and in particular examined a field centered on the BICEP2 region (but somewhat larger than it) finding a level of polarized dust emission at 353 GHz sufficient to explain the 150 GHz excess observed by BICEP2, although with relatively low signal-to-noise. [...] 
In this paper, we take cross-spectra between the joint BICEP2/Keck maps and all the polarized bands of Planck. [...]


[Figure caption] Upper: BB spectrum of the BICEP2/Keck maps before and after subtraction of the dust contribution, estimated from the cross-spectrum with Planck 353 GHz. The error bars are the standard deviations of simulations, which, in the latter case, have been scaled and combined in the same way. The inner error bars are from lensed-ΛCDM+noise simulations as in the previous plots, while the outer error bars are from the lensed-ΛCDM+noise+dust simulations. Lower: constraint on r derived from the cleaned spectrum compared to the fiducial analysis shown in Figure 6.


[...] The r constraint curve peaks at r = 0.05 but disfavors zero only by a factor of 2.5. This is expected by chance 8% of the time, as confirmed in simulations of a dust-only model. We emphasize that this significance is too low to be interpreted as a detection of primordial B-modes. [...] 
In order to further constrain or detect IGW, additional data are required. The Planck Collaboration may be able to make progress alone using the large angular scale “reionization bump,” if systematics can be appropriately controlled [50]. To take small patch “recombination bump” studies of the type pursued here to the next level, data with signal-to-noise comparable to that achieved by BICEP2/Keck at 150 GHz are required at more than one frequency... During the 2014 season, two of the Keck Array receivers observed in the 95 GHz band and these data are under active analysis. BICEP3 will add substantial additional sensitivity at 95 GHz in the 2015, and especially 2016, seasons. Meanwhile many other ground-based and sub-orbital experiments are making measurements at a variety of frequencies and sky coverage fractions.
BICEP2/Keck and Planck Collaborations
30 January 2015
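The 353 GHz → 150 GHz dust extrapolation at the heart of this cleaning can be sketched with a modified-blackbody dust model. A minimal sketch, assuming the Planck PIP-XXX-style parameters β_d ≈ 1.59 and T_d ≈ 19.6 K (the function and constant names are mine, for illustration only):

```python
import math

H_OVER_K = 4.7992e-11   # h/k_B in s*K
T_CMB = 2.725           # CMB temperature, K
BETA_D, T_DUST = 1.59, 19.6  # modified-blackbody dust parameters (assumed)

def dust_amplitude_ratio(nu1, nu2):
    """Ratio of dust brightness, expressed in CMB thermodynamic
    temperature units, at frequency nu1 relative to nu2 (both in Hz)."""
    def intensity(nu):   # modified blackbody: I_nu ~ nu^beta * B_nu(T_dust)
        return nu**(BETA_D + 3) / math.expm1(H_OVER_K * nu / T_DUST)
    def dbdt(nu):        # dB_nu/dT at T_CMB, up to frequency-independent constants
        x = H_OVER_K * nu / T_CMB
        return nu**4 * math.exp(x) / math.expm1(x)**2
    return (intensity(nu1) / intensity(nu2)) * (dbdt(nu2) / dbdt(nu1))

r = dust_amplitude_ratio(150e9, 353e9)
print(f"amplitude factor 353->150 GHz ~ {r:.3f}")   # a few percent
print(f"BB power spectrum factor      ~ {r**2:.4f}")
```

The amplitude factor comes out at roughly 0.04–0.05, i.e. the dust BB power at 150 GHz is a few thousandths of that measured at 353 GHz, which is why the 353 GHz cross-spectrum can constrain the dust contamination of the BICEP2 band.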




Thursday 22 January 2015

2015: The Planck(-Bronstein) mass [concept] is more than 100 {or only 80} years old

History of science can teach us something
Here is a long excerpt from a text by the researcher in philosophy and history of science Gennady Gorelik (available online) which illustrates the statement in the title:

Planck introduced his cGh values in 1899, without any connection to quantum gravity. Quantum limits to the applicability of general relativity (and, implicitly, their Planck scale) were first discovered in 1935 by the Soviet theorist Matvey P. Bronstein (1906-1938). It was not until the 1950s that the explicitly quantum-gravitational significance of the Planck values was pointed out almost simultaneously by several physicists.[...] In the fifth installment of his continuing study of irreversible radiation processes (Planck 1899), Max Planck introduced two new universal physical constants, a and b, and calculated their values from experimental data. The following year, he redesignated the constant b by the famous letter h (and in place of a, he introduced k = b/a, the Boltzmann constant).
In 1899, the constant b (that is, h) did not yet have any quantum theoretical significance, having been introduced merely in order to derive Wien's formula for the energy distribution in the black-body spectrum. However, Planck had previously described this constant as universal. During the six years of his efforts to solve the problem of the equilibrium between matter and radiation, he clearly understood the fundamental, universal character of the sought-for spectral distribution. 
It was perhaps this universal character of the new constant that stimulated Planck, in that same paper of 1899, to consider a question that was not directly connected with the paper's main theme. The last section of the paper is entitled "Natural Units of Measure" ["Natürliche Maasseinheiten"]. Planck noted that in all ordinary systems of units, the choice of the basic units is made not from a general point of view "necessary for all places and times," but is determined solely by "the special needs of our terrestrial culture" (Planck 1899, p. 479). Then, basing himself upon the new constant h and also upon c and G, Planck suggested the establishment of

"units of length, mass, time, and temperature that would, independently of special bodies and substances, necessarily retain their significance for all times and all cultures, even extraterrestrial and extrahuman ones, and which may therefore be designated as natural units of measure." (Planck 1899, pp. 479-480)
[...]The quantum-gravitational meaning of the Planck values could be revealed only after a relativistic theory of gravitation had been developed. As soon as that was done, Einstein pointed out the necessity of unifying the new theory of gravitation with quantum theory. In 1916, having obtained the formula for the intensity of gravitational waves, he remarked:
"Because of the intra-atomic movement of electrons, the atom must radiate not only electromagnetic but also gravitational energy, if only in minute amounts. Since, in reality, this cannot be the case in nature, then it appears that the quantum theory must modify not only Maxwell's electrodynamics but also the new theory of gravitation." (Einstein 1916, p. 696).
For two decades after Einstein pointed out the necessity of a quantum-gravitational theory in 1916, only a few remarks about this subject appeared. There were too many other more pressing theoretical problems (including quantum mechanics, quantum electrodynamics, and nuclear theory). And, the remarks that were made were too superficial, which is to say that they assumed too strong an analogy between gravity and electromagnetism. For example, after discussing a general scheme for field quantization in their famous 1929 paper, Heisenberg and Pauli wrote:
"One should mention that a quantization of the gravitational field, which appears to be necessary for physical reasons, may be carried out without any new difficulties by means of a formalism wholly analogous to that applied here. (Heisenberg and Pauli 1929, p. 3)"
They grounded the necessity of a quantum theory of gravitation on Einstein's mentioned remark of 1916 and on Oskar Klein's remarks in an article of 1927 in which he pointed out the necessity of a unified description of gravitational and electromagnetic waves, one taking into account Planck's constant h. 
Heisenberg and Pauli obviously intended that quantization techniques be applied to the linearized equations of the (weak) gravitational field (obtained by Einstein in 1916). Being clearly approximative, this approach allows one to hope for an analogy with electromagnetism, but it also allows one to disregard some of the distinguishing properties of gravitation—its geometrical essence and its nonlinearity. Just such an approach was employed by Leon Rosenfeld, who considered a system of quantized electromagnetic and weak gravitational fields (Rosenfeld 1930), studying the mutual transformations of light and "gravitational quanta" (a term that he was the first to use). 
The first really profound investigation of the quantization of the gravitational field was undertaken by Matvey P. Bronstein. The essential results of his 1935 dissertation, entitled "The Quantization of Gravitational Waves," were contained in two papers published in 1936. The dissertation was mainly devoted to the case of the weak gravitational field, where it is possible to ignore the geometrical character of gravitation, that is, the curvature of space-time. However, Bronstein's work also contained an important analysis revealing the essential difference between quantum electrodynamics and a quantum theory of gravity not thus restricted to weak fields and "nongeometricness." This analysis demonstrated that the ordinary scheme of quantum field theory and the ordinary concepts of Riemannian geometry are not sufficient for the formulation of a consistent theory of quantum gravity. At the same time, Bronstein's analysis led to the limits of quantum-gravitational physics (and to Planck's cGh-values). [...] 
For two decades after Bronstein's work, there was calm in the field of quantum gravity. Only in the mid-1950s did the length l₀ = (Gh/c³)^1/2 appear almost simultaneously in a few different forms in a few papers. For example, in 1954, Landau pointed out that the length l = G^1/2 h/(ce) (= α^(-1/2) l₀, very near to the Planck length) is "the limit of the region outside of which quantum electrodynamics cannot be considered as a self-consistent theory because of the necessity of taking into account gravitational interactions" (Gm²/r ~ e²/r, when m ~ p/c ~ h/(lc)) (Abrikosov, Landau, Khalatnikov 1954).[...]
The term "Planck values," which is now generally accepted, was introduced later (Misner and Wheeler 1957). According to Wheeler, he did not know in 1955 about Planck's "natural units" (private communication).
A history of the Planck values provides interesting material for reflections on timely and premature discoveries in the history of science. Today, the Planck values are more a part of physics itself than of its history. They are mentioned in connection with the cosmology of the early universe as well as in connection with particle physics. In considering certain problems associated with a unified theory (including the question of the stability of the proton), theorists discovered a characteristic mass ~ 10^16 m_p (m_p is the proton mass). To ground such a great value, one first refers to the still greater mass 10^19 m_p. In the words of Steven Weinberg:
"This is known as the Planck mass, after Max Planck, who noted in 1900 that some such mass would appear naturally in any attempt to combine his quantum theory with the theory of gravitation. The Planck mass is roughly the energy at which the gravitational force between particles becomes stronger than the electroweak or the strong forces. In order to avoid an inconsistency between quantum mechanics and general relativity, some new features must enter physics at some energy at or below 1019 proton masses." (Weinberg 1981, p. 71).
The fact that Weinberg takes such liberties with history in this quotation is evidence of the need to describe the real historical circumstances in which the Planck mass arose. As we saw, when Planck introduced the mass (ch/G)^1/2 (~10^19 m_p) in 1899, he did not intend to combine the theory of gravitation with quantum theory; he did not even suppose that his new constant would result in a new physical theory. The first "attempt to combine the quantum theory with the theory of gravitation," which demonstrated that "in order to avoid an inconsistency between quantum mechanics and general relativity, some new features must enter physics," was made by Bronstein in 1935. That the Planck mass may be regarded as a quantum-gravitational scale was pointed out explicitly by Klein and Wheeler twenty years later. At the same time, Landau also noted that the Planck energy (mass) corresponds to an equality of gravitational and electromagnetic interactions.
by Gennady Gorelik (1992)
Studies in the history of general relativity. [Einstein Studies. Vol.3].
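The Planck values discussed in the excerpt are easy to check numerically. A minimal sketch using modern CODATA-style constants (note that Planck's 1899 definitions used h rather than ħ, which shifts the values by factors of √(2π); the variable names are mine):

```python
import math

# CODATA-style values in SI units (assumed inputs, not from the excerpt)
hbar = 1.0545718e-34    # reduced Planck constant, J s
c    = 2.99792458e8     # speed of light, m/s
G    = 6.67430e-11      # gravitational constant, m^3 kg^-1 s^-2
m_p  = 1.67262192e-27   # proton mass, kg
alpha = 1 / 137.035999  # fine-structure constant

l_planck = math.sqrt(hbar * G / c**3)   # Planck length
t_planck = l_planck / c                 # Planck time
m_planck = math.sqrt(hbar * c / G)      # Planck mass

print(f"l_P ~ {l_planck:.2e} m")        # ~1.6e-35 m
print(f"t_P ~ {t_planck:.2e} s")        # ~5.4e-44 s
print(f"m_P ~ {m_planck:.2e} kg")       # ~2.2e-8 kg
print(f"m_P/m_p ~ {m_planck/m_p:.2e}")  # ~1.3e19 proton masses

# Landau's 1954 length l = alpha^(-1/2) * l_P, slightly above the Planck length
l_landau = l_planck / math.sqrt(alpha)
print(f"Landau's l ~ {l_landau:.2e} m") # ~1.9e-34 m
```

The last ratio is the ~10^19 m_p figure quoted by Weinberg and Gorelik above.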


To know more about Matvei Bronstein (an Ettore Majorana who came in from the cold, so to speak):

Monday 19 January 2015

What does the effective number of neutrinos really measure in the standard, so-called concordance, cosmological model?

A blogger's answer and expert comments
Yet another post about neutrinos! Yes, it's starting to add up. We talked about them in the previous post, and longer ago here or there; but what can you do, dear readers: to date they are the only detectable particles whose properties let us probe physics beyond the Standard Model, in short the only reliable messengers carrying new(s of) physics to sink our teeth into (for the moment, of course). So it is always worth coming back to them, particularly when reading a specialized blog post (and its comments) like the one below lets us clearly uncover the implicit hypotheses, not always made very explicit, behind the models that explain certain experimental data:
One interesting result [from the Planck satellite] is the new improved constraint on the effective number of neutrinos, Neff in short. The way this result is presented may be confusing. We know perfectly well there are exactly 3 light active (interacting via the weak force) neutrinos; this was established in the 90s at the LEP collider, and Planck has little to add in this respect. Heavy neutrinos, whether active or sterile, would not show in this measurement at all. For light sterile neutrinos, Neff implies an upper bound on the mixing angle with the active ones. The real importance of Neff lies in that it counts any light particles (other than photons) contributing to the energy density of the universe at the time of CMB decoupling. Beyond the standard model neutrinos, other theorized particles could contribute any real positive number to Neff, depending on their temperature and spin. A few years ago there were consistent hints of Neff much larger than 3, which would imply physics beyond the standard model. Alas, Planck has shot down these claims. The latest number combining Planck and Baryon Acoustic Oscillations is Neff = 3.04 ± 0.18, spot on the 3.046 expected from the standard model neutrinos. This represents an important constraint on any new physics model with very light (less than eV) particles.
Jester, Blog Résonaances,  Saturday, 13 December 2014
Anonymous comment:
Note that the standard value of N_eff = 3.046 for 3 active neutrinos relies on several assumptions:
*) 3 active neutrinos in the Standard Model
*) No partly thermalised light species
*) Reheating happened above ~4 MeV
*) No entropy production between 1 MeV and today
*) No cooling of photons between 1 MeV and today (e.g. through Dark Sector mixing)
Only the first assumption was verified by LEP, so there was plenty of room for N_eff to be different from 3.046.

Jester reply:
I completely agree, maybe except that "Reheating happened above ~4 MeV" is independently confirmed by nucleosynthesis. I didn't mean that Neff is not useful, on the contrary. I meant that it is often presented as a measurement of the number of neutrinos, which may be misleading.
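The bookkeeping behind Neff, as described in the exchange above, is the standard radiation-density formula ρ_rad = ρ_γ [1 + (7/8)(4/11)^(4/3) Neff]: any extra light relic adds to Neff according to its spin statistics and its temperature relative to the neutrinos. A minimal illustrative sketch (the function name is mine, not from either blog):

```python
def delta_neff(g, temp_ratio, fermion=False):
    """Extra contribution to Neff from one light relic with g internal
    degrees of freedom at temperature T_x = temp_ratio * T_nu.
    One standard neutrino species (nu + nubar, fermionic, g=2) defines
    the unit, so a boson contributes (4/7)*g*(T_x/T_nu)^4 and a fermion
    (1/2)*g*(T_x/T_nu)^4."""
    weight = 0.5 if fermion else 4.0 / 7.0
    return weight * g * temp_ratio**4

# A fully thermalised sterile neutrino (g=2, T_x = T_nu) adds exactly 1
print(delta_neff(2, 1.0, fermion=True))   # 1.0

# A decoupled light boson (g=1) at T_x = 0.5 T_nu adds only a few percent
print(delta_neff(1, 0.5))                 # ~0.036
```

This is why Neff "counts any light particles" rather than neutrinos per se: a relic colder than the neutrinos contributes a fractional amount, which is what allows contributions that are "any real positive number".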

While waiting for the hare to come back, let us keep listening to what the tortoise has to tell us
In other words, let us patiently deepen our knowledge of the role of neutrinos in the astrophysics of the early universe, as seen through our modern spectro-telescopes, collectors and analysers of photons that have travelled for more than ten billion years... before the particle physicists restart their giant accelerator, hoping to quickly find answers to their questions in the collision-disintegration of a few quark-antiquark or gluon-gluon pairs in a fraction of a second (or almost).
Physics Beyond the Standard Models (BSMs), i.e. beyond the Electro-Weak Model and beyond the Standard Cosmological Model (also called the ΛCDM, Λ Cold Dark Matter, model), is required for the explanation of the astrophysical and cosmological observational data. Namely, the contemporary SCM contains considerable BSM components: the so-called dark energy (DE) and dark matter (DM), both of as yet unrevealed nature, alas. These constitute 96% of the universe's matter today, and play a considerable role in the matter-dominated epoch, i.e. at the later stages of the Universe's evolution!
BSM physics is needed also for revealing the nature and the characteristics of the inflaton (the particle/field responsible for the inflationary expansion stage) and of the CP-violation (CPV) and/or B[aryon number]-violation (BV) mechanisms. These are necessary ingredients in the theories of inflation and baryon asymmetry generation, which are today the most widely accepted hypotheses providing natural explanations of numerous intriguing observational characteristics of our universe.
The inflationary theory explains naturally and elegantly the initial conditions of the universe in the pre-Friedmann epoch, namely: the extraordinary homogeneity and isotropy at large scales of the universe at its present epoch; its unique isotropy at the Cosmic Microwave Background (CMB) formation epoch (when the universe was ∼ 380000 years old); its unique flatness; and the pattern of structures it has. Besides, the inflationary early stage explains the lack of topological defects in the universe. Meanwhile, the baryon asymmetry generation models explain the locally observed matter-antimatter asymmetry of the universe.
...we have already been the lucky witnesses of the experimental establishment of BSM physics in the neutrino sector. Experimental data on neutrino oscillations have firmly determined three neutrino mixing angles and two mass-squared differences, corresponding to the existence of at least two non-zero neutrino masses. The concrete neutrino mass pattern and a possible CPV mechanism are to be detected in the near future. Thus, the neutrino experimental data ruled out the Standard Model's assumptions of zero neutrino masses and mixing and of flavor lepton number (L) conservation. Cosmology provides complementary knowledge about neutrinos and BSM physics in the neutrino sector, because neutrinos had a considerable influence on the processes during different epochs of the universe's evolution. At the hot early stage of the universe, the radiation-dominated (RD) stage, light neutrinos were essential ingredients of the universe's density, determining the dynamics of the universe.
Neutrinos also played an essential role in different processes, for example Big Bang nucleosynthesis (BBN). In particular, electron neutrinos participated in the pre-BBN neutron-proton transitions that took place during the first seconds, and in the freezing of the nucleons, and thus they considerably influenced the primordial production of the light elements (BBN) during the first minutes of the universe. Hence, BBN is very sensitive to the number of light neutrino types, the neutrino characteristics, the neutrino chemical potentials, the possible presence of a sterile neutrino, etc. BBN is also capable of differentiating between neutrino flavors, because νe participates in the proton-neutron transitions of the pre-BBN epoch, essential for the yields of the primordially produced elements, while νµ and ντ exert no kinetic effect on BBN.
At later stages of the universe's evolution (T < eV), relic neutrinos, contributing to the matter density, influenced the CMB anisotropies and played a role in the formation of galaxies and their structures. CMB and Large Scale Structure (LSS) data, being sensitive to the total neutrino density, provide information about the neutrino masses and the number of neutrino species. Hence, although the relic neutrinos, called the Cosmic Neutrino Background (CNB), are not yet directly detected, strong observational evidence for the CNB and stringent cosmological constraints on relic neutrino characteristics exist from BBN, CMB and LSS data. In particular, the determinations of light element abundances and BBN theory predictions are used to put stringent constraints on neutrino characteristics (the effective number of relativistic particles, lepton asymmetry, sterile neutrino characteristics, neutrino mass differences and mixings), while CMB and LSS data provide constraints on neutrino masses and the neutrino number density at the CMB and LSS formation epochs.
Daniela Kirilova (Submitted on 7 Jul 2014)
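The BBN sensitivity to neutrinos described in the excerpt can be illustrated with a classic back-of-envelope estimate of the helium-4 mass fraction from neutron-proton freeze-out. A minimal sketch, with the freeze-out temperature and BBN onset time taken as rough assumed inputs (not values from the excerpt):

```python
import math

DELTA_M = 1.293   # neutron-proton mass difference, MeV
T_FREEZE = 0.8    # approximate weak freeze-out temperature, MeV (assumed)
TAU_N = 879.4     # free neutron lifetime, s
T_BBN = 250.0     # rough time when the deuterium bottleneck breaks, s (assumed)

# Equilibrium n/p ratio frozen in when the weak rates drop below the expansion rate
n_over_p = math.exp(-DELTA_M / T_FREEZE)   # ~0.2, i.e. about 1/5
# Free neutron decay between freeze-out and the onset of nucleosynthesis
n_over_p *= math.exp(-T_BBN / TAU_N)       # ~0.15
# Nearly all surviving neutrons end up in helium-4
Y_p = 2 * n_over_p / (1 + n_over_p)
print(f"Yp ~ {Y_p:.2f}")                   # ~0.26, near the observed ~0.25

# A larger Neff speeds up the expansion, raising T_FREEZE and hence n/p and Yp,
# which is why the primordial helium abundance constrains the number of
# light neutrino types.
```

The estimate is crude (it ignores the detailed nuclear reaction network), but it captures why the electron-neutrino-driven n↔p transitions mentioned above control the primordial helium yield.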